Exporting Your Queries’ M Code From Power Query And Power BI Using Copy/Paste

Two years ago I blogged about a method to export all the M code for all of your queries in Power Query using the Send A Frown button – useful if you need the code for documentation purposes. That trick doesn’t work with Power BI Desktop, unfortunately, but the good news is that there’s now a better way to do this in both Power Query and Power BI Desktop using copy/paste. It’s pretty simple really: when you copy a query from the Power Query or Power BI Desktop Query Editor you can not only paste it into another Query Editor (pasting from Power Query to Power BI Desktop and vice versa works too), you can also paste it into a text editor like Notepad and get the M code for the query. What’s more, you can select more than one query in the Query Editor and when you paste you get the code for all of the selected queries.

Remember that because the properties of each step in a query become comments in your M code, they get copied too.
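For example, a single pasted query might look something like this (a hypothetical query, not one from the post – the table name and step are made up); note how the description entered in a step’s properties appears as a comment above that step:

let
    Source = Excel.CurrentWorkbook(){[Name="SalesTable"]}[Content],
    // Remove any rows where the Sales value is 100 or less
    #"Filtered Rows" = Table.SelectRows(Source, each [Sales] > 100)
in
    #"Filtered Rows"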

Thanks to VossF for telling me about this on this thread.

The List.* M Functions And The equationCriteria Argument

Last week a reader left a comment on my blog asking what the third argument of the List.Contains() M function (somewhat cryptically called equationCriteria) does. I didn’t know, so I did some research and found out that lots of the List.* functions have the same argument. The documentation for List.Distinct() has a few examples but no real explanation of how they work. It also says:

For more information about equationCriteria, see Parameter Values.

…but there’s no link or indication where the documentation on ‘Parameter Values’ can be found. A bit more digging led me to the bottom of this page:

Equation criteria

Equation criteria for list values can be specified as either:

  • A function value that is either:

    • A key selector that determines the value in the list to apply the equality criteria to, or

    • A comparer function that is used to specify the kind of comparison to apply. Built-in comparer functions can be specified – see the section on comparer functions.

  • A list value which has:

    • Exactly two items

    • The first element is the key selector as specified above

    • The second element is a comparer as specified above.

Still not exactly helpful, is it? After a bit of testing, though, I think I’ve worked out what’s possible with the equationCriteria argument, and this blog post will, I hope, help any future M coders who are struggling with the same question. Let’s see some examples…

The basics

First of all, the basics. The following expression using List.Contains() returns TRUE because the text value “apples” appears in the list {“apples”, “oranges”, “pears”}:

List.Contains({"apples", "oranges", "pears"}, "apples")

The following returns FALSE because the text value “grapes” does not appear in the list {“apples”, “oranges”, “pears”}:

List.Contains({"apples", "oranges", "pears"}, "grapes")

However, there are lots of different ways that text values can be compared, and the equationCriteria argument allows you to specify which rules to follow.

Case sensitivity and culture

If you’ve written any M code you’ll know that it is case sensitive. As a result, the following returns FALSE:

List.Contains({"apples", "oranges", "pears"}, "Apples")

What happens if you want to do a case-insensitive comparison, though? This is where comparer functions come in. The Comparer.FromCulture() function returns a function that compares two values according to the rules of a given culture or locale, optionally ignoring case, and the function it returns can be used in the equationCriteria argument. The following example returns TRUE:

List.Contains(
	{"apples", "oranges", "pears"}, 
	"Apples", 
	Comparer.FromCulture("en-GB", true)
)

In this case Comparer.FromCulture("en-GB", true) returns a function that compares two values for the English – Great Britain culture (for a full list of culture values, see the Language Tag column of the table on this page); the second, optional argument here makes the function ignore case when making the comparison. The function that Comparer.FromCulture() returns is then used by List.Contains() to make the comparison.
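Incidentally, you can prove to yourself that Comparer.FromCulture() really does return a function by invoking the result directly. As far as I can tell comparer functions follow the .NET convention of returning 0 when two values are considered equal, so the following returns 0:

Comparer.FromCulture("en-GB", true)("apples", "Apples")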

Rather than specifying a culture yourself you can also use Culture.Current, which returns the current system culture as a text value. For me, Culture.Current returns the value “en-GB” because I live in Great Britain and have my PC configured to use a British English locale.

The following example shows how Culture.Current can be used with Comparer.FromCulture and also returns TRUE, at least for me:

List.Contains(
	{"apples", "oranges", "pears"}, 
	"Apples", 
	Comparer.FromCulture(
		Culture.Current, 
		true
	)
)

If you’re curious to see an example where different cultures produce different results, here’s one I stole from this article on string comparisons and sorting in .NET. Apparently in English the character æ is treated the same as the combination of the two characters ae, but this is not the case in Danish. As a result the following returns TRUE:

List.Contains(
	{"aepples", "oranges", "pears"}, 
	"æpples", 
	Comparer.FromCulture(
		"en-GB", 
		true
	)
)

Whereas this returns FALSE:

List.Contains(
	{"aepples", "oranges", "pears"}, 
	"æpples", 
	Comparer.FromCulture(
		"da-DK", 
		true
	)
)

Ordinal comparisons

If you don’t want all the uncertainty of cultures and case sensitivity you can just make an ordinal comparison, which compares two strings by finding the Unicode character value for each character in each string and comparing those values. To do this you can use the Comparer.Ordinal() function. The following returns FALSE:

List.Contains(
	{"apples", "oranges", "pears"}, 
	"Apples", 
	Comparer.Ordinal
)

…because “a” is not the same Unicode character as “A”, and so “apples” and “Apples” are not treated as equal.
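Incidentally, if you want an ordinal comparison that ignores case, there is also a Comparer.OrdinalIgnoreCase function (assuming the version of M you’re using has it); as far as I can see the following returns TRUE:

List.Contains(
	{"apples", "oranges", "pears"},
	"Apples",
	Comparer.OrdinalIgnoreCase
)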

Custom comparer functions

As the documentation hints, you can also write your own function to do the comparison. A comparer function is just – as far as I can see – a function that takes two arguments and returns a logical value. Here’s an example of a custom function that takes two text values, x and y, and returns true if the first three characters of x are the same as y:

(x as text, y as text)=>Text.Start(x,3)=y

It can be used with List.Contains() as in the following example, which returns TRUE:

List.Contains(
	{"apples", "oranges", "pears"},
	"app",
	(x as text, y as text)=>Text.Start(x,3)=y
)

What must be happening here is that the function is called once for each value in the list {“apples”, “oranges”, “pears”}, with each list value passed to the x argument and “app” passed to y for every call; because the first three characters of “apples” are “app”, the function returns true for that value, and so List.Contains() returns TRUE.

Key selectors

If you’re working with a list of records you might only want to do the comparison on one field in each record, and this is what key selectors allow you to do. The following example, which returns TRUE:

List.Contains(
	{[Fruit="apples", Colour="Red"],
	 [Fruit="oranges", Colour="Orange"],
	 [Fruit="pears", Colour="Green"]},
	[Fruit="apples", Colour="Russet"],
	each [Fruit]
)

…does so because it only compares the Fruit field in each record, and the Fruit fields in [Fruit=”apples”, Colour=”Red”] and [Fruit=”apples”, Colour=”Russet”] are indeed the same. However, the following example returns FALSE:

List.Contains(
	{[Fruit="apples", Colour="Red"],
	 [Fruit="oranges", Colour="Orange"],
	 [Fruit="pears", Colour="Green"]},
	[Fruit="apples", Colour="Russet"],
	each [Colour]
)

…because the Colour “Russet” does not appear anywhere in the Colour field of any of the records in the first parameter.
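Incidentally, the each [Fruit] syntax used as the key selector here is just M’s shorthand for a one-parameter function that returns the Fruit field of whatever record is passed to it, so you could also write the key selector out in full like this:

(r) => r[Fruit]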

Combining key selectors and comparison functions

Finally, as the documentation suggests, you can combine the above methods of comparison by passing a list containing two items to equationCriteria: the first item in the list must be a key selector, the second must be a comparer function. For example, the following returns TRUE:

List.Contains(
	{[Fruit="apples", Colour="Red"],
	 [Fruit="oranges", Colour="Orange"],
	 [Fruit="pears", Colour="Green"]},
	[Fruit="Apples", Colour="Russet"],
	{each [Fruit], Comparer.FromCulture("en-GB", true)}
)

…because it only looks at the Fruit field of each record, and it does a case-insensitive comparison using the en-GB culture, so “apples” and “Apples” are equal.

Record.AddField(), Functions And The Delayed Option In M

Today I was looking at the Record.AddField() M function and saw it had a mysterious – and badly documented – optional fourth argument called delayed. Of course I had to work out what it did, and having done so I thought I’d blog about it in case someone found it useful (and however obscure the topic, there’s always someone out there who finds this stuff useful).

Imagine you have an M function called ReturnANumber that has no arguments at all and returns the number 999:

() as number => 999

Now imagine that you want to return this function in a record (which is exactly what #shared does). The following expression:

[MyFunction = ReturnANumber]

…returns a record with one field whose value is of type function.

But what if you wanted the field to contain the number that the function returns, not a reference to the function itself? The delayed option of Record.AddField() allows you to do this: if you set it to true, you get the value the function returns.

Record.AddField([], "MyFunction", ReturnANumber, true)

Setting delayed to false does the same as the first example above:

Record.AddField([], "MyFunction", ReturnANumber, false)

Another way to get the same result as setting delayed to true is to use Function.Invoke():

[MyFunction = Function.Invoke(ReturnANumber,{})]

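To see both behaviours side by side, here’s a sketch (with ReturnANumber defined inline this time) that you can paste into a blank query; both items in the resulting list are 999:

let
    ReturnANumber = () as number => 999,
    // delayed=true: the field contains the number the function returns
    WithDelayed = Record.AddField([], "MyFunction", ReturnANumber, true),
    NumberValue = WithDelayed[MyFunction],
    // delayed=false: the field contains the function itself, so invoke it to get the number
    WithoutDelayed = Record.AddField([], "MyFunction", ReturnANumber, false),
    NumberFromFunction = WithoutDelayed[MyFunction]()
in
    {NumberValue, NumberFromFunction}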

Now I need to think of a real-world use for this…

Web.Contents(), Caching And The ExcludedFromCacheKey Option In Power BI And Power Query

When you’re using the Web.Contents() M function to call a web service from Power Query or Power BI, you don’t necessarily get one HTTP request each time you call the function: some caching takes place, so that if you make the same request multiple times your query won’t waste time asking for the same data over and over. In this post I’m going to share the results of some tests I made to show how caching works with Web.Contents() and what factors influence it.

For my tests I built a simple web service in Microsoft Flow, similar to the one I blogged about here, that accepts an HTTP POST request and calls a stored procedure in an Azure SQL Database. The stored procedure updates a table in the database, which in turn allows me to count the number of times the web service is called. Finally, the web service returns the value 0 if the stored procedure has executed successfully.

This web service can then be called from either Power Query or Power BI using the Web.Contents() function, something like this (because the URL for the web service is very long I stored it in a parameter called WebServiceURL):

let
    Source = Web.Contents(
	WebServiceURL,
	[Content=Text.ToBinary("Hello")]
	),
    #"Imported JSON" = Json.Document(Source,1252)
in
    #"Imported JSON"

The output of the query, when run in Power Query and loaded to an Excel table, is simply the value 0 returned by the web service.

The first important thing to point out is that the above query, when refreshed in the latest versions of Power Query (I’m running Excel 2016 build 7571.2109) and Power BI (build 2.41.4581.361, the November 2016 release), results in a single call to the web service. It might seem like I’m stating the obvious, but in the past I’ve seen plenty of cases where a data source was queried multiple times by Power Query/Power BI even when I was only expecting it to be queried once.

Now, let’s look at a query that calls this web service several times. Here’s the query above converted to a function called fnCallWebService:

() => 
let
    Source = Web.Contents(
	WebServiceURL,
	[Content=Text.ToBinary("Hello")]),
    #"Imported JSON" = Json.Document(Source,1252)
in
    #"Imported JSON"

Here’s a query that calls this function once for each row of a table in the workbook called MyTable, which has a single numeric column called Row and four rows:

let
    Source = Excel.CurrentWorkbook(){[Name="MyTable"]}[Content],
    #"Changed Type" = Table.TransformColumnTypes(
	Source,
	{{"Row", Int64.Type}}),
    #"Invoked Custom Function" = Table.AddColumn(
	#"Changed Type", 
	"fnCallWebService", 
	each fnCallWebService())
in
    #"Invoked Custom Function"

In the query above I used the Invoke Custom Function button to call the function for each row in the source table and put the value returned by the function in a new column.

Even though the function is called four times, once for each row in the original table, that does not mean that the web service is called four times – it isn’t, it’s only called once. In this case Power BI/Power Query knows that each of the four calls to the function is making an identical request to the web service and so it only goes to the web service once, and thereafter uses a cached result the other three times.

One way to stop this caching from taking place is to add an HTTP header to the request to the web service and pass a different value to that header for each call. Here’s another version of my function, now called fnCallWebServiceWithHeaders, which this time takes a number as a parameter and then passes that number to the web service via a header called MyHeader:

(RowNum as number) =>
let
    Source = Web.Contents(
	WebServiceURL,
	[Content=Text.ToBinary("Hello"), 
	Headers=[MyHeader=Text.From(RowNum)]]),
    #"Imported JSON" = Json.Document(Source,1252)
in
    #"Imported JSON"

Now if I call this function for each row of the table, and for each call pass the value in the [Row] column through to the function like so:

let
    Source = Excel.CurrentWorkbook(){[Name="MyTable"]}[Content],
    #"Changed Type" = Table.TransformColumnTypes(
	Source,
	{{"Row", Int64.Type}}),
    #"Invoked Custom Function" = Table.AddColumn(
	#"Changed Type", 
	"fnCallWebServiceWithHeaders", 
	each fnCallWebServiceWithHeaders([Row]))
in
    #"Invoked Custom Function"

…the web service gets hit four times. The presence of a different value for the MyHeader header in each request is enough to prevent any caching from taking place.

It is possible, however, to get Power BI/Power Query to ignore one or more headers when working out whether caching should take place using the ExcludedFromCacheKey option in Web.Contents(). Here’s one more version of my function, now called fnCallWebServiceWithHeadersExlCache, which uses this option:

(RowNum as number) => 
let
    Source = Web.Contents(
	WebServiceURL,
	[Content=Text.ToBinary("Hello"), 
	Headers=[MyHeader=Text.From(RowNum)], 
	ExcludedFromCacheKey={"MyHeader"}]),
    #"Imported JSON" = Json.Document(Source,1252)
in
    #"Imported JSON"

The ExcludedFromCacheKey option takes a list of text values which represent the names of headers that are to be ignored when considering which requests can be cached. In the example above my list contains just the one header, MyHeader, and when this is used in a query like so:

let
    Source = Excel.CurrentWorkbook(){[Name="MyTable"]}[Content],
    #"Changed Type" = Table.TransformColumnTypes(
	Source,
	{{"Row", Int64.Type}}),
    #"Invoked Custom Function" = Table.AddColumn(
	#"Changed Type", 
	"fnCallWebServiceWithHeaders", 
	each fnCallWebServiceWithHeadersExlCache([Row]))
in
    #"Invoked Custom Function"

…even though the function is called four times, once for each row in the table, and even though each call to Web.Contents() passes a different value to the MyHeader header, only one request is made to the web service and the three subsequent calls are answered from the cache again.

In summary, if you’re calling a web service multiple times in a query and especially if you’re using the Headers option in Web.Contents(), this is important stuff to understand because caching can make a big difference to the performance of your queries.

First Thoughts On The Integration Of Power Query/M Into Analysis Services Tabular

Last Friday’s big news was the release of the first CTP for Analysis Services v.next. Among several major new pieces of functionality (Ragged hierarchies! Drillthrough that works properly, even for calculations! Table-level security!) probably the biggest is the integration of Power Query/M into Analysis Services. As you can probably guess, I’m incredibly pleased that my two favourite technologies have got together. The technical details are given in this blog post, which I suggest you read if you haven’t done so already, but what I think is missing is an explanation of why this is so important and what kind of opportunities it opens up – hence this post. Of course this is just my take on the subject and not what Microsoft may actually be thinking; it’s also very early days, so my opinions may change as the functionality develops and I have more chance to think about it. If you have any ideas on this subject I would be interested to hear them, so please leave a comment!

Why this had to happen: Power BI

There is an obvious reason why Microsoft decided to integrate Power Query/M into SSAS, and that is because it needs to support the conversion of Power BI models into Analysis Services Tabular models. There are two scenarios where this will be necessary.

The first is the ability to convert a Power BI model into an Azure Analysis Services Tabular model (listed as ‘planned’ here), something that will be a key selling point for Azure Analysis Services when it releases. The engine behind Power BI is essentially the same as the one used in Analysis Services, so migrating the data model should be straightforward; but since Power BI uses Power Query/M to load data, a migrated Azure Analysis Services model will also have to use Power Query/M.

The second scenario is similar to the first. We now know that on-premises Power BI will be delivered through Reporting Services v.next, and it’s reasonable to assume Reporting Services will need a database engine to store the data for published Power BI reports. That engine will have to be an Analysis Services instance of some kind (either standalone or running in-process inside Reporting Services) and again for that to work Analysis Services will have to support the same data access mechanisms as Power BI.

Better support for a larger number of data sources

I’ve just argued why Microsoft was obliged to include this functionality in SSAS v.next but in fact there are many positive reasons for doing this too. The most obvious one is to do with support for more data sources. At the moment SSAS Tabular supports a pretty good range of data sources, but the world of BI is getting more and more diverse and in order to stay relevant SSAS needs to support far more than it does today. By using Power Query/M as its data access mechanism, SSAS v.next will immediately support a much larger number of data sources and this number is going to keep on growing: any investment that Microsoft or third parties make for Power BI in this area will also benefit SSAS. Also, because Power Query/M can query and fold to more than just relational databases, I suspect that in the future this will allow for DirectQuery connections to many of these non-relational data sources too.

Different data sources for partitions in the same table

Another benefit of this change is that we’ll have a lot more flexibility with partitioning tables in an SSAS Tabular model. As the blog post says:

As long as a partition’s M query adheres to the column mappings of the table, you are free to perform any transformations and pull in data from any data source defined in the model.

In SSAS 2016 the partitions in a table all have to get data from the same data source, whereas in v.next we’ll be able to get data from different data sources in different partitions, and this opens up some interesting new possibilities. For example, I can imagine a simple budgeting application where the partitions in a table get data from different Excel workbooks stored in OneDrive for Business, and where each partition gets processed automatically when changes are saved to one of these workbooks.
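To illustrate the idea, here’s a purely hypothetical sketch (the file paths, table names and column names are all made up) of the M queries that two partitions of the same table might use; note that both return the same Month/Amount columns:

//Partition 1
let
    Source = Excel.Workbook(File.Contents("C:\Budgets\BudgetQ1.xlsx"), true),
    Budget = Source{[Item="Budget", Kind="Table"]}[Data],
    ChangedType = Table.TransformColumnTypes(Budget,
        {{"Month", type text}, {"Amount", type number}})
in
    ChangedType

//Partition 2 – a different workbook, but the same column mappings
let
    Source = Excel.Workbook(File.Contents("C:\Budgets\BudgetQ2.xlsx"), true),
    Budget = Source{[Item="Budget", Kind="Table"]}[Data],
    ChangedType = Table.TransformColumnTypes(Budget,
        {{"Month", type text}, {"Amount", type number}})
in
    ChangedType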

Does this replace SSIS and my data warehouse? 

The short answer is no. Power Query/M is not a full-featured ETL tool and I don’t think it ever will be; it certainly does not have the kind of functionality needed to perform enterprise-level ETL right now. My view is that Microsoft have built Power Query/M into SSAS for the reasons above and not to encourage enterprise SSAS users to do their own quick-and-dirty ETL when loading data (although there is a risk that that will happen anyway). That said, I think the dividing line between corporate and self-service BI will become increasingly blurred over the next few years as the Microsoft BI stack develops, and we’ll see Analysis Services being used in self-service scenarios as well as the more traditional corporate ones.

Centralised data source objects

One last thing to point out is that the way SSAS v.next makes a distinction between data sources and other queries is very interesting. In Power BI and Power Query it’s easy to end up with data source connection information duplicated across multiple queries unless you know what you’re doing, and this can cause no end of problems later on in a project. As far as I can see, in SSAS v.next a “data source object” is an M query that only contains the connection to external data, while all other queries have to reference a data source to be able to access external data. This means, as the blog post says:

Referring to data source objects helps to centralize data source settings for multiple queries and simplifies deployments and maintenance if data source definitions must be updated later on. When updating a data source definition, all M queries that refer to it automatically use the new settings.
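To make that concrete, here’s my own hypothetical sketch of the pattern (not code from the CTP): the data source object is a query containing nothing but the connection, and other queries reference it to get at the data:

//The data source object: contains only the connection
//(let's call this query MyDataSource)
Sql.Database("MyServer", "MyDatabase")

//A query that references the data source object
let
    Source = MyDataSource,
    Sales = Source{[Schema="dbo", Item="Sales"]}[Data]
in
    Sales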

I wonder whether this concept is coming to Power BI and Power Query at some point? I hope so – it makes a lot of sense.

Referenced Queries And Caching In Power BI And Power Query

Last week, Maxim Zelensky (whose blog is well worth checking out) tweeted about a very interesting answer he had received to a question he posted on the Power Query MSDN forum, on the subject of caching and referenced queries in Power Query. You can read the thread here:

https://social.technet.microsoft.com/Forums/en-US/8d5ee632-fdff-4ba2-b150-bb3591f955fb/queries-evaluation-chain?forum=powerquery

…but since this is such important information – it’s certainly something I’ve wondered about myself – I thought I would share Maxim’s question and the response from Ehren of the Power Query dev team here in full, so it gets the wider visibility it deserves. I’m very grateful to Maxim for letting me share this and to Ehren for writing such a detailed response.

First, Maxim’s original question:

There are two different scenarios I am working with:

1) Query1 connects to the data source (a flat file) and does some basic cleaning and transformations.

Query2 and Query3 then reference Query1, performing the other transformations needed.

Query3 also takes some data from Query2.

Query2 and Query3 are then exported to a sheet; Query1 is connection-only.

As far as I can understand, PQ defines a refresh chain like this: evaluate Query1, then evaluate Query2, then evaluate Query3 (as it needs the results from Query2). The question is: when PQ evaluates Query3, will it recalculate Query1? Or, as Query1 was evaluated earlier in the chain, will Query3 use the cached results of the previous Query1 evaluation (performed when Query2 was evaluated)?

2) I have a set of flat files, and I take data from all of them with a single query, Query1, which also performs some transformations. Then I have two independent queries, Query2 and Query3, both of which reference Query1 and perform different transformations on the source data. The results of evaluating Query2 and Query3 are then used in Query4, which exports its results to a sheet; Query1, Query2 and Query3 are connection-only.

The second question is: when performing a Refresh on Query4, how many times will Query1 be evaluated – once, twice or three times? Or will there also be a chain: calculate Q1, cache it, then calculate Q2 or Q3 using the cached results of the Q1 evaluation, and then Q4?

3) Is there a difference when connecting to a database?

4) Are there any rules for the evaluation chain (for example, will each expression/query be calculated only once in the chain)?

And here’s Ehren’s reply:

There’s a lot involved in answering your question, so let me back up and explain a few things first.

Caching

Power Query (in both Excel and Power BI Desktop) utilizes a "persistent cache", stored on disk, when refreshing queries. But what exactly does that mean?

First, let’s look at what gets cached. The persistent cache does not store the results of your M Queries (Q1, Q2, etc. in your example). Instead, it stores the results of the behind-the-scenes requests sent to data sources.

So if Q1 queries a SQL Server database called "MyServer/MyDatabase" and returns a single unfiltered table called "MyTable", the query sent to the server might be "select [Col1] from [MyTable]". In this case, the persistent cache will now know the result of sending "select [Col1] from [MyTable]" to "MyServer/MyDatabase". If another M query (whether through referencing Q1, or by querying the same table directly) needs the same result, the persistent cache can provide it, and the result won’t have to be fetched a second time from the SQL Server.

"Great," you might say. "So if I’m pulling from a flat file in Q1, and in a few places in Q2 I need to do Table.RowCount(Q1), the file should only be read from disk once, right?" And the answer would be…no. This is because not all data sources are cached. Specifically, the results of calls to File.Contents are not stored in the persistent cache. Why not? Well, the cache is stored on disk, and caching local files (which are already on disk) elsewhere on disk doesn’t really make sense. (Using Table.Buffer in this context may help…see more on Table.Buffer below.)

"Okay", you might say. "But if Q1 is pulling from a SQL table, and in a few places in Q2 I reference Q1, that should hit the persistent cache, right?" Maybe. It depends on how Q2 is using Q1, since doing additional operations on Q1 (such as filtering or merging) might cause the M engine to compute a different SQL query, resulting in the server being hit again.

Next, let’s look at the scope of caching. The scope of caching differs depending on what you’re doing, as well as what tool you’re using.

Previewing
If you’ve opened the Power Query editor in Excel or Power BI Desktop, you might have seen warnings like "This preview may be up to 3 days old". This is because there is a persistent cache used for interactive previewing of query results. As you can imagine from the fact that we have warnings about preview results being days old, this cache is long-lived and is intended to make the experience of working in the editor faster.

Loading to Excel
If you load/refresh three queries in Excel, each of them gets their own persistent cache. So the fact that a SQL result is cached during the load of Q2 won’t benefit the loading of Q3, even if it needs the same result.

Loading to Power BI Desktop
If you load/refresh three queries in PBI Desktop, they all share a single persistent cache. When you refresh multiple times, each refresh operation gets its own cache (shared by all the queries being refreshed at that particular time). This means that if a SQL result is cached during the load of Q2, it will still be cached during the load of Q3 (assuming they’re both being loaded at the same time).

What about Table.Buffer?
Table.Buffer can be useful if you want to store an intermediate result in memory and avoid pulling content from disk, a remote file share, a SQL Server, a website, or any other data source multiple times during an evaluation.  Think of Table.Buffer as, "load this table into memory, and stop folding subsequent operations back to the data source".

However, because buffering happens in memory and is not persisted on disk, buffering during the load of one query does not affect the load of another query. If Q1 is buffered when Q2 is loaded, Q1 will be re-buffered when Q3 is loaded.

And now to answer your question…
Now let’s take a look at your example (Q4 references Q2 and Q3, and Q2 and Q3 both reference Q1).
Since you’re pulling from a flat file, and File.Contents results aren’t cached, the flat file will be read each time Q1 is referenced (twice in Q4, once in Q3, Q2, and Q1). If you buffered the result of Q1, then Q4 would only read the file once. But when Q1, Q2, and Q3 are loaded (even in PBI Desktop), they will still each also read the file.

What about immutability?
You asked about the fact that M values are supposed to be immutable. This is true for the "pure" parts of the language, but breaks down when you introduce external data sources and folding. (In fact, you could think of Table.Buffer as transferring a table from the fuzzy unpredictable world of folding to the immutable world of pure M values.) You can see this in action by doing the following test, using a query called "MyTextFileQuery" that pulls from a local file on disk.

Reads the file five times
= Table.RowCount(MyTextFileQuery) + Table.RowCount(MyTextFileQuery) + Table.RowCount(MyTextFileQuery) + Table.RowCount(MyTextFileQuery) + Table.RowCount(MyTextFileQuery)

Reads the file once
= let rowCount = Table.RowCount(MyTextFileQuery) in rowCount + rowCount + rowCount + rowCount + rowCount

I’ve read this response several times and it’s still sinking in, but clearly there are some important implications here for anyone doing more advanced data loading work in Power Query and Power BI. I’m sure it will be the inspiration for many future blog posts on tuning Power Query query performance.
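To make the Table.Buffer point concrete, here’s a minimal sketch of my own (the file path is made up): because the result is buffered in memory once, both calls to Table.RowCount within this single evaluation should be answered from the in-memory copy rather than by reading the file twice:

let
    Source = Csv.Document(File.Contents("C:\Data\MyTextFile.csv")),
    // Load the table into memory so subsequent operations don't go back to the file
    Buffered = Table.Buffer(Source),
    Output = Table.RowCount(Buffered) + Table.RowCount(Buffered)
in
    Output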

Pushing Data From Excel To Power BI Using Streaming Datasets

One Power BI feature that almost passed me by (because it was released in August while I was on holiday) was the ability to create streaming datasets in the Power BI web app and push data to them via the Power BI REST API. This blog post has the announcement:
https://powerbi.microsoft.com/en-us/blog/real-time-in-no-time-with-power-bi/ 
The documentation is here:
https://powerbi.microsoft.com/en-us/documentation/powerbi-service-real-time-streaming/
And Charles Sterling has an example of how to use it with Flow and PowerApps here:
https://blogs.msdn.microsoft.com/charles_sterling/2016/10/17/how-to-create-and-customize-applications-with-powerapps-with-laura-onu-webinar-1020-10am-pst/

However, when I played around with this I found there were a few things that were either confusing or not properly documented, so I thought it would be useful to give an example of how to use this functionality to automatically sync data from a table in Excel to Power BI using a Power Query query.

Creating the streaming dataset in Power BI

Imagine that you have a table called Sales in an Excel workbook on your desktop.

There are three columns: Month and Product, which contain text values, and Sales, which contains an integer value. This is the data that we want to push up to Power BI.

The first step is to create a streaming dataset in Power BI to receive this data. Go to PowerBI.com and on the left-hand menu, under the Datasets heading, you’ll find a link called Streaming Datasets right at the bottom, next to the Get Data button.

Click it and you’ll go to the Streaming data screen. Click on the “Add streaming dataset” button in the top-right to create a new streaming dataset.

Choose the API option in the pop-out pane, then click Next.

Then give your dataset a name, enter the names and data types for the columns in the table, and leave the Historic data analysis option turned off (we’ll come back to this later).

Hit Create and you’ll see a screen showing the URL to use to push data to the dataset and an example of the JSON format to send.

Copy the URL and put it somewhere handy – you’ll need it in a moment.

Pushing data to the streaming dataset from Excel

Back in your Excel workbook, open the Power Query Query Editor window and create a new text parameter called PowerBIStreamingDatasetURL, pasting in the URL for the streaming dataset.

Next, create a new blank query and use the following M code:

let
    Source = Excel.CurrentWorkbook(){[Name="Sales"]}[Content],
    ChangedType = Table.TransformColumnTypes(
                   Source,
                   {
                    {"Month", type text}, 
                    {"Product", type text}, 
                    {"Sales", Int64.Type}
                   }),
    ConvertToJson = Json.FromValue(ChangedType),
    SendToPowerBI = Web.Contents(PowerBIStreamingDatasetURL,
                    [Content=ConvertToJson, 
                    ManualStatusHandling={400,404}]),
    GetMetadata = Value.Metadata(SendToPowerBI),
    GetResponseCode = GetMetadata[Response.Status],
    CurrentTime = DateTime.ToText(DateTime.FixedLocalNow()),
    Output = #table({"Status"}, 
              {{
              if GetResponseCode=200 then 
               "Data updated successfully at " & CurrentTime 
              else 
               "Failure at " & CurrentTime}})
in
    Output

This query does the following:

  • Reads the data from the Sales table in the workbook
  • Converts the data to JSON (for some background on how it does this, see here; see also the sketch of the output format after this list)
  • Sends the data to the streaming dataset using Web.Contents() to make a POST request. See this post on how to make POST requests using Web.Contents() and this post on the technique I’m using to handle HTTP errors manually.
  • Returns a table containing a message saying whether the data was updated successfully or not, and the time of execution.
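If you’re curious about what actually gets sent to the web service, here’s a little sketch (with made-up data) showing the kind of JSON that Json.FromValue() generates for a table like the Sales table – a JSON array with one object per row:

let
    SampleTable = #table(
        {"Month", "Product", "Sales"},
        {{"Jan", "Apples", 1}, {"Jan", "Oranges", 2}}),
    ToJson = Json.FromValue(SampleTable),
    // Convert the binary back to text just to inspect it; the output looks like:
    // [{"Month":"Jan","Product":"Apples","Sales":1},{"Month":"Jan","Product":"Oranges","Sales":2}]
    ToText = Text.FromBinary(ToJson)
in
    ToText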

Finally, back in Excel, go to the Data tab on the ribbon, click on the Connections button to open the Workbook Connections dialog, select the connection that represents the query you’ve just created, click Properties, then in the Connection Properties dialog tick the “Refresh every” box and set the query to refresh automatically every minute.

Displaying data in a Power BI dashboard

Back in the browser in Power BI, create a new dashboard, click the Add Tile button and choose the Custom Streaming Data option.

Click Next and select the streaming dataset created earlier.

Click Next again, choose Clustered bar chart as your Visualization Type, select the Month field of the dataset for the Axis, Product for the Legend and the Sales field for the Value, and set the time window to display to 1 second.

Frustratingly there’s no way to create a measure or otherwise aggregate data here. In this example you’re using all of the fields in the dataset in the chart; if you left out Product, however, you wouldn’t see aggregated sales for all products – you would just see data for one (the last?) row in the table for each month.

Finally, set a title for the chart.

You now have a dashboard that gets updated automatically and shows the data from the Sales table in the Excel workbook.

When you change the data in Excel, the new data will appear in the chart after the Power Query query has next run in the background (remember, it runs every minute).

[Be aware that it might take a few minutes for everything to start working when you first create a new tile]

Other ways of visualising the data

There are other types of data visualisation you can use, such as line charts, that are all very straightforward. One thing that did confuse me was the card visual: it shows one number, but which number? In this example, if you create a card and link it to the Sales field in the dataset, it will always display the value from the last row in the table.

Again, it would be really nice if there was a way of creating a measure here…

The Historic Data Analysis option

You may remember the Historic Data Analysis option from an earlier step. What happens if you turn it on? Basically, instead of storing just one copy of the table you push through the API it stores multiple copies of the table (although it doesn’t store everything – I guess it’s subject to this retention policy or something similar). For example, consider the following variation on the streaming dataset above.

There’s a new field called UpdateDateTime (which is of type text, not datetime, because I found this worked better in reports) and the Historic data analysis switch is turned on.

Here’s an updated version of the Power Query query that populates the UpdateDateTime field with the date and time that the query was run:

let
    Source = Excel.CurrentWorkbook(){[Name="Sales"]}[Content],
    CurrentTime = DateTime.ToText(DateTime.FixedLocalNow()),
    AddUpdateDateTime = Table.AddColumn(Source, "UpdateDateTime", 
                         each "Data Update: " & CurrentTime),
    ChangedType = Table.TransformColumnTypes(
                   AddUpdateDateTime ,
                   {
                    {"Month", type text}, 
                    {"Product", type text}, 
                    {"Sales", Int64.Type},
                    {"UpdateDateTime", type text}
                   }),
    ConvertToJson = Json.FromValue(ChangedType),
    SendToPowerBI = Web.Contents(PowerBIStreamingDatasetURL,
                    [Content=ConvertToJson, 
                    ManualStatusHandling={400,404}]),
    GetMetadata = Value.Metadata(SendToPowerBI),
    GetResponseCode = GetMetadata[Response.Status],
    Output = #table({"Status"}, 
              {{
              if GetResponseCode=200 then 
               "Data updated successfully at " & CurrentTime 
              else 
               "Failure at " & CurrentTime}})
in
    Output

You can download a demo workbook containing this second example query here.

The dashboards now work in more or less the same way. The Time Window To Display option that we set to 1 Second above can be used to control the number of copies of the pushed table that are displayed. For example, setting it to five minutes shows data from all of the copies of the table pushed in the last five minutes.

[Incidentally, if you’re using the Power Query query above you’ll see that every time the query runs, the web service is actually called twice! This is a feature of Power Query and M in general – there’s no guarantee that the web service will be called just once even if the query itself is executed once. This is why the dev team always tells people never to use Power Query to update data in a data source (see here for another example of this)]

You now also get a new option to create a report from a streaming dataset on the Streaming Data screen – you need to click the small graph icon next to the name of the streaming dataset.

So now you can create reports that show how the data in your Excel table has changed over time, and slice by the values in the UpdateDateTime field.

It’s important to realise that unlike the dashboards, reports connected to a streaming dataset don’t refresh automatically – you have to click the Refresh button on the report.

Conclusion

Although the examples in this post are perhaps not all that practical, hopefully they show what’s possible with the streaming API and some M code. It would be great if we could do data modelling-type stuff like add measures in the web interface, in the same way that we can in Power BI Desktop, because that would open the door to doing even more cool things with streaming data.