Power Query Comes To Azure Data Factory With Wrangling Data Flows

One of the many big announcements at Build this week, and one that caused a lot of discussion on Twitter, was about Wrangling Data Flows in Azure Data Factory. You can read the blog post here:

https://azure.microsoft.com/en-us/blog/analytics-in-azure-remains-unmatched-with-new-innovations/

…but what isn’t clear from this is that it’s basically Power Query Online integrated into ADF. You can see it in action by watching the following video – the demo of Wrangling Data Flows starts at around the 21 minute mark:

https://mybuild.techcommunity.microsoft.com/sessions/76997


As the presenter says, the Power Query Online editor generates M in the background as you would expect and “we are going to take this M and translate it into Spark and run it over big data”. Query folding to Spark, basically. More technical detail about all this is available here:

https://github.com/gauravmalhot/wranglingdataflow

…including a document discussing which M functions currently support query folding and which ones don’t yet. Obviously, this feature will only work well if as much query folding as possible takes place.

This feels like a much more significant win for team Power Query than the integration with SSIS that was announced recently, if only because SSIS is a bit legacy and ADF is the cool new thing. I wonder if this opens up the possibility of integration between Power BI dataflows and ADF in the future, as another example of how self-service BI solutions can be easily transitioned into centrally-managed, enterprise-grade BI solutions? If that happens I hope someone sorts out the dataflow/data flow naming mess.

You can sign up for the preview of Wrangling Data Flows here.

Extracting All The M Code From A Power BI Dataset Using The DISCOVER_M_EXPRESSIONS DMV

DMVs (Dynamic Management Views) are, as the Analysis Services documentation states, “queries that return information about model objects, server operations, and server health”. They’re also available in Azure Analysis Services, Power BI and Power Pivot, and are useful for a variety of reasons, for example for generating documentation.

Several as-yet undocumented DMVs have appeared in Power BI recently and one that caught my eye was DISCOVER_M_EXPRESSIONS. Unfortunately, when I tried to run it in DAX Studio against an open Power BI file I got an error saying it was only available in the Power BI Service:

[Screenshot: the DAX Studio error message saying the DMV is only available in the Power BI Service]

Luckily, now that XMLA Endpoints are in preview and SQL Server Management Studio 18 has been released (which supports connections to Power BI via XMLA Endpoints), we can test it against a published dataset stored in a Premium capacity. The following query can be run from a DAX query window in SQL Server Management Studio:

select * from
$system.discover_m_expressions

…and returns a list of all the Power Query queries in the selected dataset and their M code:

[Screenshot: the DMV results listing each Power Query query and its M code]

If you don’t have Premium you can run the same query from an Excel table against any published dataset using the technique I blogged about here:

[Screenshot: the same DMV query run from an Excel table]

I know there are other methods for doing this (for example using copy and paste), but it’s useful to be able to do it via a DMV because it means you can automate the process of extracting all your M code easily.
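For illustration, here’s a minimal sketch of one way you could do that from Power Query itself in Excel or Power BI Desktop, using the Analysis Services connector to connect to a published dataset through its XMLA Endpoint and pass the DMV query through – the workspace url and dataset name below are placeholders, and this assumes the dataset is on a Premium capacity with XMLA Endpoints enabled:

let
	// connect to the workspace’s XMLA Endpoint (placeholder names) and
	// send the DMV query through to the engine
	Source = AnalysisServices.Database(
		"powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace",
		"MyDataset",
		[Query = "select * from $system.discover_m_expressions"])
in
	Source

The result is a table of query names and M expressions that you could then write out to a file or fold into your documentation process.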

Some of the other new DMVs look like they are worthy of a blog post too – I can guess what most of them do from their names, but others are more mysterious and perhaps hint at features that have not been announced yet.

DAX Machine Learning Functionality Used By The Key Influencers Visual In Power BI

I’m one of those people who can’t resist peeking behind the scenes, and so when the Key Influencers visual appeared in Power BI I couldn’t help wondering how it worked its machine learning magic. Using DAX Studio to look at the DAX queries generated by the visual proved to be very revealing: it turns out that it uses a number of new DAX functions that are undocumented and probably not meant to be used outside Microsoft. For example, the following screenshot shows a DAX query generated by the Key Influencers visual that uses functions called AI.SampleStratified, AI.Train, AI.KeyDrivers and AI.ExtractProfileFilters:

[Screenshot: a DAX query generated by the Key Influencers visual, showing the AI.SampleStratified, AI.Train, AI.KeyDrivers and AI.ExtractProfileFilters functions]

Using Profiler (in a similar way to what I describe in this post) to go into even more detail about what happens when these queries run shows that they raise the DAX Extension events that I’ve been wondering about for a long time now:

[Screenshots: Profiler traces showing the DAX Extension events raised when these queries run]

So Power BI can train and query machine learning models inside its own database engine – which, when you think about it, is pretty darned cool. And then I thought: hold on, other visuals have had machine learning features for a long time. For example, the Line Chart visual can create forecasts, but although DAX Studio shows yet another undocumented function called SampleAxisWithLocalMinMax(), this does not actually seem to perform the forecasting, which I assume must be done inside the code of the visual itself:

[Screenshot: the DAX query generated by the Line Chart visual, showing SampleAxisWithLocalMinMax()]

My guess is that the functionality used by the Key Influencers visual is new functionality in the engine.

A fascinating insight into how Power BI works, but is this of any practical use to us? Let me be clear: I don’t think you should be using any of these functions yourself in a real-world report. I’m sure all this would be documented and publicised if Microsoft did want us to use it ourselves! Another consideration is that these new functions return tables, which makes them awkward to use in regular .pbix Power BI reports – I guess we could create calculated tables, although that’s not as flexible as returning a table from a query as shown above. That said, even though we can’t write our own DAX queries in regular Power BI reports, we can write our own DAX queries in Paginated Reports, and we can now create Paginated Reports that use a Power BI dataset as a data source. I tested putting one of the queries generated by the Key Influencers visual into a Paginated Report connected to the same dataset and it worked OK (even after publishing). You can embed DAX queries connected to a published dataset in Excel too, as I show here. Hmm, plenty to think about then…

Using The “Skip Test Connection” Option For Power BI Data Sources To Avoid Scheduled Refresh Failures

One of the most popular blog posts I have written in recent years is this post on how the RelativePath and Query options for the Web.Contents() M function can be used to allow datasets that use web services as data sources to refresh successfully in the Power BI Service. It’s very frustrating when you write some complex M code in Power BI Desktop and find that your dataset can’t be refreshed after it has been published! While, at the time of writing, this problem is still present, yesterday’s announcement of the new “Skip Test Connection” option for data sources in the Power BI Service does mean there are more scenarios where your datasets can be made to refresh.

As I mentioned in that original post, the trick of using the RelativePath and Query options with Web.Contents() only works if the first parameter passed to Web.Contents() is a url that itself can be called on its own without an error. For example, the following expression:

Web.Contents(
	"http://www.myservice.com",
	[RelativePath="hello",
	 Query=[q="SearchTerm"]
	])

…will only refresh successfully in the Power BI Service if http://www.myservice.com can be reached as a url on its own without returning an error. I believe this is because this url is used when the Power BI Service is preparing to run the query to check whether the credentials it has stored are valid. As the blog post says, there is a workaround where dummy query parameter values can be used in that first parameter (see the sketch below), but this won’t work in all cases. Instead, you can now turn on the “Skip Test Connection” option on the data source in the Power BI Service and the dataset will refresh even if http://www.myservice.com on its own returns an error.
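To illustrate that dummy-value workaround, here’s a sketch based on the same hypothetical url: the base url carries a dummy value for the q parameter so that it can be reached successfully on its own, and the real value supplied in the Query option should replace it when the query actually runs:

Web.Contents(
	"http://www.myservice.com?q=dummy",
	[RelativePath="hello",
	 Query=[q="SearchTerm"]
	])

When that isn’t possible, the new “Skip Test Connection” option is the way to go.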

Similarly, this new option helps when you are using API keys passed in through query parameters for authentication. In Power BI Desktop and Excel Get & Transform/Power Query, you have the option to use the Web API authentication type to store sensitive API keys that need to be passed through url parameters securely, rather than embedding them in your M code. Unfortunately this is not supported in the Power BI Service yet and (as I found out on a recent consultancy engagement) there are other complications with using Anonymous authentication. For example, say you need to call a web service with an M expression something like this:

Web.Contents("http://www.myservice.com?key=password123")

…where the key query parameter contains an API key used for authentication. If you set this data source to use Anonymous authentication in Power BI Desktop or Excel it will refresh with no problems. However, after you publish to the Power BI Service you will not be able to refresh your dataset unless you turn on the “Skip Test Connection” option, because the Power BI Service apparently strips out any query parameters from the url when it is testing the connection. In this case, if it strips out the key parameter, then obviously the call will fail because it has removed the means of authentication.

This new option does not mean every dataset can now be refreshed. As far as I can see, if you dynamically generate a url for use with Web.Contents() and do not use the RelativePath and/or Query options, you will not be able to refresh your dataset because the Power BI Service will not be able to analyse your code to discover what urls are present in it. I suspect that if you see the “Some data sources may not be listed because of hand-authored queries” warning in the Data Source Settings dialog in Power BI Desktop (which I blogged about here) then you will not be able to refresh your dataset in the Power BI Service, but I need to do more testing to be sure.
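For example, an expression along these lines – where SearchTerm is assumed to be a parameter or variable defined elsewhere in the query – generates the url dynamically, so the Power BI Service has nothing static to analyse and, as far as I can see, the dataset will not refresh:

Web.Contents("http://www.myservice.com/" & SearchTerm)

…whereas the RelativePath/Query pattern shown earlier keeps the first parameter static and refreshable.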

As always, if you test this yourself please let me know about any interesting things you find by leaving a comment!

Power Query Memory Usage, Dataflow Container Size And Refresh Performance

When Power BI dataflows were officially released a few weeks ago there was a new setting for Premium capacities mentioned briefly in the announcement blog post: Container Size.

The blog post only says the following:

We’re introducing a new dataflow workload on premium capacity settings, enabling you to optimize dataflow workload performance for processing more complex, compute-heavy dataflows. This setting is available in the Capacity Admin portal, Dataflow workload settings.

…which does not tell you much at all. Pedro Fernandes contacted me to see if I knew more and because I didn’t, I started investigating. This resulted in me learning lots of new information around how the Power Query engine in Power BI Desktop and Excel uses memory, how things are different in the Power BI service, and how all of this can have an impact on query refresh performance.

If you’ve read this blog post, and this related thread on the Power Query forum, you’ll know that when a Power Query query is evaluated the work is done by a Microsoft.Mashup.Container process, visible in tools such as Task Manager and Resource Monitor. A single refresh operation in Excel or Power BI Desktop might result in multiple evaluations of multiple queries for different reasons, so it’s not uncommon to see multiple Microsoft.Mashup.Container processes.

[Screenshot: multiple Microsoft.Mashup.Container processes running during a refresh]

In another thread from the Power Query forum, Curt Hagenlocher of the dev team explains that there is a 256MB limit on the amount of physical RAM that each Microsoft.Mashup.Container process can use, although there is no limit on the amount of virtual memory that can be used. The thread is about how using Table.Buffer can be bad for refresh performance, but the details are more widely applicable. Here are the highlights, with a short illustrative sketch after them:

Certain operations force the rows of a table value to be enumerated. If the enumeration operation itself is expensive, then using Table.Buffer can be a performance optimization because we store the values in memory so that second and subsequent enumerations of the rows go against memory.

If the table is only being enumerated once (which is the most common scenario) or if the underlying enumeration is fast anyway, then Table.Buffer won’t help performance.

Table.Buffer can actually hurt performance in some cases, because we cap RAM usage of the query at 256 MB — which means that a query which uses more than 256 MB is now forced to page RAM to/from disk. Enough paging, and the performance cost can be quite dramatic.

Currently, “table at a time” operations like joins, sort, many groupings, pivot, unpivot, etc., all happen in RAM (unless folded). For large tables, these will consume a lot of memory.
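To make the point about second and subsequent enumerations concrete, here’s a minimal sketch – the file path is a placeholder, and whether the file really would be read more than once without buffering depends on what the engine decides to do:

let
	Source = Csv.Document(
		File.Contents("C:\Data\LargeFile.csv"),
		[Delimiter=",", Encoding=65001]),
	Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
	// buffering forces the rows to be enumerated once and stored in memory,
	// so the two references to Buffered below can read from memory instead of
	// potentially re-reading the file – at the cost of counting towards the
	// 256MB working set cap described above
	Buffered = Table.Buffer(Promoted),
	RowCount = Table.RowCount(Buffered),
	Output = Table.AddColumn(Buffered, "TotalRows", each RowCount)
in
	Output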

 

The 256MB limit is also mentioned briefly on this thread.

Here’s a screenshot from Resource Monitor showing this: the Microsoft.Mashup.Container process in this case is evaluating a query that reads data from a large (900,000ish rows) csv file and does a Pivot, and while the Working Set value has peaked at around the 256MB limit you can see the Commit value is much higher, and so paging must be taking place.

[Screenshot: Resource Monitor showing the Working Set peaking at around 256MB while the Commit value is much higher]
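A hypothetical query of the kind described above might look something like this – the file path and the Category/Value column names are placeholders, not the ones used in my test:

let
	Source = Csv.Document(
		File.Contents("C:\Data\LargeFile.csv"),
		[Delimiter=",", Encoding=65001]),
	Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
	Typed = Table.TransformColumnTypes(Promoted, {{"Value", type number}}),
	// a pivot is a “table at a time” operation, so the whole table has to be
	// held by the container process while it runs – which is what pushes the
	// Working Set up against the 256MB cap
	Pivoted = Table.Pivot(
		Typed,
		List.Distinct(Typed[Category]),
		"Category",
		"Value",
		List.Sum)
in
	Pivoted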

Currently there is no way to change this 256MB limit in Power BI Desktop or Excel, although someone has already posted a suggestion on the Ideas site to allow us to change it. How much of an impact does this actually have on refresh performance though? Without the ability to change this setting it’s hard to say, but I suspect it could be significant and that a lot of Power Query performance problems could be explained by this behaviour.

The situation is different in the Power BI service, where I understand there is a limit on the overall amount of memory that a single Power Query query evaluation can use. This makes a lot of sense in the context of Power BI Pro and shared capacity because Microsoft could not allow one user to run lots of complex, expensive Power Query queries that might affect other users inside or outside the same tenant. With Power BI Premium, which gives you your own dedicated capacity in the Power BI service, there is no chance that anything you do will affect other tenants and so Microsoft gives you more control over how resources are used. As a result, the new Container Size setting for a dataflow in a Premium capacity lets you configure the amount of memory that can be used for a single entity refresh within a dataflow – and refreshing a single entity in a dataflow is, as far as I understand it, the equivalent of what a Microsoft.Mashup.Container process does on the desktop.

I did some (not very scientific) testing and it looks like increasing the Container Size setting can have a noticeable impact on the performance of memory-intensive queries. Using the technique I blogged about here I measured the execution time of the query mentioned above that does a Pivot on data from a large, local csv file in Power BI Desktop: it took 98 seconds on my laptop. I then used a dataflow to load the source data from the csv file into an entity in a dataflow on an A4 capacity, without making any changes. This took 132 seconds; you can get the time taken for a dataflow refresh by clicking on the Refresh History option, as Matthew Roche shows here. I then created a computed entity that used this new entity as its source and which did the same Pivot operation as the original query on the desktop. The following table shows the time taken to refresh this computed entity in the A4 capacity with different Container Size settings:

Container Size (MB)    Refresh Time (seconds)
700                    61
2000                   57
5000                   52

700MB is the default setting for Container Size and the smallest value that you can use; I guess the maximum value you can set will depend on the size of the capacity you’re using. There are two conclusions that I think you can draw from these results:

  • Even with the default setting for Container Size, it was faster for the computed entity to read the data from the source entity (which, remember, stores its data in Azure Data Lake Gen2 storage) and do the Pivot in the Power BI service than it was for Power Query in Power BI Desktop to read the data from the csv file and do the same Pivot operation on my laptop.
  • Increasing the Container Size setting reduced refresh time quite significantly.

So, as the blog post I referenced at the very beginning of this post states, if you are doing memory-intensive operations such as group bys, sorts, pivots, unpivots and joins against non-foldable data sources in a dataflow, and if that dataflow is on a Premium capacity, then increasing the Container Size property is probably a good idea because it may reduce refresh times. If you can reproduce this on your own Premium capacities please let me know by leaving a comment – I would be very interested to hear about your experiences.

[Thanks to Curt Hagenlocher and Anton Fritz for providing information for this blog post]

BI Survey 19

The BI Survey is the largest annual survey of BI users in the world, and every year I get a free copy of the results (which are always very interesting) in return for publicising it here on my blog. If you take part, you’ll also get a summary of the results and be entered into a draw for some Amazon vouchers. Here’s the link to take the survey:

https://www.efs-survey.com/uc/BARC_GmbH/f42f/?a=101

Power BI did very well last year and I’m sure it will do even better this year!

Table.Buffer() Does Not Buffer Nested Values

Here’s yet another entry in the list of useful things I learned from Ehren von Lehe on the Power Query MSDN forum: Table.Buffer() does not buffer nested table, record or list values inside the cells of a table. From this thread:

Buffering is shallow. It will force the evaluation of any scalar cell values, but will leave non-scalar values (records, lists, tables, etc.) as-is.

It’s actually really difficult to come up with a simple demo query to prove this though (the Power Query engine is too clever about not evaluating things it doesn’t need for the final output of a query), but it’s fairly easy to understand the principle. Whenever you have an expression that returns a table, something like this:

[Screenshot: a table with Name and Signature columns, with a nested table value highlighted in one cell]

…if you use Table.Buffer() on this table it will only buffer the scalar values (in this case the text values in the Name and Signature columns). The nested table values, as in the highlighted cell, will not be buffered in memory and if you try to access the contents it may result in another call back to the underlying data source.
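To illustrate the principle with a self-contained (and therefore artificial) example – the table below is built in-memory with #table, so there is no real data source behind it, but the shape is similar to the navigation table in the screenshot:

let
	// a table with two scalar columns and a nested table column
	Source = #table(
		{"Name", "Signature", "Data"},
		{
			{"Customers", "table", #table({"CustomerID"}, {{1}, {2}})},
			{"Orders", "table", #table({"OrderID"}, {{10}, {20}})}
		}),
	// Table.Buffer forces the scalar Name and Signature values into memory,
	// but the nested tables in the Data column are left as-is; with a real
	// data source, touching them later could mean another call back to it
	Buffered = Table.Buffer(Source)
in
	Buffered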
