Using Process Monitor To Find Out How Much Data Power Query Reads From A File

This post is really just a quick follow-on from my post earlier this week on using Process Monitor to troubleshoot Power Query performance issues with file-based data sources, which I suggest you read before carrying on. I realised, after playing around with Process Monitor some more, that the ReadFile operation actually tells you how much data is being read from a file when a Power Query query is running. For example, here’s a sample of some of the ReadFile operations captured while running the unoptimised version of the query I talked about in my last post:

image

Since Process Monitor can export captured events to a CSV file, it’s pretty easy to load the events into Power BI, filter the events down to only the ReadFile operations, parse the Detail column to extract the Offset values (which I’m sure you can work out how to do if you’re reading a post like this), and then draw a graph showing how much data gets read from a file when a query is run. Here’s what the graph looks like for the unoptimised version of the query from my previous blog post, with relative time on the x axis and the amount of data read in bytes on the y axis:

image

In that post I noted that there were six reads of the file – and while that’s clear from the graph above, it’s also possible to see that the first read does not read the whole contents of the file while the next five do (the file is 149MB). So maybe I was right that there is one complete read of the file for each row in the output query? What is that first, partial read for, I wonder?
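
In case you want to try this yourself, here’s a minimal M sketch of the Detail-parsing step described above. The file path, the column names in the Process Monitor CSV export and the exact format of the Detail text are all assumptions that you may need to adjust:

let
    // Load the CSV exported from Process Monitor - the path here is just a placeholder
    Source = Csv.Document(File.Contents("C:\Temp\ProcMonExport.csv"), [Delimiter=",", Encoding=65001]),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
    // Keep only the ReadFile operations
    #"ReadFile Only" = Table.SelectRows(#"Promoted Headers", each [Operation] = "ReadFile"),
    // Assuming the Detail column contains text like "Offset: 12,345, Length: 32,768, ..."
    // pull out the value between "Offset: " and ", Length" and convert it to a number
    #"Added Offset" = Table.AddColumn(
        #"ReadFile Only",
        "Offset",
        each try
            Number.FromText(
                Text.Replace(
                    Text.BetweenDelimiters([Detail], "Offset: ", ", Length"),
                    ",",
                    ""
                )
            )
        otherwise null,
        Int64.Type
    )
in
    #"Added Offset"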

Using Process Monitor To Troubleshoot Power Query Performance Issues With File Data Sources

Troubleshooting Power Query performance issues in Power BI and Excel can be difficult because it’s a bit of a black box: there’s nothing in the UI to tell you what’s going on inside the Power Query engine and the diagnostic logs are very difficult to interpret. With relational data sources like SQL Server you can use tools like SQL Server Profiler to see the queries that are being run by Power Query, and I blogged recently about using Fiddler to troubleshoot OData performance issues; but what about file-based data sources, which often present the biggest performance challenges?

Process Monitor, a free tool from Microsoft, allows you to monitor file system activity in real-time and, even having spent only a limited amount of time using it, I can already tell that it can provide a lot of information to help identify performance issues with file-based data sources. Take, for example, the scenario I described in my recent post on improving the performance of merge operations. In that post (which I suggest you read before you carry on) I mentioned that it looked as though the Power Query engine was reading data from one of the source files multiple times, and Process Monitor confirms that this is indeed the case.

Disclaimer: I am not going to pretend to be an expert on Process Monitor, the Windows file system or the internals of Power Query. The rest of this post contains the results of my experiments and the conclusions I have drawn from them, which may or may not be correct!

There are a lot of resources online describing Process Monitor (I also took a look at this book, which is very helpful, as is the help file) and it’s fairly intuitive, so I’m not going to attempt to explain the basics of how to use it. With a bit of trial-and-error I set up a simple filter consisting of:

  • Operation is CreateFile
  • Operation is CloseFile
  • Path is <the full path of the main csv file used as the data source>

image

…to watch what happened when the unoptimised query refreshed:

image

Some things to note about the output shown above:

  • I had the “Enable data preview to download in the background” option turned off just to make sure I only saw activity relating to the refresh
  • For the Power Query engine it seems as though the Process Name is Microsoft.MashupContainer.NetFX40.exe.
  • There are obvious pairs of CreateFile/CloseFile operations
  • Some of these pairs, where the detail for the CreateFile operation is “Desired Access: Read Attributes”, seem to be for reading file metadata and are very fast
  • The pairs where the CreateFile operation has a detail of “Desired Access: Generic Read” seem to be where the contents of the file are read (the first of these pairs is highlighted in the screenshot). The Relative Time column shows the time elapsed since Process Monitor started capturing events, and the difference between the Relative Time values for the CreateFile and CloseFile operations in these pairs is around 5-10 seconds.
  • It looks like the Power Query engine reads some or all of the data from the csv file six times in total. This tallies roughly with the values shown in the Power BI UI for the amount of data read from disk, although it does not tally with my guess that it was reading the data from the file once for each of the five rows in the output query.
  • The difference in the Relative Time values of the first and last events is around 45 seconds; the query itself takes around 55 seconds to run.

With the optimisation described in the blog post (ie doing a “Remove Duplicates” on one of the columns) in place, Process Monitor shows the following instead:

image

As you can see, it looks like Power Query now only reads from the file twice and the difference in time between the first and last event is now only around 8 seconds.

This is clearly useful: we can now see when Power Query opens a file, how often it opens a file, and how long it holds the file open for each time. It also raises a number of other questions which I can’t answer properly yet:

  • Why does Power Query need to open a file multiple times? Actually this isn’t a surprise to anyone who has spent a lot of time with Power Query and has a vague understanding of how it works, but it seems fair to say that the unoptimised query in this example was slow because Power Query had to open and read from the file multiple times.
  • What is Power Query doing exactly when it has the file open? Is it reading some or all of the data in the file? What is it doing for the rest of the time?
  • Are some file formats inherently slower than others as far as Power Query is concerned? It seems so: for example Excel files seem to be a lot slower than csv files.

So yet more research is required… but still very interesting, don’t you think? If I’ve made any mistakes here, or if you use Process Monitor to see what happens when you run your queries and you find something else useful, please leave a comment below.

Combining Data From Multiple Worksheets In The Same Excel Workbook Using Power BI

It’s very common that you need to combine data from multiple worksheets in the same Excel workbook when you’re using Power BI or Power Query/Get&Transform in Excel. Indeed a lot of people have blogged about how to solve this problem, but none of the solutions I’ve found on the internet work in more complex scenarios when the data on each sheet needs some kind of transformation before it can be combined. I was asked to explain how to do this recently while teaching a Power BI class, so in this blog post I’m going to walk through a worked example and point out a few issues that might trip up even experienced Power BI users.

First of all, the source data. Let’s say you have an Excel workbook with four worksheets: Q1, Q2, Q3 and Q4. On each worksheet is some sales data for the three months in each quarter; for example the Q1 worksheet looks like this:

image

…the Q2 worksheet looks like this:

image

…and so on. The required output for Power BI should be a table that looks like this:

image

Now most of the blog posts that describe this problem, such as Ken Puls’s post here, assume each worksheet contains a table with the same column names. If that’s the case you can just connect to the Excel workbook, get a table listing its contents (Miguel Escobar has a great post describing how to do this here) and then click the Expand/Aggregate button:

image

However in this particular case it doesn’t solve the problem, because we get this:

image

Aha, you may say, we have to transform the data before we can combine it and so we need to create a function and call it for every worksheet – the technique I’ve already blogged about here. And yes, that is basically what needs to happen, but the devil’s in the detail.

Here’s the solution, step-by-step:

Step 1: Get a table with all the worksheets listed

In Power BI connect to your Excel file as normal, then in the Navigator pane right-click on the name of the Excel workbook and select Edit rather than selecting any of the individual worksheets:

image

The result will be a table that looks something like this:

image

If you need to, filter out any rows where the Kind column does not contain “Sheet”, and also filter out any worksheets whose data you don’t want to combine.
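
At this point the M behind the query will look something like this minimal sketch – the file path is just a placeholder and the step names may differ from what the UI generates:

let
    // Connect to the workbook; the null/true arguments are what the UI generates by default
    Source = Excel.Workbook(File.Contents("C:\MyData\Sales.xlsx"), null, true),
    // Optional: keep only items of Kind "Sheet", ignoring tables and named ranges
    #"Filtered Sheets" = Table.SelectRows(Source, each [Kind] = "Sheet")
in
    #"Filtered Sheets"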

Step 2: Create your template query

Duplicate the query above and call the new query Template.

Now, in the Template query, choose one of the worksheets to use as the basis for the logic that will later be applied to all the other worksheets, and filter the table so it only contains the row for that worksheet. In this case I’m using the worksheet called Q1:

image

Then – and this is important – remove all the other columns in the table except the Data column:

image

Doing this changes the M code generated for the next step: removing these columns changes the way the row is referenced (see the section on “The effect of primary keys” in this post) and makes sure the name of the worksheet isn’t hard-coded anywhere.

After that click the Table link inside the cell, and you’ll see the contents of the worksheet:

image

There will probably be a Changed Type step in the query that sets the data types for each of the columns, and you will need to delete it:

image

You can now perform any other transformations you need on this query, but avoid any transformations that generate M code referring to columns that are only present on this particular worksheet. Remember, these transformations will be applied to all the other worksheets too, and they will fail if they refer to columns that aren’t there – this is why you had to delete the Changed Type step earlier (it set the types on the January, February and March columns), and why you’ll probably need to delete any other Changed Type steps created later in the query. Open up the Advanced Editor and check the M code for the whole query just to be sure.

In this case all I need to do is unpivot the month columns by selecting the Product column and using the Unpivot Other Columns button on the Transform tab, and then rename the new columns appropriately:

image
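
For reference, the M for my Template query now looks roughly like the sketch below. The step names, the Promote Headers step and the new column names (“Month” and “Sales”) are assumptions based on my example workbook, so yours will almost certainly differ a little:

let
    Source = Excel.Workbook(File.Contents("C:\MyData\Sales.xlsx"), null, true),
    #"Filtered Sheets" = Table.SelectRows(Source, each [Kind] = "Sheet"),
    // Filter down to the single worksheet used to build the template logic
    #"Filtered Rows" = Table.SelectRows(#"Filtered Sheets", each [Name] = "Q1"),
    // Removing every other column stops the worksheet name being hard-coded in the drill-down
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows", {"Data"}),
    // Drill into the nested table containing the worksheet's contents
    Data = #"Removed Other Columns"{0}[Data],
    #"Promoted Headers" = Table.PromoteHeaders(Data, [PromoteAllScalars=true]),
    // Note: the automatically-generated Changed Type step has been deleted at this point
    #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Promoted Headers", {"Product"}, "Attribute", "Value"),
    // "Month" and "Sales" are just the names I chose - rename to whatever suits your data
    #"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns", {{"Attribute", "Month"}, {"Value", "Sales"}})
in
    #"Renamed Columns"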

Step 3: Create a function

Next, create a new parameter by clicking the Manage Parameters/New Parameter button. Call the parameter Worksheet, set its data type to Text and set its current value to the name of the worksheet you chose in the previous step:

image

Now, go back to the Template query, find the step called Filtered Rows towards the beginning where you filtered down to a single worksheet, and click the gear icon next to the step to edit it:

image

Then, edit the step so that it filters on the value returned by the parameter instead of the hard-coded worksheet name you entered earlier. To do this, click on the button shown below, select Parameter and then select the Worksheet parameter in the next dropdown box along:

image

Finally, go to the Queries pane on the left-hand side of the screen, right-click on the Template query and select Create Function…

image

You’ll be prompted to give the new function a name; call it GetData:

image
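
To make it clearer what you end up with, here’s a conceptual sketch of what the GetData function does once the Worksheet parameter is wired in. The Create Function option generates and maintains this function for you from the Template query, so you won’t see code exactly like this, and the file path and column names are again placeholders from my example:

// Takes a worksheet name and returns the transformed contents of that worksheet
(Worksheet as text) as table =>
    let
        Source = Excel.Workbook(File.Contents("C:\MyData\Sales.xlsx"), null, true),
        #"Filtered Sheets" = Table.SelectRows(Source, each [Kind] = "Sheet"),
        // The Worksheet parameter replaces the hard-coded "Q1" value
        #"Filtered Rows" = Table.SelectRows(#"Filtered Sheets", each [Name] = Worksheet),
        #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows", {"Data"}),
        Data = #"Removed Other Columns"{0}[Data],
        #"Promoted Headers" = Table.PromoteHeaders(Data, [PromoteAllScalars=true]),
        #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Promoted Headers", {"Product"}, "Attribute", "Value"),
        #"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns", {{"Attribute", "Month"}, {"Value", "Sales"}})
    in
        #"Renamed Columns"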

Step 4: Invoke the function and combine the data

Finally, go back to the original worksheet-list query – the one you duplicated at the beginning of step 2. Go to the Add Column tab on the ribbon, click the Invoke Custom Function button and invoke the GetData function, passing in the contents of the Name column to the function’s only parameter:

image

Last of all, click the Expand/Aggregate button on the new column and expand the nested tables:

image

After removing any unnecessary columns, you’ll see the data from all the worksheets combined into a single table as desired:

image

Don’t forget to set the data types on each column.
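
The M behind this final query will look something like the following sketch – once more, the file path, step names and expanded column names are assumptions from my example:

let
    Source = Excel.Workbook(File.Contents("C:\MyData\Sales.xlsx"), null, true),
    #"Filtered Sheets" = Table.SelectRows(Source, each [Kind] = "Sheet"),
    // Call GetData once per worksheet, passing in the contents of the Name column
    #"Invoked Custom Function" = Table.AddColumn(#"Filtered Sheets", "GetData", each GetData([Name])),
    // Expand the nested tables returned by the function
    #"Expanded GetData" = Table.ExpandTableColumn(#"Invoked Custom Function", "GetData", {"Product", "Month", "Sales"}),
    #"Removed Other Columns" = Table.SelectColumns(#"Expanded GetData", {"Product", "Month", "Sales"}),
    // Set the data types on each column as the final step
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Other Columns", {{"Product", type text}, {"Month", type text}, {"Sales", Int64.Type}})
in
    #"Changed Type"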

You can download the Excel workbook used in this post here and the sample Power BI Desktop file here.

How The New IsAvailableInMDX Property For Analysis Services Tabular Can Reduce Memory Usage And Speed Up Processing

Following on from my post the other week about the new RowsetSerializationLimit server property, I thought it would be a good idea to write about why the new IsAvailableInMDX property (announced in the same blog post) is so important. In fact, I would say that everyone using Analysis Services Tabular 2017 (CU7 or higher) or Azure Analysis Services should spend some time investigating it because the potential benefits in terms of reduced memory usage and faster processing times are significant, especially for larger models.

First of all, what does it actually do? As the blog post says, it allows you to stop attribute hierarchies from being built on columns when you don’t need them. But what are attribute hierarchies? They are structures that are used only when you are querying your Tabular model using MDX; Excel PivotTables, for example, generate MDX queries when they are connected to Analysis Services Tabular whereas Power BI always generates DAX queries. An attribute hierarchy allows a column on a table to be used on the rows or columns axis of an MDX query, and in Excel that means you will be able to drag that field onto the rows or columns area of a PivotTable. Attribute hierarchies are used by some DAX functionality too – for example the TreatAs() function (at least for now) needs them to be present to work. Frustratingly, the DAX functionality that does need attribute hierarchies is not documented.

To give you an example, consider a Tabular model that contains a table with three columns, Product, Customer and Sales, and a measure that sums up the values in the Sales column.

image


I can query this Tabular model in Power BI, for example by creating a Matrix visualisation:

image

I can also get the same values out using an Excel PivotTable:

image

Now the reason I can create this PivotTable is that Analysis Services Tabular has created attribute hierarchies on the Customer and Product columns. However, the important thing to understand is that Analysis Services Tabular creates attribute hierarchies on every column on every table by default, including the Sales column. This allows me to create a PivotTable like this, with the distinct values from Sales on the rows of the PivotTable:

image

image

You’re probably thinking: why would I ever want to use Sales – a measure column – like this? And the answer is you probably wouldn’t, even though Tabular allows it by default. What’s more, building the attribute hierarchy for Sales makes processing slower and the resulting hierarchy uses memory, so all this comes at a cost. The IsAvailableInMDX property is therefore very useful because it allows you to stop attribute hierarchies from being built on columns like Sales where they serve no real purpose.

Unfortunately at the time of writing SSDT doesn’t allow you to set the IsAvailableInMDX property but the good news is that the latest versions of Tabular Editor do:

image

Setting IsAvailableInMDX to false for the Sales field has no impact at all in Power BI, so long as you are not using functionality like TreatAs() that needs it. In Excel, it just means that it is no longer possible to drag Sales onto rows or columns in a PivotTable – the Sales Amount measure still works:

image

As a result, there are two recommendations that can be made:

  • If you are not using any client tools that generate MDX queries (such as Excel) or you want to prevent your users from using them, and you can be sure that it does not affect any of your existing Power BI reports or DAX calculations, you can set IsAvailableInMDX to false on every column of every table
  • If you are using client tools that generate MDX you can still probably set IsAvailableInMDX to false on every measure column and not lose any important functionality

How much of an impact will doing this have on processing times and memory usage? It depends, but it could be a lot. The anecdotal evidence on Twitter is promising:

image

image

I did my own (not particularly scientific) test using a table with five million rows and ten columns, each of which contained the integers between one and five million. Here’s the M query to generate such a table without the need for an external data source:

let
    Source =
        Table.FromColumns(
            List.Repeat(
                {{1..5000000}},
                10
            )
        ),
    #"Changed Type" =
        Table.TransformColumnTypes(
            Source,
            List.Transform(
                Table.ColumnNames(Source),
                each {_, Int64.Type}
            )
        )
in
    #"Changed Type"

On my laptop, with IsAvailableInMDX set to true for all ten columns, a full process on this table took around 105 seconds and the table size reported by Vertipaq Analyzer was 381MB. After changing IsAvailableInMDX to false for all ten columns, the time for a full process went down to around 81 seconds and the table size was down to 191MB.

In summary, this is one of those seemingly obscure technical changes that turns out to be way more useful than you might think. If you test out setting IsAvailableInMDX on your Tabular model, please leave a comment letting me know what kind of impact it had!

[Thanks to Daniel Otykier for providing a lot of information for this post]

OData Performance Improvements In The June 2018 Power BI Desktop Release

In the June 2018 release of Power BI Desktop there were a number of improvements made to the way the Power Query engine handles OData data sources. You can read about them here:

https://powerbi.microsoft.com/en-us/blog/power-bi-desktop-june-2018-feature-summary/#oData

However, while testing them out, I noticed one important point that the announcement didn’t make: an existing Power Query query will only benefit from these changes if you make a small change to its M code, adding the Implementation="2.0" option to the OData.Feed() function.

Take the following M query, running on the UK Houses of Parliament OData API:

let
    Source = OData.Feed("https://api.parliament.uk/odata"),
    GovernmentOrganisation_table =
    Source{[
        Name="GovernmentOrganisation",
        Signature="table"
        ]}[Data],
    #"Expanded GroupHasPosition" = 
        Table.ExpandTableColumn(
            GovernmentOrganisation_table, 
            "GroupHasPosition", 
            {"PositionName"}, 
            {"PositionName"}),
    #"Filtered Rows" = 
        Table.SelectRows(
            #"Expanded GroupHasPosition", 
            each [PositionName] = "Chancellor of the Exchequer")
in
    #"Filtered Rows"

It queries a table called GovernmentOrganisation, expands a column called GroupHasPosition and then filters on one of the expanded columns, PositionName, to only return the rows where PositionName contains the text “Chancellor of the Exchequer”.

image

Using Fiddler in the way I describe here, I can see that when this query runs the engine first tries to fold the filter on “Chancellor of the Exchequer” and then when this request returns an error, it defaults to a very slow approach that involves making multiple requests to the API:

image

However, if you change the code above so that the OData.Feed() function uses the Implementation="2.0" option like so:

let
    Source = OData.Feed(
        "https://api.parliament.uk/odata", 
        null, 
        [Implementation="2.0"]),
    GovernmentOrganisation_table =
    Source{[
        Name="GovernmentOrganisation",
        Signature="table"
        ]}[Data],
    #"Expanded GroupHasPosition" = 
        Table.ExpandTableColumn(
            GovernmentOrganisation_table, 
            "GroupHasPosition", 
            {"PositionName"}, 
            {"PositionName"}),
    #"Filtered Rows" = 
        Table.SelectRows(
            #"Expanded GroupHasPosition", 
            each [PositionName] = "Chancellor of the Exchequer")
in
    #"Filtered Rows"

…then Fiddler shows a completely different request made:

image

In this case it does not try to fold the filter, but it at least does the expansion in a single request. The performance of the resulting query is a lot better.

It looks like all new code generated by the Power Query Editor uses the Implementation="2.0" option for OData.Feed(), so it will get the benefits described in the post on the Power BI blog. Existing M code won’t have this option set, though, so you will need to update it appropriately. As always, you should test thoroughly after making these changes to make sure your query does indeed run faster and behaves the same as it did before.

More Details On The RowsetSerializationLimit Server Property For Analysis Services

The recent blog post on the Analysis Services team blog about new memory options contained information about some very interesting functionality that has just been added. The first of the new features I decided to try was the RowsetSerializationLimit server property, which restricts the number of rows returned by a query and which is intended to stop rogue users running queries that return very large amounts of data. It certainly works but there are two things to point out about it that aren’t immediately obvious.

First of all, an example of it in action. In Azure Analysis Services, using a database that contains one very simple table with one column and 5000 rows, the following DAX query returns all 5000 rows from the table:

EVALUATE 'MyTable'

image

To change the RowsetSerializationLimit server property, open SQL Server Management Studio, right-click on your server name in the Object Explorer pane and select Properties. Then go to the General page, check the Show Advanced (All) Properties box, and you’ll see the property listed under OLAP\Query\RowsetSerializationLimit:

image

The default value is -1, meaning no limit is placed on the number of rows returned by a rowset. Changing it to 100 and then running the above query results in the following error:

image

Executing the query …
Obtained object of type: Microsoft.AnalysisServices.AdomdClient.AdomdDataReader
The maximum number of rows (100) was reached.

A query that returns fewer than 100 rows, for example

EVALUATE TOPN(99,'MyTable')

…is ok though.

The important thing to understand is that although the limit applies to both DAX and MDX queries, it only applies to results returned as rowsets (a tabular format) and not as cellsets (a multidimensional format). Most client tools that generate MDX, including Excel, use cellsets, so unfortunately this property will not apply to them.

For example, the following MDX query run from a SQL Server Management Studio MDX query window (which also returns a cellset) on the same database runs ok and returns 5000 rows:

SELECT
{[Measures].[Number Of Rows]} ON 0,
[MyTable].[Column1].[Column1].MEMBERS ON 1
FROM
[Model]


image

However the same query run from Power BI Desktop to import data from the same database:

image

…runs, but returns only 100 rows and then an error:

image

Something else to point out is that this applies to all queries that return rowsets, even Discover requests and DMV queries. As a result, setting this property to a very low value can cause problems in tools like SQL Server Management Studio: for example if you set the property to 10 and you had 11 tables in a database, you would see an error when you tried to expand the Tables node of a database in the Object Explorer pane!

Even though this property only applies to rowsets I think it’s still going to be very useful, especially in scenarios where Power BI Desktop users are importing vast amounts of data from Analysis Services and causing major problems on the server.

Line Breaks And Auto-Indent In The Power BI DAX Formula Bar

The other day I discovered something new (at least to me) while writing the DAX for a measure in Power BI Desktop: when you insert a new line in your DAX expression using SHIFT-ENTER it also auto-indents the code. I asked a few people if this was new because I was sure I hadn’t seen it before, even though I always put line breaks in my code; of course Marco had seen it and thought it had been around for a while. Anyway, Marco then commented that most people don’t know you can even put line breaks in DAX, so I thought I should write a short blog post about all this – after all, line breaks and indentation make your code much more readable.

Here’s what it looks like to write a DAX measure in Power BI Desktop using line breaks and auto-indent:

DAXIndent
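
If the animation above doesn’t come through, here’s a rough idea of what a measure entered this way might end up looking like; the measure and the measures it references are invented purely for illustration:

// measure names here are made up for illustration
Sales Growth =
DIVIDE (
    [Sales Amount] - [Sales Amount LY],
    [Sales Amount LY]
)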

To summarise:

  • Press the SHIFT and ENTER keys on the keyboard simultaneously to insert a line break in your DAX code when you are working in the Power BI DAX formula bar
  • Do this after the opening bracket of each function and after the comma that follows each function argument, and the auto-indent will kick in
  • Bonus tip: while typing DAX, if the dropdown list has the function, table, column or measure selected that you want to use, just hit the TAB key to auto-complete. If the dropdown list hasn’t selected what you are looking for and you don’t want to keep typing, use the up and down arrow keys on the keyboard to move up and down the list until you have selected what you want.

Of course there’s always Marco and Alberto’s excellent DAX Formatter service to format your DAX code but this is a lot more convenient.
