Updated Power BI Video Training Course Now Available

I’m pleased to announce that the updated version of my Power BI video training course is now available via my friends at Project Botticelli. You can find out more and purchase it here:

https://projectbotticelli.com/power-bi-desktop-and-cloud-course-video-tutorial?pk_campaign=cwblog2019&code=crossjoin

If you book using the discount code CROSSJOIN you’ll get 10% off!

I’ve recorded a lot of new content to cover all the new features that were added to Power BI in 2018, and there are also new segments on Azure Analysis Services and Power BI Premium. You can watch a few of the videos for free to get a taste of the course.

Don’t forget that my older MDX and SSAS Multidimensional courses are also still available on the same site.

ODBC Data Sources, The SqlCapabilities Option And Power Query/Power BI Data Refresh Performance

A few weeks ago I received a great tip from Simon Nuss about a Power Query feature relating to ODBC data sources that I hadn’t seen before (although it’s in Power BI Desktop and Excel 365, so it can’t be that new) and that is potentially important for the performance of data refresh and of the Power Query Editor’s preview pane. It’s only visible if you click “Advanced options” when creating your ODBC data source:

image

It’s the “Supported row reduction clauses” option:

image

What does it do? Well, first of all let’s see what happens if you don’t set it. Let’s say you have an ODBC data source that uses the Microsoft ODBC Driver for SQL Server to connect to the Adventure Works DW database. If you create a new query that uses this data source to get the contents of the DimDate table it works pretty much as you’d expect; here’s the M code:

let
    Source =
        Odbc.DataSource(
            "dsn=Test SQL",
            [HierarchicalNavigation=true]
        ),
    #"Adventure Works DW_Database" =
        Source{
            [Name="Adventure Works DW",Kind="Database"]
            }[Data],
    dbo_Schema =
        #"Adventure Works DW_Database"{
            [Name="dbo",Kind="Schema"]
            }[Data],
    DimDate_Table =
        dbo_Schema{[Name="DimDate",Kind="Table"]}[Data]
in
    DimDate_Table

image

However, if you look in Profiler when the contents of the Preview window are refreshed, you’ll see that the Power Query engine is running a query that gets the entire contents of the table:

image

Obviously, if you are connecting to a big table this could make the Power Query Editor slow. What’s more, as this blog post shows, the Power Query Editor only actually wants to display 1000 rows, and the fact that the query is requesting all the rows from the table suggests that even basic query folding is not happening on this ODBC data source.

Now let’s see what happens if you do select something from the “Supported row reduction clauses” dropdown. As the screenshot above shows, there are four values in the dropdown, relating to the form of SQL your ODBC source supports for getting only the first N rows from a table. If you don’t know which one to choose you can click the Detect button and it will select one for you. In this case I selected TOP:

image

Having done this, when you connect to the same DimDate table you’ll see that the Power Query Editor now only requests the top 1000 rows from the table:

image

Also, if you look at the M code for the query, you’ll see that the Odbc.DataSource function now has the as-yet undocumented SqlCapabilities option set in its second parameter:

Odbc.DataSource(
    "dsn=Test SQL",
    [
        HierarchicalNavigation = true,
        SqlCapabilities = [LimitClauseKind = LimitClauseKind.Top]
    ]
)

Actually, it’s not quite undocumented: it is mentioned here as something that is only available if you’re using Odbc.DataSource in a custom connector, but obviously things have changed.

Setting this option can make a big difference to the responsiveness of the Power Query Editor when you’re working with large tables and/or slow data sources – I saw this myself this week when working with one of my customers. It’s also going to be important if you use the Keep Rows/Keep Top Rows button on the ribbon or the Table.FirstN() M function; I suspect there are other, more obscure reasons why it might speed refresh up even if you aren’t filtering the rows of the table, but I’m not 100% sure.
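
To see the effect in action, here’s a sketch of the query from earlier with the SqlCapabilities option set and an illustrative Table.FirstN step added on the end (the 100-row count is just an example I’ve chosen); with LimitClauseKind set to Top I would expect the row filter to fold back to the server as a TOP clause:

let
    Source =
        Odbc.DataSource(
            "dsn=Test SQL",
            [
                HierarchicalNavigation = true,
                SqlCapabilities = [LimitClauseKind = LimitClauseKind.Top]
            ]
        ),
    #"Adventure Works DW_Database" =
        Source{[Name = "Adventure Works DW", Kind = "Database"]}[Data],
    dbo_Schema =
        #"Adventure Works DW_Database"{[Name = "dbo", Kind = "Schema"]}[Data],
    DimDate_Table =
        dbo_Schema{[Name = "DimDate", Kind = "Table"]}[Data],
    // if folding works as hoped, this asks the server for just 100 rows
    // instead of downloading the whole table and filtering locally
    Top100Rows = Table.FirstN(DimDate_Table, 100)
in
    Top100Rows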

One last thing to mention is that there is another undocumented function which, I think, is what the Detect button in the screenshots above uses to find out how to set the SqlCapabilities option. It’s called Odbc.InferOptions and it can be used like so:

Odbc.InferOptions("dsn=Test SQL")

This function returns a record containing a field called LimitClauseKind that tells you the type of limit clause that is supported:

image

image
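
Incidentally, since Odbc.InferOptions returns the LimitClauseKind value as a field in a record, it seems reasonable that you could feed that value straight into the SqlCapabilities option instead of hard-coding it – though I should stress this is a speculative sketch rather than something I’ve seen documented:

let
    // detect the row reduction clause supported by the ODBC source
    InferredOptions = Odbc.InferOptions("dsn=Test SQL"),
    Source =
        Odbc.DataSource(
            "dsn=Test SQL",
            [
                HierarchicalNavigation = true,
                // use the detected LimitClauseKind rather than hard-coding it
                SqlCapabilities = [LimitClauseKind = InferredOptions[LimitClauseKind]]
            ]
        )
in
    Source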

Here’s a list of the possible LimitClauseKind values I found using #shared:

image
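
If you want to reproduce this list yourself, a minimal sketch in a blank query might look something like this – it simply filters the names exposed by #shared, so it assumes the enumeration values are visible in your environment:

let
    // #shared is a record containing everything visible in the current environment
    SharedAsTable = Record.ToTable(#shared),
    // keep only the entries whose names start with "LimitClauseKind."
    LimitClauseKinds =
        Table.SelectRows(
            SharedAsTable,
            each Text.StartsWith([Name], "LimitClauseKind.")
        )
in
    LimitClauseKinds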

Data Privacy Settings And Data Refresh Performance In Excel And Power BI

I have known for a long time that data privacy checks can have an impact on the performance of data refresh in Power BI and Excel, but on a recent performance tuning engagement I had the chance to observe just how much of a difference changing these settings can make. Unfortunately I can’t share the M code for the queries I was working with, but the scenario is very common:

  • The only data source was a single Excel workbook with multiple worksheets
  • There were multiple Power Query/M queries reading data from these worksheets; the output of these queries was not being loaded into the Power BI dataset
  • There were further Power Query/M queries that referenced these source queries, none of which were being loaded into the Power BI dataset either; they all fed into a single query whose output was being loaded into the dataset (a minimal sketch of this pattern appears after this list)
  • Overall data volumes were very small, less than 1MB, and the final query only returned approximately 5000 rows
  • Data refresh times were exceedingly slow, even with the “allow data preview to download in the background” setting turned off, and there was a long period where the Power Query engine was stuck in the “Evaluating” phase of the query during refresh in Power BI Desktop before the dialog showed the amount of data being loaded:

    image

  • Data refresh times also varied by +/-25% from one refresh to another (I have a theory why this is happening – if I can confirm it I will blog about it)
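
To make the pattern in the bullets above concrete, here’s a minimal, hypothetical sketch of the kind of query chain involved – the file path, sheet name, column name and query names are all invented for illustration, not taken from my customer’s code:

// staging query, named "Source Data", with its load disabled
let
    Source = Excel.Workbook(File.Contents("C:\Data\Input.xlsx"), null, true),
    Sheet1 = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data],
    PromotedHeaders = Table.PromoteHeaders(Sheet1, [PromoteAllScalars = true])
in
    PromotedHeaders

// final query, which references the staging query and is loaded into the dataset
let
    Source = #"Source Data",
    FilteredRows = Table.SelectRows(Source, each [Column1] <> null)
in
    FilteredRows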

As a benchmark, here are some typical refresh timings for the same table in Power BI Desktop (December 2018 version) and Excel for Office 365 (build 11126.20118 click-to-run), with data privacy checks turned on (the default) and the data privacy level for the source Excel file set to Public:

                               Power BI Desktop   Excel
Evaluating Phase (Seconds)           122           173
Data Load Phase (Seconds)            140           113
Total Refresh Time (Seconds)         262           286

Here are the same timings with the Power Query engine set to combine data according to each file’s data privacy settings, and the data privacy checks for the current file turned off:

image

image

                               Power BI Desktop   Excel
Evaluating Phase (Seconds)            69            91
Data Load Phase (Seconds)            108           112
Total Refresh Time (Seconds)         177           203

Clearly there are some substantial query performance gains to be had by turning off data privacy checks, and these gains are made in the Evaluating phase. In this case the queries in both Power BI Desktop and Excel were around 30% faster with data privacy checks turned off.

BUT is it safe to turn off data privacy checks? I strongly advise you to read my series on data privacy settings starting here, in particular this post on disabling checks, to understand the issues involved. You should not turn off data privacy checks without understanding what they do, because you may end up sending sensitive data somewhere it shouldn’t go and maybe even breaking the law. What’s more, if you have published a Power BI dataset you cannot disable data privacy checks for scheduled refresh unless you are using the gateway in Personal mode and have configured it with the “Fast Combine” option turned on, as shown here. Therefore turning off privacy checks is only really useful for Power Query users in Excel, or for speeding up development in Power BI Desktop.

In this particular case there is only one data source, an Excel workbook (a source which does not support query folding), so as far as I can see there is no risk in turning off data privacy checks: there is no way data from this Excel workbook can be sent anywhere else. Now why couldn’t the Power Query engine work this out?

Power BI And Column Order In M Queries And Dataset Tables

The order of the columns in a table in a Power BI dataset doesn’t matter all that much, especially because the Fields pane in Power BI Desktop ignores the original column order and lists the columns in a table in alphabetical order. However, there are a few situations where it is important; for example, when you are using the DAX Union() function in a calculated table: as the documentation states, when you use Union() “Columns are combined by position in their respective tables”. You might also find it irritating if the columns you see in the Data or Relationships panes in the main Power BI Desktop window make it hard to browse the data or create relationships. You can control the order of columns in a table in the Power Query Editor, but it’s not completely straightforward to get any changes to column order you make in your queries applied to the order of the columns in a table after it has already been loaded into a dataset.

Let’s say you have an M query that returns a table with three columns, Column1, Column2 and Column3, that looks like this:

image

Here’s the M code for this query:

#table(
    type table [Column1=number, Column2=number, Column3=number],
    {{1,2,3}}
)

The output you see in the Data pane in the main Power BI window is, unsurprisingly, this:

image

However, if you subsequently edit the query above so that the same columns are present but in a different order, like so:

image

#table(
    type table [Column2=number, Column3=number, Column1=number],
    {{2,3,1}}
)

…you will find that it does not affect the order of the columns shown in the Data pane in Power BI Desktop, which stay fixed in the order they had when the table was first created.
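
As an aside, in a real-world query you wouldn’t normally rewrite a #table by hand to change column order; you’d be more likely to use the Table.ReorderColumns function. Here’s a minimal sketch using the same example table:

let
    Source =
        #table(
            type table [Column1 = number, Column2 = number, Column3 = number],
            {{1, 2, 3}}
        ),
    // move Column2 and Column3 in front of Column1
    Reordered = Table.ReorderColumns(Source, {"Column2", "Column3", "Column1"})
in
    Reordered

Either way, the change only affects the output of the query, which is exactly the problem described next.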

To get the column order change applied to the table in the dataset, you need to go back into the Query Editor, right-click on the query and deselect the Enable Load option:

image

When you do this, you get the following warning:

image

If you subsequently click Continue and then Close & Apply, the dire warnings will come true: not only will you break any visuals that use this table, but any measures or calculated columns associated with the table will be lost. If you instead click Continue and then turn the Enable Load property back on for the query, you will find that when you click Close & Apply the visuals still work and the columns have been reordered, but the measures and calculated columns are still lost. You’ll also see something that I think is a bug: there will be no data in the table in the dataset:

image

You will have to manually refresh the query in the main Power BI Desktop screen to see the data in the table:

image

Credentials, Data Privacy Settings And Data Sources In Power Query/Power BI

Recently I’ve been doing some more investigations into how data privacy settings work in Power BI. This is a subject I’ve already blogged about in great detail in a series of posts last year, but this functionality is so complex that there is always more to learn. I don’t have any profound new insights to offer; instead this blog post is a write-up of a series of experiments whose results shed light on how the process of setting data privacy levels works end-to-end.

Consider the following M query:

let
    Source = 
        Json.Document(
            Web.Contents(
                "https://data.gov.uk/api/3/action/package_search?q=cows"
                )
            ),
    result = Source[result],
    results = result[results],
    results1 = results{0},
    id = results1[id],
    output = 
        Json.Document(
            Web.Contents(
                "https://data.gov.uk/api/3/action/package_show?id=" & id
                )
            )
in
    output

The query does the following:

  • Runs a query against the UK government’s open data search API (the same API I use in this post on the RelativePath and Query options for the Web.Contents() function) to get a list of datasets related to the search term “cows” via the Package Search endpoint
  • Gets the first dataset returned by the search and finds its ID
  • Sends this ID to the Package Show endpoint in the same API to get the full JSON representation of this data set. Note that the entire URL is dynamically generated and that the Query option of Web.Contents() is not used here.

It’s a typical scenario where data privacy settings can cause problems: data from one data source, the package_search endpoint, is sent to another data source, the package_show endpoint. My series from last year on data privacy settings provides some useful background information on why this is such an important thing for the Power Query engine.

Assuming that you have never used this API before, when you try to run this query in the Power Query Editor in Power BI Desktop, you’ll see the following prompt to edit the credentials used:

image

Before you click the Edit Credentials button, there are two interesting things to point out. First, in the Query Dependencies view, you see this:

image

Notice that the Package Search endpoint is shown but not the Package Show endpoint.

Second, if you click the Data Source Settings button, you’ll see the following in the dialog that appears:

image

Not only does it show just the Package Search endpoint, there is also a warning that says:

“Some data sources may not be listed because of hand-authored queries”

This refers to the output step in the query, which calls the Package Show endpoint with the dynamically generated URL.

Closing this dialog and going back to the Query Editor, if you click the Edit Credentials button, you can set credentials for the data source (anonymous access is fine in this case). These credentials can be set at all levels in the path down to https://data.gov.uk/api/3/action/package_search.

image

Setting credentials at the level of https://data.gov.uk means you only get prompted once; however, if you select https://data.gov.uk/api/3/action/package_search from the dropdown list and click Connect, you will get prompted again to set credentials, this time with a dropdown that shows all paths down to the Package Show endpoint:

image

Assuming you set credentials at the level of https://data.gov.uk and click Connect, Fiddler shows that a call is made to https://data.gov.uk/api/3/action/package_search?q=cows (presumably to check whether the credentials entered actually work), and you move back to the Query Editor.

image

Next, in the Query Editor, you see the data privacy settings prompt:

image

Clicking Continue brings up the data privacy levels dialog:

image

You have the choice to ignore privacy levels for this file, but of course you should always try to avoid doing that. On the left-hand side you also see two dropdown boxes that both show https://data.gov.uk, with another two dropdown boxes next to them, although only the top one of these is enabled.

In the top-left dropdown box you can see all points in the path from https://data.gov.uk to https://data.gov.uk/api/3/action/package_search:

image

In the dropdown box immediately underneath you can see, for the first time, all points in the path from https://data.gov.uk to https://data.gov.uk/api/3/action/package_show:

image

If you select https://data.gov.uk in the top-left dropdown, only the top-right dropdown is enabled, and there you can set the data privacy level to Public, Organizational or Private.

image

The meanings of these three levels are described in my earlier series and here, and I won’t go into detail about them in this post. The bottom-right dropdown is disabled because, if you set a privacy level for https://data.gov.uk, all URLs that start with this path inherit this setting. This is similar, I think, to what happens with the None data privacy setting that I describe here.

Setting the data privacy level to Public on https://data.gov.uk in the top-right dropdown means the query runs successfully:

image

The expected activity is shown in Fiddler:

image

And at last the Package Show endpoint is shown in the Query Dependencies view:

image

The Data Source Settings dialog shows the same as before in the “Data sources in current file” tab, including the warning about hand-authored queries, but on the “Global permissions” tab there is now an entry for https://data.gov.uk:

image

Although you only set a privacy level for https://data.gov.uk earlier, it’s interesting to note that the entry for https://data.gov.uk/api/3/action/package_search has a privacy level set explicitly to Public and not to None:

image

Stepping back a few steps to the Privacy Levels dialog, if you set a privacy level of Private for https://data.gov.uk like so:

image

…then the query fails with the error “Formula.Firewall: Query ‘Query1’ (step ‘output’) is accessing data sources that have privacy levels which cannot be used together. Please rebuild this data combination.”:

image

From my point of view, this is the first really interesting finding: the two endpoints, https://data.gov.uk/api/3/action/package_search and https://data.gov.uk/api/3/action/package_show, are considered to be separate data sources (which tallies with what is shown in the Query Dependencies view) even though they have both inherited their data privacy level setting from https://data.gov.uk. Since they are both Private, data cannot be sent from one source to the other, hence the error.

The second interesting finding becomes apparent if you follow the steps above with a slightly different version of the original query that uses the Query option in the call to the Package Show endpoint:

let
    Source = 
        Json.Document(
            Web.Contents(
                "https://data.gov.uk/api/3/action/package_search?q=cows"
                )
            ),
    result = Source[result],
    results = result[results],
    results1 = results{0},
    id = results1[id],
    output = 
        Json.Document(
            Web.Contents(
                "https://data.gov.uk/api/3/action/package_show",
                [Query=[#"id"=id]]
                )
            )
in
    output

In this case when you look in the Data Source Settings dialog you now see both endpoints listed and you no longer see the “hand-authored queries” warning:

image

It looks like whatever method the Power Query engine uses to search for data sources inside a query is confused by dynamically generated URLs – which might also explain why data sources that use dynamic URLs can’t be refreshed after publishing.
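
For what it’s worth, here’s a sketch of how the whole query might look rewritten with a static base URL, using the RelativePath and Query options mentioned earlier:

let
    Source =
        Json.Document(
            Web.Contents(
                "https://data.gov.uk",
                [
                    RelativePath = "api/3/action/package_search",
                    Query = [q = "cows"]
                ]
            )
        ),
    // drill down to the id of the first dataset returned by the search
    id = Source[result][results]{0}[id],
    output =
        Json.Document(
            Web.Contents(
                "https://data.gov.uk",
                [
                    RelativePath = "api/3/action/package_show",
                    Query = [id = id]
                ]
            )
        )
in
    output

Written this way the base URL is static, so the engine can see both data sources at design time – which, I suspect, is also why this style of rewrite is the usual workaround for making queries with dynamic URLs refreshable after publishing.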

BI Survey 18 Highlights

Every year, in return for publicising the BI Survey (the largest survey of BI tool customers and vendors in the world) here on my blog, I get a free copy of the results and the chance to blog about some of the more interesting findings. Here are a few points that stood out for me this year in the BI Survey 18:

  • The top 5 tools acquired by respondents were Power BI, Excel, QlikView, Tableau and SSRS, in that order – so Microsoft has three of the top 5 most popular BI tools. Given that users often have trouble understanding the different types of BI that Power BI, Excel and SSRS are suited to, Microsoft’s long-standing strategy of close ties between Power BI and Excel, along with the forthcoming integration of SSRS reports into Power BI Premium, looks very wise.
  • Power BI has – unsurprisingly – one of the best reported price-to-value ratios of all BI tools. The flip side of this is that customers are less happy with the level of support that Microsoft offers; I guess this is all relative to more expensive BI tools, which need armies of people from the vendor or from a BI consultancy to implement them, and whose vendors are more likely to bend over backwards for their customers.
  • Power BI is now the number two client tool used for SSAS after SSRS, beating Excel. I find that hard to believe, but I guess it’s possible.
  • Compared to its obvious competitors, and indeed to all other BI tools, Power BI gets some very good scores. There is no clear leader (and if there were, I would be very suspicious about the survey’s methodology), but it’s clear that Microsoft is now one of the leading BI vendors and, given that it’s sustaining the levels of investment that brought it to this position, I think Power BI will continue to grow and prosper in 2019.

Complex Filter Conditions In Power BI Reports Using Visual-Level Filters And Measures

While Power BI has a lot of great functionality for filtering the data that appears in a visual using visual-level filters, there are going to be situations where you need more control over how filtering works, and in this blog post I’m going to show you how you can use visual-level filters and measures in combination to achieve this. The example I show here may be quite specific, but I think the general technique has a lot of other interesting applications.

Let’s say you have a table containing customer purchases, with potentially more than one purchase per customer, and you want to display a table in Power BI showing only the most recent purchase for each customer relative to a given date.

Using data from the Adventure Works DW database, here’s the Internet Sales table from my Power BI example dataset that contains the customer purchases. Each purchase has a Sales Order Number and a Sales Order Line Number, and there may be multiple line items in a single sales order.

image

Here’s some data from this table, filtered down to show the purchases of one customer: Abby Subram.

image

[Note: all dates shown are in dd/mm/yyyy format]

The requirement is that the report user should be able to select a date in a slicer and see the details for the most recent sales order that occurs on or before the selected date. So, for example, if the user selects 1/1/2003 then no rows should appear; if 16/6/2003 is selected then only SO50934 should appear; if 30/9/2003 is selected only the four rows relating to SO54975 (and not SO50934) should appear; if 28/6/2004 is selected only the four rows relating to SO73938 should appear, and so on.

The first step is to create a date table that contains the ‘as of’ dates the user can select from, using the disconnected slicer technique: basically, a date table that has no relationship at all to the Internet Sales table shown above.

image

Next comes the tricky part. Visual-level filters can be used to filter the contents of visuals in a Power BI report based on the value of a measure, and that measure does not need to be displayed in the report. What you therefore have to do is create a measure that returns the value 1 for data that should be displayed (in this case, the most recent sales order for the customer) and blank otherwise, then apply a visual-level filter using this measure and filter on this measure equalling 1. Marco and Alberto have a great article here going into the details of how measures behave when they are used in visual-level filters, which I recommend you read before carrying on.

For this particular scenario here’s my measure:

Is Latest SO =
VAR CurrentAsOfDate =
    SELECTEDVALUE ( 'As Of Date'[DateKey] )
VAR CurrentSODate =
    SELECTEDVALUE ( 'Internet Sales'[OrderDateKey] )
VAR CurrentCustomer =
    SELECTEDVALUE ( 'Internet Sales'[CustomerKey] )
VAR SameCustomerSOs =
    FILTER (
        ALL ( 'Internet Sales'[CustomerKey], 'Internet Sales'[OrderDateKey] ),
        'Internet Sales'[CustomerKey] = CurrentCustomer
            && 'Internet Sales'[OrderDateKey] <= CurrentAsOfDate
            && 'Internet Sales'[OrderDateKey] > CurrentSODate
    )
VAR NoLaterSOs =
    IF ( COUNTROWS ( SameCustomerSOs ) > 0, BLANK (), 1 )
RETURN
    IF ( CurrentSODate <= CurrentAsOfDate, NoLaterSOs )

The logic here is:

  • Assume that this measure will be used to filter a table visual in a Power BI report, where each row in the table visual displays data from a single row in the Internet Sales table in the dataset.
  • For each row in the Internet Sales table, find the current values for OrderDateKey and CustomerKey, as well as the date selected in the ‘as of date’ slicer, and store them in variables
  • Create a table of all the combinations of CustomerKey and OrderDateKey that exist in the Internet Sales table, ignoring the current filter context, then filter it so you get all the sales orders for the current customer where the order date is greater than the date of the current sales order and less than or equal to the selected ‘as of’ date. This is stored in the SameCustomerSOs variable.
  • If there are rows in the SameCustomerSOs table then the current row in Internet Sales should not be displayed, because it is not the most recent sales order as of the selected date; if SameCustomerSOs is empty then the current row should be displayed, because there are no later sales orders for the current customer on or before the selected ‘as of’ date.
  • Finally, return 1 only if SameCustomerSOs is empty and the current row in Internet Sales has an order date on or before the selected ‘as of’ date.

You can then create a table (or some other visual) in your Power BI report and, in the visual-level filters for that table, drag in the measure and apply a filter on the value of the measure equalling 1:

image

Here’s the result, showing the data for Abby Subram:

 

image

The one thing you do need to be careful of when using this technique is that the DAX you use in your measure must be as efficient as possible: inefficient DAX, complex filters and large data volumes will make your report unusably slow. I’m sure the DAX used in my measure above could be tuned to perform better (it currently takes around half a second to filter the full 60,000 rows in the Internet Sales table; Power BI only queries for and displays 500 rows at a time, which makes things faster), but the real point I wanted to make here is that using measures and visual-level filters in this way allows you to apply almost any complex filter condition you want to the data displayed in a visual.

You can download the sample pbix file for this post here.
