Simulating Slow Data Sources In Power BI

As a postscript to my series on Power BI refresh timeouts (see part 1, part 2 and part 3) I thought it would be useful to document how I was able to simulate a slow data source in Power BI without using large data volumes or deliberately complex M code.

It’s relatively easy to create an M query that returns a table of data after a given delay. For example, this query returns a table with one column and one row after one hour and ten minutes:

let
  Source = Function.InvokeAfter(
    () => #table({"A"}, {{1}}), 
    #duration(0, 1, 10, 0)
  )
in
  Source

Some notes:

  • I’m using #table to return the table without having to connect to a data source. More details on how to use #table can be found here.
  • The delay is achieved using the Function.InvokeAfter M function, with the amount of time to wait specified using #duration.

A more interesting problem is how to create an M query that, instead of waiting for a given duration and then returning a table immediately, returns the rows of a table one at a time with a delay between each row. Here’s a query that does that, returning ten rows one second at a time:

let
  NumberOfRows = 10,
  DaysToWait = 0,
  HoursToWait = 0,
  MinutesToWait = 0,
  SecondsToWait = 1,
  Source = #table(
    {"A"},
    List.Transform(
      {1 .. NumberOfRows},
      each Function.InvokeAfter(
        () => {1},
        #duration(
          DaysToWait,
          HoursToWait,
          MinutesToWait,
          SecondsToWait
        )
      )
    )
  )
in
  Source

Last of all, to simulate a slow SQL Server data source – not being much good at TSQL at all – I borrowed some code from this thread on Stack Overflow to create a function that returns a scalar value after a specified number of seconds:

CREATE FUNCTION [dbo].[ForceDelay](@seconds int) RETURNS int AS
BEGIN
  DECLARE @endTime datetime2(0) = DATEADD(SECOND, @seconds, GETDATE());
  WHILE (GETDATE() < @endTime)
    SET @endTime = @endTime;
  RETURN 1;
END

I showed how to call this function from Power Query using a native SQL query here.
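
For reference, here’s a minimal sketch of how the function could be called from Power Query using a native SQL query. The server and database names below are just placeholders, so treat this as illustrative rather than the exact code from that post:

let
  // Hypothetical server and database names; the SELECT blocks for ten seconds
  // on the SQL Server side before returning a single-row, single-column table
  Source = Sql.Database(
    "localhost",
    "MyDatabase",
    [Query = "SELECT [dbo].[ForceDelay](10) AS A"]
  )
in
  Source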

Power BI/Power Query Data Privacy Settings And Errors Caused By Nested Values

Over the past few years I’ve blogged and presented extensively on the subject of Power Query’s data privacy settings (see here for a post with links to all this content). I thought I knew everything there was to know… but of course I didn’t, and I’ve recently learned about an issue that can cause mysterious errors.

As always it’s easiest to show an example of how it can occur. Here’s a table of airport names taken from the TripPin public OData feed:

Note how the Location column contains nested values of data type Record, and note that I have not expanded this column.

Here’s another query with sales data for these airports coming from Excel:

Now let’s say we want to join these two queries together using a Merge operation in the Power Query Editor. When you do this, because you are combining data from two different sources and because OData supports query folding, a Merge could result in data being sent from Excel to the OData source, so you will be prompted to set data privacy levels on these sources (unless you have already done so at some point in the past).

If you set the data privacy levels to Private on each source, like so:

…you are telling Power Query that it should never send data from these sources to any other source. As a result, Power Query has to load the data from both sources, buffer that data in memory, and do the Merge inside its own engine.

When you do the Merge everything looks normal at first:

But when you expand the Airports column you’ll see that the nested values in the Location column have been turned into the text “[Record]” and as a result can no longer be expanded.

This is because Power Query has had to buffer the values in the Airports query but it is unable to buffer nested values (I wrote about this here).

There are two ways to fix this. First of all, you can change the data privacy settings or turn them off completely. I don’t recommend turning them off completely because this is only possible in Power BI Desktop and not in the Power BI Service, and even changing the data privacy settings can lead to some unexpected issues later on. For example, if you set the data privacy levels for both sources to Public like so:

…then no buffering is needed (because data can now be sent from one source to another) and the nested values in the Location field can be expanded:

…and of course you do so:

However, people always forget that you have to set your data privacy levels again after publishing your dataset to the Power BI Service. And if you or someone else subsequently sets the data privacy levels back to Private you’ll get the following error in the output of the query:

The error message here is:

“Expression.Error: We cannot convert the value “[Record]” to type Record.”

Depending on the data type of the nested field you might get:

“Expression.Error: We cannot convert the value “[Table]” to type Table.”

or

“Expression.Error: We cannot convert the value “[List]” to type List.”

…instead.

The second way to fix the problem is easier and probably safer: you just need to expand the Location column before the Merge operation instead of after it. That way there are no nested fields present when the Merge takes place so all the values can be buffered. Here’s what the Airports table looks like after the Location column has been expanded, before the Merge:

…and here’s the output of the Merge even when the data privacy levels for both sources are set to Private:
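
Here’s a rough sketch in M of what this approach looks like. The field names expanded from the Location record, the join columns and the step names are all illustrative, and Airports and Sales stand for the two queries described above:

let
  // Expand the nested Location record into ordinary scalar columns first...
  ExpandedAirports = Table.ExpandRecordColumn(
    Airports,
    "Location",
    {"Address", "City"},
    {"Location.Address", "Location.City"}
  ),
  // ...then do the Merge, so there are no nested values left when buffering happens
  Merged = Table.NestedJoin(
    Sales,
    {"Airport Name"},
    ExpandedAirports,
    {"Name"},
    "Airports",
    JoinKind.LeftOuter
  )
in
  Merged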

Handling “Select All” For Slicers Bound To Dynamic M Parameters In Power BI

Last year I blogged about handling multi-select scenarios for dynamic M parameters. Since then, support for “Select All” has been added (see here), but because a regular slicer with no items selected behaves the same as one with all items selected, some extra logic is needed to make a slicer bound to a dynamic M parameter behave like a regular slicer.

To illustrate this, I created a simple dataset with two tables in it:

  1. A DirectQuery table linked to the AdventureWorksDW DimDate table (a simple date dimension table), stored in Snowflake
  2. An Import mode table called Day Name that contains one column and seven rows containing the names of the days of the week

I then created an M parameter called DayParam and bound it to the DayName column of the Day Name table and enabled the Multi-select and Select all options:

Here’s the M code that shows how to use the DayParam parameter to filter the DimDate table on the EnglishDayNameOfWeek column, and handle the scenario where “Select All” has been selected as well as the scenario where no item has been selected:

let
  Source = Snowflake.Databases(
    "xyz.snowflakecomputing.com",
    "DEMO_WH"
  ),
  AWORKS_Database = Source
    {
      [
        Name = "AWORKS",
        Kind = "Database"
      ]
    }
    [Data],
  PUBLIC_Schema = AWORKS_Database
    {[Name = "PUBLIC", Kind = "Schema"]}
    [Data],
  DIMDATE_Table = PUBLIC_Schema
    {[Name = "DIMDATE", Kind = "Table"]}
    [Data],
  #"Filtered Rows" =
    if DayParam = null then
      DIMDATE_Table
    else if Type.Is(
      Value.Type(DayParam),
      List.Type
    )
    then
      if DayParam = {"__SelectAll__"} then
        DIMDATE_Table
      else
        Table.SelectRows(
          DIMDATE_Table,
          each List.Contains(
            DayParam,
            [ENGLISHDAYNAMEOFWEEK]
          )
        )
    else
      Table.SelectRows(
        DIMDATE_Table,
        each [ENGLISHDAYNAMEOFWEEK]
          = DayParam
      )
in
  #"Filtered Rows"

The #”Filtered Rows” step does the following:

  • If nothing is selected in the slicer then the DayParam parameter will contain a null value, and in this case no filter will be applied to the DimDate table
  • If the DayParam parameter is of type list (when there is a multi-select in the slicer) then
    • If it’s a list containing a single text value of “__SelectAll__”, the default value returned when the slicer has “Select all” selected, then no filter will be applied to the DimDate table
    • Otherwise List.Contains will be used to filter the DimDate table by all the selected values
  • Otherwise the DayParam parameter will contain a single scalar value (because a single value has been selected in the slicer) and the DimDate table will be filtered by that value

Here’s how it works:

Multi-Value Parameters In Power Query Online

There’s a nice new feature in Power Query Online (the version of Power Query used in Dataflows): parameters of type List. You can see this as a new option in the Type dropdown in the “Manage parameters” dialog:

Why is this interesting? In the past, Power Query parameters were always single values like a date or a string; now a parameter can contain multiple values.

There’s one other new feature in Power Query Online that goes along with this: In and Not In filters, which can use these new List parameters.
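
For example, here’s a minimal sketch of how a List parameter could be used in a filter with List.Contains, which is roughly the kind of M that an In filter produces. The parameter name MyListParam and the sample table are made up for illustration:

let
  // A small sample table created in-memory with #table
  Source = #table(
    {"Product"},
    {{"Apples"}, {"Oranges"}, {"Pears"}}
  ),
  // Keep only the rows whose Product value appears in the List parameter
  #"Filtered Rows" = Table.SelectRows(
    Source,
    each List.Contains(MyListParam, [Product])
  )
in
  #"Filtered Rows"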

I’m sure there are other cool things you can do with this but I’ll leave them to future blog posts.

Preserving Data Types With SQL Queries In Power Query And Power BI

My post earlier this year on enabling query folding when using SQL queries as a data source in Power Query provoked a lot of interest. This post adds one more useful detail: how to preserve the original data types of the columns in your query when using this technique with SQL Server-related sources.

Consider the DimDate table in the AdventureWorksDW2017 sample database for SQL Server:

Notice that the FullDateAlternateKey column has the data type Date.

If you connect to this table in the normal way, by selecting it in the Navigation pane when you connect to your SQL Server instance, the M code for your Power Query query will look something like this:

let
  Source = Sql.Databases("localhost"), 
  AdventureWorksDW2017 = Source
    {[Name = "AdventureWorksDW2017"]}
    [Data], 
  dbo_DimDate = AdventureWorksDW2017
    {[Schema = "dbo", Item = "DimDate"]}
    [Data]
in
  dbo_DimDate

Unsurprisingly, the FullDateAlternateKey column in the Power Query query also has a data type of Date, as indicated by the calendar icon on the left side of the column header in the Power Query Editor:

However, if you use Value.NativeQuery to run a SQL query to get the same data and set EnableFolding=true, like so:

let
  Source = Sql.Databases("localhost"), 
  AdventureWorksDW2017 = Source
    {[Name = "AdventureWorksDW2017"]}
    [Data], 
  Q = Value.NativeQuery(
    AdventureWorksDW2017, 
    "Select * From DimDate", 
    null, 
    [EnableFolding = true]
  )
in
  Q

…you’ll see that the FullDateAlternateKey column comes through as a DateTime type instead:

The same thing would happen with a column of type Time too, ie it would come through as a DateTime.

If you want the types in the output of Value.NativeQuery to match the types in the output of the first Power Query query above there’s an extra option you need to add: PreserveTypes=true.

let
  Source = Sql.Databases("localhost"), 
  AdventureWorksDW2017 = Source
    {[Name = "AdventureWorksDW2017"]}
    [Data], 
  Q = Value.NativeQuery(
    AdventureWorksDW2017, 
    "Select * From DimDate", 
    null, 
    [
      PreserveTypes = true, 
      EnableFolding = true
    ]
  )
in
  Q

In the output of this query, FullDateAlternateKey has the data type Date again:

This option is only available for the SQL Server connector (and connectors related to it) at the time of writing.

[Thanks to Curt Hagenlocher for this information]

Connecting To REST APIs With OAuth2 Authentication In Power Query/Power BI

There are a lot of articles and blog posts out there on how to handle OAuth2 authentication when connecting to REST APIs from Power Query in Power BI. However there is also a lot of confusion and contradictory information, so in this post I want to give you the definitive, Microsoft-endorsed answer to this question, which is:

If you want to connect from Power BI to a REST API that uses OAuth2 authentication then you need to build a custom connector. You can find documentation on how to implement an OAuth2 flow in a custom connector here.

The only exception is that you can connect to some APIs that use AAD authentication using the built-in web or OData connectors, as documented here.

A quick web search will turn up several examples of how to implement an OAuth2 credential flow in regular Power Query queries without needing a custom connector. This is not recommended: it’s not secure and it’s not reliable. In particular, hard-coding usernames/passwords or client ids/client secrets in your M code is a really bad idea. What’s more, requesting a new token every time a query runs isn’t great either.

Unfortunately Excel Power Query doesn’t support custom connectors at the time of writing. Also, if you use a custom connector in the Power BI Service then you’ll need to use an on-premises gateway. Finally, there’s an article here explaining why it isn’t easy to connect Power BI to the Microsoft Graph API.

[Thanks to Curt Hagenlocher and Matt Masson for the information in this post]

Dynamic M Parameters, Snowflake Native SQL And Paginated Reports

There were a few new features and enhancements to existing features in the June 2021 Power BI Desktop release that don’t seem to have much to do with each other but which I think can be combined to do cool things. They are:

  1. The new paginated report visual
  2. Native SQL support in the Snowflake connector
  3. Improvements to dynamic M parameters

Let me give you an example of what I mean…

First of all, let’s start with native SQL support in the Snowflake connector. I deal with a lot of customers who use Snowflake and Power BI together and I know just how much people have wanted this. What does it allow you to do? Well, you have always been able to use the Power Query Editor to transform data coming from Snowflake in either Import mode or DirectQuery mode. Now, though, you can write your own native SQL query and use it as the source for a Power Query query (something that has always been possible with some other connectors, such as the SQL Server connector). Incidentally, this also means that the EnableFolding=true option for Value.NativeQuery that I blogged about recently now works for Snowflake too.

The main reason you’d want to use a native SQL query when connecting to Snowflake, or indeed any database, is to do something that’s possible in SQL but not in Power Query. One example of this is using regular expressions to filter data. I have the AdventureWorks DW DimCustomer table loaded into Snowflake and I can use Snowflake’s REGEXP function to filter on the LASTNAME column with something like this:

SELECT 
DISTINCT FIRSTNAME, LASTNAME, ENGLISHOCCUPATION 
FROM "AWORKS"."PUBLIC"."DIMCUSTOMER" 
WHERE LASTNAME REGEXP 'To.*'

So that’s useful. I can use a query like this as the source of a table in DirectQuery mode in Power BI, but wouldn’t it be useful if end users of my report could change the regular expression used to filter the data? This is where dynamic M parameters come in. Assuming I have a table of pre-defined regular expressions:

And an M parameter:

…I can write an M query like this that uses the M parameter to return the regular expression used in the WHERE clause of the SQL query:

let
  Source = Value.NativeQuery(
    Snowflake.Databases(
      "mysnowflake.com", 
      "DEMO_WH"
    ){[Name = "AWORKS"]}[Data], 
    "SELECT DISTINCT FIRSTNAME, LASTNAME, ENGLISHOCCUPATION 
    FROM ""AWORKS"".""PUBLIC"".""DIMCUSTOMER"" 
    WHERE LASTNAME REGEXP '"
      & pRegEx
      & "'", 
    null, 
    [EnableFolding = true]
  )
in
  Source

…and then turn this into a dynamic M parameter in the Power BI diagram pane:

…and get a report that does this:

One limitation of dynamic M parameters in regular Power BI reports today is that the values you pass into them have to come from a column somewhere inside your dataset, so all of these values have to be pre-defined. Wouldn’t it be useful if the end user could enter any regular expression that they wanted though? That may not be possible in a regular Power BI report, but it is possible with a paginated report: in a paginated report you can write whatever DAX query you want – and therefore pass any value you want to a dynamic M parameter – and you also have the option of creating parameters where the user can enter whatever value they want.

I blogged about how to write DAX queries that contain dynamic M parameters here. Here’s an example of a parameterised DAX query (yes, I know, so many types of parameters…) that takes a regular expression and the name of an occupation and returns a table of customers whose last names match the regular expression and whose occupations match the one entered:

DEFINE
    MPARAMETER pRegEx = @DAXRegExParam
EVALUATE
FILTER (
    Customers,
    'Customers'[ENGLISHOCCUPATION] = @DAXOccupationParam
)

This can be used in a paginated report dataset connected to the Power BI dataset created above (yes, I know, so many types of datasets…) like so:

….which can then be used to build a paginated report that does this:

And of course, with the new paginated report visual, this paginated report can be embedded in a regular Power BI report:

All this is very much a proof-of-concept and not something I would recommend for production (I would be worried about SQL injection attacks for a start). There are more enhancements to these features still to come too. However, I do think it’s interesting to see how these features can be put together now and to imagine how they could be used in the future. What do you think?

Data Type Conversions For SQL Server Sources And Query Folding In Power Query

It’s surprisingly easy to stop query folding happening in Power Query by changing the data type of a column. This is mentioned in the docs here, and it’s something several people have blogged about already (for example here). However there is something new to note: an option that will allow you to convert text columns to number or date columns in a foldable way for SQL Server data sources.

Consider the following table in a SQL Server database that consists of a single nvarchar(50) column containing numeric values:

Here’s an M query that converts this column into a numeric column and which folds:

let
  Source = Sql.Databases(
    "localhost",
    [UnsafeTypeConversions = true]
  ),
  FoldingTest1 = Source
    {[Name = "FoldingTest"]}
    [Data],
  dbo_NumberFoldingTest = FoldingTest1
    {
      [
        Schema = "dbo",
        Item   = "NumberFoldingTest"
      ]
    }
    [Data],
  #"Added Custom" = Table.AddColumn(
    dbo_NumberFoldingTest,
    "ConvertedNumber",
    each Number.From([NumberAsText]),
    Int64.Type
  )
in
  #"Added Custom"

Here’s the output of the query, where a new custom column called ConvertedNumber contains the converted numeric values:

Here’s the resulting SQL generated by Power Query:

select [_].[NumberAsText] as [NumberAsText],
    convert(float, [_].[NumberAsText]) as [ConvertedNumber]
from [dbo].[NumberFoldingTest] as [_]

There are three important things to point out about the M query above:

  1. I have set the (relatively new) UnsafeTypeConversions property on the Sql.Databases function to true
  2. In the custom column I have used the Number.From function to convert the text in the NumberAsText column to numbers
  3. I have used the optional third parameter of Table.AddColumn to set the data type of the new custom column to the Int64 type

All three of these things are necessary to get a properly typed numeric column in your Power Query query – if you vary from this too much then folding won’t happen.

It’s also possible to use this technique to convert text to datetime values. Here’s another SQL Server table, this time with dates stored in an nvarchar(50) column:

Here’s another M query that does the conversion and folds:

let
  Source = Sql.Databases(
    "localhost",
    [UnsafeTypeConversions = true]
  ),
  FoldingTest = Source
    {[Name = "FoldingTest"]}
    [Data],
  dbo_DateFoldingTest = FoldingTest
    {
      [
        Schema = "dbo",
        Item   = "DateFoldingTest"
      ]
    }
    [Data],
  #"Added Custom" = Table.AddColumn(
    dbo_DateFoldingTest,
    "ConvertedDate",
    each DateTime.From([DateAsText]),
    type datetime
  )
in
  #"Added Custom"

And here’s the resulting SQL:

select [_].[DateAsText] as [DateAsText],
    convert(datetime2, [_].[DateAsText]) as [ConvertedDate]
from [dbo].[DateFoldingTest] as [_]

Why, you ask, is this new property on Sql.Databases called “UnsafeTypeConversions”? As the name suggests, it allows you to do something that is potentially unsafe. Consider this SQL Server table that has an nvarchar(50) column containing some numeric values and one non-numeric value:

If you connect to this table and set the data type on this column to be Whole Number using the dropdown in the column header (the normal way to change the data type of a column), something like the M code below will be generated:

let
  Source = Sql.Databases("localhost"),
  FoldingTest = Source
    {[Name = "FoldingTest"]}
    [Data],
  dbo_NumberFoldingErrorsTest
    = FoldingTest
    {
      [
        Schema = "dbo",
        Item = "NumberFoldingErrorsTest"
      ]
    }
    [Data],
  #"Changed Type"
    = Table.TransformColumnTypes(
    dbo_NumberFoldingErrorsTest,
    {{"MixedTextNumbers", Int64.Type}}
  )
in
  #"Changed Type"

Here’s the output of this query:

Note how this query returns four rows and the third row contains the error value shown.

If, however, you try to use the UnsafeTypeConversions approach here using something like the following M:

let
  Source = Sql.Databases(
    "localhost",
    [UnsafeTypeConversions = true]
  ),
  FoldingTest = Source
    {[Name = "FoldingTest"]}
    [Data],
  dbo_NumberFoldingErrorsTest
    = FoldingTest
    {
      [
        Schema = "dbo",
        Item = "NumberFoldingErrorsTest"
      ]
    }
    [Data],
  #"Added Custom" = Table.AddColumn(
    dbo_NumberFoldingErrorsTest,
    "ConvertedToNumber",
    each Number.From([MixedTextNumbers]),
    Int64.Type
  )
in
  #"Added Custom"

You get the following result:

Notice now that there is an error value in both columns and, more importantly, only three rows are returned – the fourth has been lost. So, if you are going to use the UnsafeTypeConversions option you need to be 100% sure that it will work and that you don’t have any data quality problems.

[Thanks to Curt Hagenlocher for the information in this post]

A Look At Lobe – A Free, Easy-To-Use Tool For Training Machine Learning Models

A few months ago I heard about a new tool from Microsoft called Lobe which makes it easy to train machine learning models. It’s nothing to do with Power BI but I find anything to do with self-service data analytics interesting, and when I finally got round to playing with it today I thought it was so much fun that it deserved a blog post.

You can download it and learn more at https://www.lobe.ai/ and there’s a great ten minute video describing how to use it here:

The most impressive thing about it is not what it does but how it does it: a lot of tools claim to make machine learning easy for non-technical users but Lobe really is easy to use. My AI/ML knowledge is very basic but I got up and running with it extremely quickly.

To test it out I downloaded lots of pictures of English churches and trained a model to detect whether the church had a tower or a spire. After I labelled the pictures appropriately:

…Lobe was able to train the model:

I could test it inside the tool. The model was able to tell whether a church had a tower:

…or a spire:

…very reliably!

If I have one criticism it’s that when you want to use your model things get a lot more technical, at least compared to something like AI Builder for Power Apps and Power Automate, but I guess that’s because it is just a tool for training models. There have been some recent improvements here though (see this blog post) and Lobe does provide a local API for testing purposes that can be consumed in Power BI with some custom M code.

Here’s an example of how to call the local API in Power Query:

let
  Source = Folder.Files("C:\Churches"),
  #"Removed Other Columns"
    = Table.SelectColumns(
    Source,
    {"Content", "Name"}
  ),
  #"Added Custom" = Table.AddColumn(
    #"Removed Other Columns",
    "CallAPI",
    each Text.FromBinary(
      Web.Contents(

        //Insert Lobe Connect URL here                              
        "http://localhost...",
        [
          Content = Json.FromValue(
            [
              image = Binary.ToText(
                [Content],
                BinaryEncoding.Base64
              )
            ]
          ),
          Headers = [
            #"Content-Type"
              = "application/json"
          ]
        ]
      )
    )
  ),
  #"Parsed JSON"
    = Table.TransformColumns(
    #"Added Custom",
    {{"CallAPI", Json.Document}}
  ),
  #"Expanded CallAPI"
    = Table.ExpandRecordColumn(
    #"Parsed JSON",
    "CallAPI",
    {"predictions"},
    {"predictions"}
  ),
  #"Expanded predictions"
    = Table.ExpandListColumn(
    #"Expanded CallAPI",
    "predictions"
  ),
  #"Expanded predictions1"
    = Table.ExpandRecordColumn(
    #"Expanded predictions",
    "predictions",
    {"label", "confidence"},
    {"label", "confidence"}
  ),
  #"Pivoted Column" = Table.Pivot(
    #"Expanded predictions1",
    List.Distinct(
      #"Expanded predictions1"[label]
    ),
    "label",
    "confidence",
    List.Sum
  ),
  #"Changed Type"
    = Table.TransformColumnTypes(
    #"Pivoted Column",
    {
      {"Tower", type number},
      {"Spire", type number}
    }
  ),
  #"Removed Columns"
    = Table.RemoveColumns(
    #"Changed Type",
    {"Content"}
  )
in
  #"Removed Columns"

You can export models to a variety of other places for production use, including Azure Functions and Azure Machine Learning.

Definitely something to keep an eye on, especially because it will soon be able to do object detection and data classification as well as image classification.

Optimising The Performance Of Combining Data From Multiple Parquet Files In Power Query/Power BI

In all the testing I’ve done recently with importing data from Parquet files into Power BI I noticed something strange: loading data from a folder containing multiple Parquet files seemed a lot slower than I would expect, based on the time taken to load data from a single file. So I wondered – is there something that can be optimised? It turns out there is and in this blog post I’ll show you what I did.

If you import data from a folder containing Parquet files – whether it’s a local folder or a folder in ADLSgen2 storage – you’ll see a series of queries created for you in the Power Query Editor window that looks like this:

The query called Query1 shown in the screenshot iterates over all the files in the folder you’ve chosen and calls a function that reads the data from each Parquet file. It returns a table that contains a column with the name of the original source file in (which isn’t all that interesting for Parquet files) and all the columns from the Parquet files you’re combining.

Using the Parquet files from my series of posts on importing data from ADLSgen2 as a source, here’s the M code Power Query generates for this query, which I have modified to remove the column with the source file name in:

let
  Source = Folder.Files("C:\MyFolder"),
  #"Filtered Hidden Files1"
    = Table.SelectRows(
    Source,
    each [Attributes]?[Hidden]? <> true
  ),
  #"Invoke Custom Function1"
    = Table.AddColumn(
    #"Filtered Hidden Files1",
    "Transform File (3)",
    each #"Transform File (3)"(
      [Content]
    )
  ),
  #"Renamed Columns1"
    = Table.RenameColumns(
    #"Invoke Custom Function1",
    {"Name", "Source.Name"}
  ),
  #"Removed Other Columns1"
    = Table.SelectColumns(
    #"Renamed Columns1",
    {"Transform File (3)"}
  ),
  #"Expanded Table Column1"
    = Table.ExpandTableColumn(
    #"Removed Other Columns1",
    "Transform File (3)",
    Table.ColumnNames(
      #"Transform File (3)"(
        #"Sample File (3)"
      )
    )
  ),
  #"Changed Type"
    = Table.TransformColumnTypes(
    #"Expanded Table Column1",
    {
      {"TransDate", type date},
      {"GuestId", type text},
      {"ProductId", type text},
      {"NetAmount", type number}
    }
  )
in
  #"Changed Type"

Here’s the output:

On my PC this query took an average of 102 seconds to refresh.

Apart from this query being slower than I expected, I also noticed that there is a “Changed Type” step at the end – which I thought was unnecessary because, unlike CSV files, Parquet has typed columns. If you connect to a single Parquet file in Power Query then it recognises the column types, so why not here? Well, it’s because of the way the files are combined by expanding table columns, and there is a way to work around this that I blogged about here:
https://blog.crossjoin.co.uk/2017/09/25/setting-data-types-on-nested-tables-in-m/

Setting a type on the table column before expanding it did indeed improve performance, but this led me to another optimisation.

I know that using the Table.Combine M function can perform differently to the Table.ExpandTableColumn function used in the original version of the query (although it does not always perform better). Therefore I made the following change to the query above: using Table.Combine to return a single table with all the data in (note that setting a type on the table column is not necessary for this optimisation). Here’s the new version:

let
  Source = Folder.Files("C:\Myfolder"),
  #"Filtered Hidden Files1"
    = Table.SelectRows(
    Source,
    each [Attributes]?[Hidden]? <> true
  ),
  #"Invoke Custom Function1"
    = Table.AddColumn(
    #"Filtered Hidden Files1",
    "Transform File",
    each #"Transform File"([Content])
  ),
  #"Renamed Columns1"
    = Table.RenameColumns(
    #"Invoke Custom Function1",
    {"Name", "Source.Name"}
  ),
  #"Removed Other Columns1"
    = Table.SelectColumns(
    #"Renamed Columns1",
    {"Source.Name", "Transform File"}
  ),
  Combine = Table.Combine(
    #"Removed Other Columns1"[
      Transform File
    ]
  )
in
  Combine

This version of the query took, on average, 43 seconds to refresh – a massive improvement.

If you’ve been following my series on ADLSgen2 refresh you may remember that I blogged about importing from a folder of Parquet files there too: in this post I noted that it took on average 72 seconds to load the same data from an ADLSgen2 folder in the Power BI Service using the original code; that was with the Source File column in, and removing that column made no difference to performance. This new version of the query took on average 49 seconds.

The conclusion is obvious: if you need to load data from a folder of Parquet files then you should use this new approach because the performance benefits are substantial. I know what you’re thinking: does this technique work for other file types apart from Parquet, like CSV? Unfortunately the answer is no, because those file types don’t have typed columns in the way Parquet does.
