Loading Data From Multiple Excel Workbooks Into Power BI–And Making Sure Data Refresh Works After Publishing

I can hear you yawning already – yet another blog post on getting data from multiple Excel workbooks in Power Query and Power BI. Just about everyone who has ever written a blog post on Power BI has written about this subject, including me. However there’s a twist this time: what if your Excel workbooks are stored in SharePoint or OneDrive For Business? If they are, then your dataset may not refresh successfully after you have published unless you load your data in a particular way.

Some background first. A few weeks ago I was contacted by a reader who had seen my post on data refresh errors and the Web.Contents() function and was experiencing the same issue when using Web.Contents() to get data from multiple Excel workbooks stored in SharePoint. Up until recently the Web.Contents() function – which is used by the From Web option in Power BI’s Get Data experience – was the only way to get data reliably from single Excel files stored in SharePoint or OneDrive For Business. However the limitations on Web.Contents(), M functions and data refresh described in my blog post meant that it wasn’t possible to use Web.Contents() to get data from multiple Excel files stored in SharePoint or OneDrive For Business.

The April 2016 Power BI Desktop update introduced a new way of getting data from Excel files stored in SharePoint: the SharePoint Files data source, based on the SharePoint.Files() M function. Both Mike Carlo and Ken Puls have already blogged about this in depth and so I won’t repeat what they’ve written; their posts have a lot of good information on how to construct the URLs to find your files in SharePoint. How do you use it to combine data from multiple Excel workbooks though?

Imagine you have four identically-structured Excel workbooks stored in a folder in OneDrive For Business:

image

Each one has a table called Table1 with some sales data in it:

image

In Power BI Desktop, create a new query and click the Get Data button. In the Get Data dialog, go to Files and click on SharePoint Folder:

image

Enter the URL for your OneDrive For Business site:

image

You’ll then see all the files in all your folders in OneDrive For Business:

image

Filter the folders in the Folder Path column so you only see the files in the folder containing your Excel workbooks:

image

Right-click on the Content column and select Remove Other Columns to get rid of all but the Content column. Then click the Add Custom Column button and add the following expression:

Excel.Workbook([Content])

image

This tells Power BI to treat each file in this folder as an Excel Workbook. Click OK, and then right-click on the Content column and select Remove (you won’t need this any more). Next, click on the Expand icon in the top right-hand corner of the Custom column and click OK on the flyout:

image

image

This will give you a table listing all of the contents of each workbook:

image

Filter this table so you only have the tables called Table1 (one from each of the four workbooks):

Next, right-click on the Data column and select Remove Other Columns, then click on the Expand icon again and click OK:

image

You’ll now have a table containing all of the data from the tables called Table1 in each workbook in the folder:

image
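If you want to see the whole thing in one place, here is a rough sketch of what the finished query might look like in the Advanced Editor. The site URL, the folder name and the column names Month, Product and Sales are all assumptions used for illustration; substitute your own values:

let
    // Hypothetical OneDrive For Business URL - replace with your own
    Source = SharePoint.Files("https://mycompany-my.sharepoint.com/personal/chris_mycompany_com"),
    // Keep only the files in the folder containing the Excel workbooks
    FilteredRows = Table.SelectRows(Source, each Text.Contains([Folder Path], "/ExcelSalesData/")),
    RemovedOtherColumns = Table.SelectColumns(FilteredRows, {"Content"}),
    // Treat each file as an Excel workbook
    AddedCustom = Table.AddColumn(RemovedOtherColumns, "Custom", each Excel.Workbook([Content])),
    RemovedContent = Table.RemoveColumns(AddedCustom, {"Content"}),
    // Expand the list of objects (sheets, tables, named ranges) found in each workbook
    ExpandedCustom = Table.ExpandTableColumn(RemovedContent, "Custom",
                      {"Name", "Data", "Item", "Kind", "Hidden"}),
    // Keep only the tables called Table1
    FilteredTables = Table.SelectRows(ExpandedCustom, each [Name] = "Table1" and [Kind] = "Table"),
    RemovedOtherColumns2 = Table.SelectColumns(FilteredTables, {"Data"}),
    // Expand the rows of data; the column names here are assumptions
    ExpandedData = Table.ExpandTableColumn(RemovedOtherColumns2, "Data", {"Month", "Product", "Sales"}),
    // Set explicit data types (columns come through as type any by default)
    ChangedType = Table.TransformColumnTypes(ExpandedData,
                    {{"Month", type text}, {"Product", type text}, {"Sales", type number}})
in
    ChangedType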

Don’t forget to set appropriate data types for each column (columns will have the data type Any by default, which will be treated as text later on)! You can now build your report and publish it:

image

In order for refresh to work, you’ll need to configure the credentials used by the Power BI service to connect to your data. In the browser, click on the ellipses for the Dataset for your report and select Schedule Refresh:

image

This will take you to the Datasets tab on the Settings page. You can schedule refresh here if you want, but the important thing is to click the Edit Credentials link:

image

If, like me, you have an Office 365 subscription and use SharePoint Online, then in the dialog choose OAuth2 in the Authentication Method dropdown box and click Sign In:

image

You’ll see the Windows Organizational Account sign-in page appear briefly while you are signed in but you shouldn’t need to do anything. And that’s it!

You can now edit the data in any of your Excel workbooks and, once the dataset has refreshed, those changes will show up in the report. If you want to do a manual refresh of the data, clicking the Refresh button at the top of the report won’t do any good; you have to click on the Refresh Now option on the dataset (seen in the screenshot above, just below the Schedule Refresh option).

Using The CubeSet() Function to Combine Selections From Multiple Slicers in Excel Power Pivot Reports

Since my last post on the Excel cube functions proved to be very popular, I thought I’d write a follow-up that shows off some more little-known but interesting functionality: how to use the CubeSet() function to combine selections from multiple slicers.

The Problem

Consider the following Power Pivot model based on data from the Adventure Works DW database:

image

The Internet Sales table contains one row per sales transaction; Product and Date are dimension tables that provide more information on the product bought in the transaction and the date the order was taken on. There’s one measure called Distinct Customers that returns the distinct count of customers; its definition is as follows:

Distinct Customers:=DISTINCTCOUNT('Internet Sales'[CustomerKey])

With this model you could build the following simple report by converting a PivotTable to cube functions:

image

Obviously this will tell you how many distinct customers bought the products selected in the slicer each year. You could also duplicate the slicer and the cube functions and have two reports side-by-side so you could compare the number of customers that bought two different groups of products:

image

The question here is, though, how can you create a third report that shows the number of distinct customers that bought any of the products selected across both slicers? So, using the example above, the report on the left shows the number of customers that bought Mountain-200 Black 38, Mountain-200 Black 42 or Mountain-200 Black 46; the report on the right shows the number of customers that bought Mountain-200 Silver 38, Mountain-200 Silver 42 or Mountain-200 Silver 46; the aim is to create a third report that shows the number of distinct customers that bought Mountain-200 Black 38, Mountain-200 Black 42, Mountain-200 Black 46, Mountain-200 Silver 38, Mountain-200 Silver 42 or Mountain-200 Silver 46 without having to select all those products in a third slicer.

The Solution

Step 1

The first thing to point out is that you can use the CubeSet() function to return the set of items selected in an Excel slicer. This is a widely-known technique – Rob Collie has a great post describing it here. What I didn’t realise until recently though was that one CubeSet() function can point to other cells containing CubeSet() functions and union all the items returned by those CubeSet() functions. This allows you to combine the selections made in multiple slicers, assuming that those slicers refer to the same field in your Power Pivot model. Here are the formulas in three cells that do this:

image

In cell I2 the formula uses the CubeSet() function to capture the selection made in the left-hand slicer in the report above, captioned Product Selection A, which has the name Slicer_Product. In cell I3 there is a similar formula to capture the selection made in the slicer on the right in the report, captioned Product Selection B, which has the name Slicer_Product1. The formula in I4 then uses a third CubeSet() function to combine the sets returned by the last two formulas into a third set.
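The formulas themselves are hard to read in the screenshot, but as a rough sketch (assuming the default Power Pivot connection name ThisWorkbookDataModel, and the slicer and cell names described above) they would look something like this:

In I2:  =CUBESET("ThisWorkbookDataModel", Slicer_Product, "Product Selection A")
In I3:  =CUBESET("ThisWorkbookDataModel", Slicer_Product1, "Product Selection B")
In I4:  =CUBESET("ThisWorkbookDataModel", (I2, I3), "Combined Selection")

The third formula simply passes the two cells containing the other CubeSet() formulas as its set expression, which unions their contents.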

Step 2

Now that you have a single CubeSet() formula in I4 that combines the selections in both slicers, you can use the trick I showed in my last post and reference that cell in CubeValue() formulas to get the number of distinct customers that bought any of the products listed in either slicer:

image
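As a hedged reconstruction of what that formula might look like (the measure name comes from the model above, while the year's unique name is a guess at what the Date table exposes), the CubeValue() formula for 2003 could be something like this, with the combined set in I4 passed in as one of the member expressions:

=CUBEVALUE("ThisWorkbookDataModel", "[Measures].[Distinct Customers]", "[Date].[Year].&[2003]", $I$4)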

Here’s what the report looks like at this point:

image

Step 3

Notice that, thanks to the power of Power Pivot, customers are not double-counted in the new combined selection report on the right-hand side. For example, in 2003, 841 customers bought one or more products selected in the slicer Product Selection A, while 796 customers bought one or more products selected in the slicer Product Selection B. 841+796=1637, but the combined selection report correctly states that only 1588 distinct customers bought products listed in either Selection A or Selection B. We can use these numbers to do one final, useful calculation: 1637-1588=49, so there were 49 customers who must have bought products in both Selection A and Selection B. The formula to do this is pretty straightforward and looks like this:

=IFERROR(($C19+$G19)-$J19, 0)

image

image

You can download the sample workbook for this post here.

Creating OR Filters With Cube Functions For Power Pivot And Analysis Services Reports In Excel

The Excel Cube Functions are incredibly powerful, and I’m still amazed at the kind of problems they can solve. This post describes how they can be used to build a report with a complex OR filter using data from Power Pivot (it’s equally applicable to SSAS) that shows a number of advanced uses of these functions. No knowledge of MDX or DAX is needed but if you’re new to the Excel Cube Functions I recommend that you watch this video of a presentation by Peter Myers, which provides an excellent introduction to them.

The Problem

Imagine you’ve got a very simple Power Pivot model that looks like this:

image

There’s a Date table with dates, months and years in it, and an Internet Sales table with sales data and a measure called Sales Amount. Here’s what the data looks like in a PivotTable:

image

Now, imagine that you want a report with the Sales Amount measure on columns and Years on rows, and you want to filter the data so that you only see values for Mondays in July or Wednesdays in September. Using the Fields, Items and Sets functionality you could filter the data to only show the day/month combinations you need for each year, but since you can’t put a named set into the Filter area of a PivotTable you would have to use Excel formulas to sum up the combinations to get the totals you need:

image

Luckily it is possible to build the report you need using the Cube Functions! Here’s how:

Step 1: Build Your Combinations Using CubeMember()

The first thing to point out is that the CubeMember() function does not have to return just a single member: it can return a combination of members (in MDX this is known as a tuple). These combinations can be built in several ways, one of which is by using cell references to other cells that themselves contain CubeMember() functions. It’s probably easier to explain this by showing a worksheet that contains six cells with CubeMember() functions in. Here it is with the formulas visible:

image

Cells B5 and B10 contain references to days of the week; cells B6 and B11 contain references to months. Cells B7 and B12 contain CubeMember() functions that return the combinations we want to filter by: Mondays in July and Wednesdays in September respectively.
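A sketch of what those six formulas might look like follows. As in the CubeSet() formula shown in Step 2 below, B2 is assumed to hold the connection name, and the hierarchy and member names are illustrative guesses for this model:

B5:   =CUBEMEMBER($B$2, "[Date].[Day Name].[Monday]")
B6:   =CUBEMEMBER($B$2, "[Date].[Month].[July]")
B7:   =CUBEMEMBER($B$2, (B5, B6))
B10:  =CUBEMEMBER($B$2, "[Date].[Day Name].[Wednesday]")
B11:  =CUBEMEMBER($B$2, "[Date].[Month].[September]")
B12:  =CUBEMEMBER($B$2, (B10, B11))

The formulas in B7 and B12 build the tuples simply by referencing the cells containing the individual members.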

Here’s what the formulas above return:

image

You’ll notice that the ‘combination’ cells only show the month names, not the day/month combinations – this is just a feature of the CubeMember() function and can be a bit misleading, but rest assured they do return the combinations you need.

Step 2: Build A Set Using CubeSet()

The CubeSet() function is also able to build sets using cell references to cells containing CubeMember() functions. In this case I want a set containing the two ‘combination’ CubeMember() functions from B7 and B12. I can do this by using the formula:

=CUBESET($B$2,($B$7,$B$12), "The set of combinations")

Here are the formulas on the worksheet at this point:

image

And here’s the output:

image

Step 3: Reference The CubeSet() Function In Your Report

Now you have a CubeSet() function that returns the two day/month combinations, you can use this in a cube function report. When you reference a cell containing the CubeSet() function in a CubeValue() formula, the CubeValue() formula will return the aggregated value of all of the combinations in the CubeSet(). So for example, here’s a report with the Sales Amount measure on columns, Years on rows, and displaying the Sales Amount for each year filtered by the two day/month combinations:

image 
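As a rough sketch of the formulas behind that report (assuming the CubeSet() formula from Step 2 sits in cell B15, and guessing at the year member names), each value cell is a CubeValue() formula that references both the year member and the set, for example:

=CUBEVALUE($B$2, "[Measures].[Sales Amount]", "[Date].[Year].&[2003]", $B$15)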

And here’s the actual output:

image

Compare the numbers from the report at the bottom with the values calculated from the PivotTable in the screenshot earlier in this post, and you’ll see that we have indeed shown just the combined Sales Amount for Mondays in July and Wednesdays in September, broken down by Year.

You can download the example Power Pivot workbook for this post here.

Excel 2016 PivotTable MDX Changes Lead To Big Query Performance Gains

Yesterday, the Excel team announced a bunch of performance improvements in Excel 2016 for PivotTables connected to Analysis Services (Multidimensional or Tabular), Power Pivot and Power BI. If you haven’t read the official blog post already, here it is:

https://blogs.office.com/2016/07/07/faster-olap-pivottables-in-excel-2016/

In that post, Microsoft point out that how much of a performance increase you get will depend on a number of factors. I guess they have to do this to manage expectations. However I’m going to come right out and say that these changes are probably the most important thing that has happened in the last five years for Analysis Services or Power Pivot query performance and if you are using Excel PivotTables with SSAS or Power Pivot for reporting, you should upgrade to Excel 2016 (you’ll need the Office 365 click-to-run version) just for these changes.

The key improvement that Microsoft have made here is to stop Excel bringing back unnecessary subtotal values in the MDX queries that it generates. This has been a gigantic problem for years and several people have blogged about it: I did here, and among other people Rui Quintino and Richard Lees have posts on the subject that are worth reading. A lot of my consultancy work involves SSAS and MDX performance tuning and I think I see around five to ten customers every year who are seriously affected by this – and I’m sure this is just the tip of the iceberg. Even if you don’t think you are affected, you may have users who are putting up with slow refresh times for Excel-based reports without your knowledge.

Let’s go into some technical detail, see what the original problem was and what has changed in Excel 2016. I’m going to provide some examples using Excel 2013/2016 against SSAS Multidimensional but as I said the same problems occur (although less severely) with SSAS Tabular and Power Pivot.

Consider the following Excel 2013 PivotTable connected to the Adventure Works cube:

image

image

It’s typical of the kind of thing an end user wants to build, and there are two things to point out:

  1. Notice there are three fields that have been placed on the Rows axis of the PivotTable: the Gender and Marital Status hierarchies from the Customer dimension, and the Category hierarchy from the Product dimension. It’s pretty common for users to put many more fields together on an axis when they want to display detailed data.
  2. The user has also turned off grand totals and subtotals in the PivotTable so that only the detailed data is shown. This often happens in combination with (1) because if the user only wants to see the detailed data the subtotals and grand totals are a distraction.

In this case the PivotTable layout has been switched to Tabular and the Repeat All Item Labels option is selected to make the PivotTable look like a table, but these are just aesthetic changes.

This PivotTable contains 12 rows, but if you look at the MDX query that is generated by Excel 2013 to populate it, it returns 36 rows of data (actually the MDX in this case returns 36 columns, but that’s a quirk of Excel) because the subtotals and grand total that Excel is not displaying are still being returned. Here’s the query:

SELECT 
NON EMPTY 
CrossJoin(
CrossJoin(
Hierarchize(
{DrilldownLevel(
{[Customer].[Gender].[All Customers]}
,,,INCLUDE_CALC_MEMBERS)}), 
Hierarchize(
{DrilldownLevel({[Customer].[Marital Status].[All Customers]}
,,,INCLUDE_CALC_MEMBERS)})), 
Hierarchize(
{DrilldownLevel({[Product].[Category].[All Products]}
,,,INCLUDE_CALC_MEMBERS)})) 
DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME 
ON COLUMNS  
FROM 
[Adventure Works] 
WHERE ([Measures].[Internet Sales Amount]) 
CELL PROPERTIES VALUE, FORMAT_STRING, 
LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS

And here’s a sample of what it returns, with the unwanted values highlighted:

image

You can probably guess that these unwanted subtotals and grand totals make the query slower, and the more fields you put together on the rows or columns of a PivotTable, the greater the number of subtotals/grand totals returned and the slower things get. PivotTables connected to SSAS or Power Pivot often become completely unusable with more than six or seven fields put together on the same axis, and the only workarounds before Excel 2016 are either to write custom MDX in a named set (I show how to do this here) or, if you are using SSAS Multidimensional, to try using member properties instead of separate fields (as Richard Lees shows here), but neither is really a great alternative.

Building the same PivotTable using Excel 2016, however, gives the following MDX:

SELECT 
NON EMPTY 
CrossJoin(
CrossJoin(
Hierarchize(
{[Customer].[Gender].[Gender].AllMembers}), 
Hierarchize(
{[Customer].[Marital Status].[Marital Status].AllMembers})), 
Hierarchize(
{[Product].[Category].[Category].AllMembers})) 
DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME 
ON COLUMNS  FROM [Adventure Works] 
WHERE ([Measures].[Internet Sales Amount]) 
CELL PROPERTIES VALUE, FORMAT_STRING, 
LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS

Which returns just the data needed for the PivotTable:

image

What has changed? Focusing on just the Gender hierarchy, the difference between the two queries is that in Excel 2013 the selection on Gender is given using the DrillDownLevel() function on the All Member of the hierarchy. Isolated and put in its own query, the set expression used looks like this:

SELECT 
{[Measures].[Internet Sales Amount]}
ON COLUMNS,
DrilldownLevel(
{[Customer].[Gender].[All Customers]}
,,,INCLUDE_CALC_MEMBERS)
ON ROWS
FROM
[Adventure Works]

image

Notice how the All Member, used by Excel to return subtotals and grand totals, is returned by the set. Excel 2016 instead just asks for the members on the Gender level of the Gender hierarchy, not including the All Member. Again, put inside its own query, you can see what it returns:

SELECT 
{[Measures].[Internet Sales Amount]}
ON COLUMNS,
{[Customer].[Gender].[Gender].AllMembers}
ON ROWS
FROM
[Adventure Works]

image

Apart from the fact that no All Member is returned, it’s also a lot cleaner and easier to read.

I’ve been very excited about this change since I first heard it was happening, and I think it will benefit a lot of people. A big thanks to Alexander Lahuerta and the rest of the Excel team for making this happen!

Generating Fixed-Width Text Files In Excel With Power Query

While it’s fairly common to need to load fixed-width files using Power Query or Power BI (and there’s a nice walkthrough of how to do this here), occasionally you might want to use Power Query and Excel to create a fixed-width output for another system, or maybe to create some test data. You might not want to do it often but I can imagine that when/if Power Query is integrated into SSIS this will be a slightly less obscure requirement; at the very least, this post should show you how to use a couple of M functions that are under-documented.

Before we begin, a quick tip: when working with any fixed-width file always check the Monospaced box on the View tab in the Query Editor window, to make it easier to judge the widths. You’ll notice I’ve done this in all the screenshots in this post.

image

For source data, let’s take the following expression which returns a table:

#table(
type table[Month=text,Product=text,Sales=text,Comments=text],
{
{"Jan","Apples","1","Not good"},
{"Feb","Pears","2","Better"},
{"March","Oranges","50000","Great!"}
})


image

Of course this could be any table of data that you choose, it’s just that using an expression like this makes the examples easier to follow. The only thing to notice here is that all the columns have a data type of text, even the Sales column: if your table doesn’t, I suggest you convert all the data types to text before you start – it will save you having to do it later.

The next thing you’ll need to do is to create a custom column that returns a list containing all the values from each of the columns from the current row. The expression to use here is:

Record.FieldValues(_)

For more information about what the underscore means here, see my previous post; basically it refers to the values in the current row.

Finally, you need to concatenate all the values in this list (which should all be text already, of course) into a single piece of fixed-width text. To do this we’ll need to use a Combiner function – a function that returns a function that, in turn, combines multiple pieces of text together in a specific way. I’ve already blogged about the Combiner.CombineTextByDelimiter() function here but in this case we’ll be using Combiner.CombineTextByRanges() which is a little more complex. You can’t use Combiner.CombineTextByRanges() to combine text directly; instead, you have to call it to get a function that combines text in the way you want and then use this function on your data. To do this you need an expression like:

Combiner.CombineTextByRanges({{0,3},{9,10},{19,10},{29,255}})

How does this work? For each line, remember, we now have a value of type list that contains all of the values from the four columns in our table. The expression above returns a function that takes a list and constructs a single line of text from it. The list passed to the first parameter consists of a series of lists of two integers, and each value pair gives the position to place each value on the line and the number of characters to allow. So, in this example, the first value in the input list is put at position 0 on the line and is given a space of 3 characters; the second value in the input list is put at position 9 and given a space of 10 characters; the third value in the input list is put at position 19 and given a space of 10 characters; and the fourth value in the input list is put at position 29 and given a space of 255 characters.
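To make the position/width pairs more concrete, here is a minimal standalone sketch that applies the combiner function directly to the list of values from the first row of the sample table:

let
    // Returns a function that writes each item in a list at a fixed position and width on a line
    MyCombinerFunction = Combiner.CombineTextByRanges({{0,3},{9,10},{19,10},{29,255}}),
    // "Jan" is placed at position 0, "Apples" at position 9, "1" at position 19, "Not good" at position 29
    Example = MyCombinerFunction({"Jan","Apples","1","Not good"})
in
    Example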

This function can now be called in a second custom column to get the  combined text for each line. Here’s the full M code for the query:

let
    Source = #table(
              type table
              [Month=text,Product=text,Sales=text,Comments=text],
              {
              {"Jan","Apples","1","Not good"},
              {"Feb","Pears","2","Better"},
              {"March","Oranges","50000","Great!"}
              }),
    ListOfValues = Table.AddColumn(
                    Source, 
                    "List Of Values", 
                    each Record.FieldValues(_)),
    MyCombinerFunction = Combiner.CombineTextByRanges({
                          {0,3},
                          {9,10},
                          {19,10},
                          {29,255}
                          }),
    OutputColumn = Table.AddColumn(
                    ListOfValues, 
                    "Output Column", 
                    each MyCombinerFunction([List Of Values]))
in
    OutputColumn

And here’s the output:

image

One thing to notice: on the third line, the month name March has been truncated to Mar because we only specified a space of three characters for the month name.

image

There’s an optional second parameter to Combiner.CombineTextByRanges() that also needs some explanation. By default, the function that Combiner.CombineTextByRanges() returns will place your text on an otherwise blank line. However you can get it to place your text on a line that contains other characters apart from blank space. For example, if you alter the MyCombinerFunction step in the query shown above to be:

    MyCombinerFunction = Combiner.CombineTextByRanges({
                          {0,3},
                          {9,10},
                          {19,10},
                          {29,255}
                          }
                          ,Text.Repeat("#",285)
                         ),

You get:

image

The Text.Repeat() function, as used in the expression above, returns a piece of text with the # character repeated 285 times, and that piece of text is used as the ‘background’ on which the values are superimposed.

Of course, now you’ve got your desired output all you need to do to create a fixed width file is to remove all other columns apart from Output Column in your query, load your query to a table in the Excel worksheet, turn off the header row for the table by unchecking the Header Row option on the ribbon:

image

…and then save the workbook in a .txt file format.

You can download the sample workbook for this query here.

Finding All Selected Items In A Slicer In Excel 2016 Using TextJoin()

When you are using slicers with an Excel PivotTable it’s often useful to be able to get a comma-delimited list of the items selected in that slicer for use in a report title. It’s not easy to do though, and in fact this is one of those topics that lots of people have blogged about over the years: here’s my MDX approach, here’s Erik Svensen’s post on using the new DAX ConcatenateX() function, and there are also posts by Rob Collie like this one. None of these techniques are ideal though: my personal favourite is the ConcatenateX() approach, but that only works with SSAS Tabular 2016 (and then only if you can create a measure on the model) or Power Pivot in Excel 2016, and not at all if you’re using SSAS Multidimensional or earlier versions of SSAS Tabular.

However, after discovering the new TextJoin() function in Excel 2016 the other week I realised that this would provide yet another way to solve this problem. Here’s a simple example using a PivotTable and slicer connected to a Power Pivot model:

image

The highlighted cell F3 showing a comma-delimited list of all the items selected in the slicer has the following Excel formula:

=TEXTJOIN(
    ", ",
    TRUE,
     IFERROR(
      CUBERANKEDMEMBER(
        "ThisWorkbookDataModel",
        Slicer_Product,
        ROW(
         INDIRECT("1:"&CUBESETCOUNT(Slicer_Product))
        )
       ), 
      "")
     )

Important: this needs to be entered as an array formula, so instead of hitting Enter after typing in the formula you need to hit Ctrl+Shift+Enter. You’ll see the formula surrounded by braces {} in the formula bar when you do this:

image

image

This formula relies on the fact that the selection in a slicer (in the example above the slicer has the name Slicer_Product) can be treated the same as the output of the Excel CubeSet() function, which means that you can use the CubeSetCount() function to find the number of items selected and the CubeRankedMember() function to get the name of any single item in the selection. It also uses the Row()/Indirect() trick described here to create an array of numbers from 1 to the number of items selected in the slicer, which in turn provides the rank values to pass to the CubeRankedMember() function.
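To see what the individual building blocks return on their own (using the same connection name and slicer name as above), consider these two formulas entered in ordinary cells:

=CUBESETCOUNT(Slicer_Product)

=CUBERANKEDMEMBER("ThisWorkbookDataModel", Slicer_Product, 2)

The first returns the number of items currently selected in the slicer; the second returns the name of the item at rank 2 within that selection. The array formula above simply calls CubeRankedMember() once for every rank from 1 up to the value returned by CubeSetCount() and joins the results together with TextJoin().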

The beauty of this approach is that it works for Power Pivot and all versions of SSAS Tabular and Multidimensional, and doesn’t require any measures to be created on your models/cubes. It even works in Excel Online, so it will work inside Power BI, although it doesn’t seem to be possible to create array formulas in Excel Online yet so you need to create the formula on the desktop before you deploy. Of course you need the latest build of Excel 2016 for all this to work, and at the time of writing most people don’t have Excel 2016 and even if they do they probably won’t have a build (Version 16.0.6568.2025 or higher) with TextJoin() in it yet. But this will be a great solution in the distant future when everyone has Excel 2016, I promise!

You can download the sample Excel 2016 workbook here.

I also have to acknowledge the help of David Hager in writing this formula – we had a conversation about how TextJoin() behaves in array formulas in the comments of my earlier post and in doing so he provided the basic approach for me.

Working With CSV Files That Contain Rogue Line Breaks In Power Query And Power BI

CSV files often contain text fields, and text fields often contain line breaks that shouldn’t be there (especially if the text is entered by end users). For example, here are the contents of a simple CSV file that should contain three fields (Product, Comment and Sales) and six rows:

Product,Comment,Sales
Apples,This is a commment,10
Oranges,Another comment,20
Pears,A comment with a
line break in,30
Grapes,A normal comment,40
Bananas,A comment
with
two line breaks,50
Pineapples,A normal comment,60

However, as you can see, the Comment field contains line breaks in some cases. If you try to load this file into Power Query/Power BI, you’ll see that it doesn’t handle things well:

image

The easiest way of solving this problem is to make sure you have double quotes around the text in the Comment field, so that Power Query/Power BI knows to ignore the line breaks:

Product,Comment,Sales
Apples,"This is a commment",10
Oranges,"Another comment",20
Pears,"A comment with a
line break in",30
Grapes,"A normal comment",40
Bananas,"A comment
with
two line breaks",50
Pineapples,"A normal comment",60

image

[By the way, no text in the screenshot above has been truncated. The line breaks are still there in the Comment field; you can see the full text in the bottom pane of the Query Editor by clicking on a cell in the table.]

But what if you can’t fix the source data? I was asked this question the other week, and since I had been asked about it before and not come up with a good answer, I decided to spend some time researching the problem.

What I found was that it was relatively easy to write some M code that gave me the correct results, but very hard to write code that performed acceptably well on a large data set (I was testing on a CSV file containing almost half a million rows). Here’s the code for the function I ended up with:

(FileName as text, DelimiterCharacter as text, DelimitersPerLine as number, UseFirstRowAsHeaders as logical) =>
let
    // Load file and buffer in memory
    BufferedBinary = Binary.Buffer(File.Contents(FileName)),
    // Convert to a single block of text
    Source = Text.FromBinary(BufferedBinary),
    // Find the position of all line breaks
    LineBreaks = List.Buffer(Text.PositionOf(Source, "#(cr,lf)", Occurrence.All)),
    // Get a list of numbers from 0 to the number of line breaks-1
    LinePositions = List.Positions(LineBreaks),
    // Count the number of line breaks
    TotalLines = List.Count(LineBreaks),
    // Split the text by line breaks
    FirstSplit = Text.Split(Source, "#(cr,lf)"),
    // On each line, count the number of column delimiter characters
    DelimiterCounts = List.Buffer(
                           List.Transform(
                            LinePositions, 
                            each List.Count(Text.PositionOf(FirstSplit{_}, 
                            DelimiterCharacter, 
                            Occurrence.All)))),
    // Loop through each line and
    // a) Calculate a running total of the number of column delimiter characters on each line
    // b) Divide the running total by the expected number of delimiters per line, and round down to the nearest integer
    RunningSums = List.Buffer(
                           List.Generate(
                            ()=>[Counter=0, RunningTotal=0], 
                            each [Counter]<TotalLines, 
                            each [Counter=[Counter]+1, RunningTotal=[RunningTotal]+DelimiterCounts{[Counter]}],
                            each Number.IntegerDivide([RunningTotal]+DelimiterCounts{[Counter]},DelimitersPerLine))),
    // If the result of the division calculated in the previous step is
    // the same as on the previous line, then the line break can be ignored
    // so return null, otherwise return the position of the line break
    FindFullLines = List.Transform(LinePositions, each if _=0 or RunningSums{_}<>RunningSums{_-1} then LineBreaks{_} else null),
    // Remove nulls from the list
    RemoveNulls = List.RemoveNulls(FindFullLines),
    // Split the original text by the positions of the line breaks left
    LineSplitterFunction = Splitter.SplitTextByPositions(List.Combine({{0},RemoveNulls})),
    ActualLines = LineSplitterFunction(Source),
    // Convert the result to a table
    ConvertToTable = Table.FromList(ActualLines, Splitter.SplitTextByDelimiter(DelimiterCharacter), null, null, ExtraValues.Ignore),
    // Use the first rows as headers if desired
    PromoteHeaders = if UseFirstRowAsHeaders then Table.PromoteHeaders(ConvertToTable) else ConvertToTable
in
    PromoteHeaders

Assuming that you called this function LoadFile(), here’s an example of how to call it:

LoadFile(
//File name
"C:\MyFile.csv", 
//Column delimiter character - in this case a comma
",", 
//The expected number of commas to be found per line
//ie one less than the number of columns
2, 
//If true then first row contains header values
true)

The comments in the code should give you an idea of how it all works. A few other things to note:

  • If any of your lines contain extra columns, this approach won’t work
  • I’ve noticed a difference between how Power BI Desktop and Excel 2016 behave with this code: the current Excel 2016 Query Editor will keep any line breaks after splitting on them, whereas Power BI Desktop does not. It’s easy to fix the Excel output though: just use the Clean transform to remove the line breaks (see the sketch below).
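As a minimal sketch of that last fix (assuming the query above is called Output and the column containing the stray line breaks is called Comment – both names are hypothetical), the Clean transform generates M along these lines:

CleanedText = Table.TransformColumns(
                Output,
                {{"Comment", Text.Clean, type text}})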

I’m not really sure why it was so hard to come up with code that performed well, even after liberal use of List.Buffer(). It seemed like any time I referenced the original text (as returned by the Source step) inside the List.Generate() function, performance died – so you’ll notice in the code above that I’m only referencing the list that contains the number of delimiters found per line.