“In the Previous” Date Filters In Power BI/Get&Transform/Power Query

The Query Editor in Power BI/Excel Get&Transform/Power Query has a number of built-in ways to filter data in date columns relative to the current date, such as the “In the Previous” option. However, these filters behave in a way I find non-intuitive (and I’m not alone) and it’s not obvious how to get the behaviour I think most people would expect. In this post I’ll show you what the built-in relative date filters actually do and how you can change them to do something more useful.

Let me give you a simple example. Imagine you’re using the following table of dates (in DD/MM/YYYY format) in an Excel table as a data source:

image

Now, let’s also assume that today’s date is January 8th 2018 and you only want to load data from the last six months. If you load the data into Power BI in a new query:

image

…and then click on the dropdown menu in the top right-hand corner of the Date column (highlighted), you can select Date Filters/In the Previous:

image

…and then set up the filter for “Keep rows where ‘Date’ is in the previous 6 months” as shown here:

image

…you get the following table back:

image

Six out of the seven dates in the original table are returned, but not the six I would expect. Remember that today’s date is January 8th 2018, and notice that January 1st 2018 is not present in the filtered table and July 1st 2017 is present! I don’t know about you, but I would say that January 1st 2018 should be considered as being “in the previous 6 months” and July 1st 2017 should not be.

The reason this happens is that the M code generated by the UI uses the Date.IsInPreviousNMonths function, so the filter returns all the dates that fall in the six whole months before the month that today’s date is in. Hmmmmm.
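To make that concrete, the filter step the UI generates looks something like this (just a sketch; the ChangedType step name comes from my query and the exact code in yours may differ slightly):

FilteredRows = 
	Table.SelectRows(
		ChangedType, 
		//keeps any date in the six whole calendar months before the current month
		each Date.IsInPreviousNMonths([Date], 6)
	)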

In many cases you can get a “last six months” filter of the type I would expect quite easily, by altering the filter dialog shown above so that it filters by the last 5 months and adding an Or condition that also filters by the current month, like so:

image
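Behind the scenes, the filter step generated by this dialog combines the two conditions with an or, something like this (again a sketch, reusing the step names from the query above):

FilteredRows = 
	Table.SelectRows(
		ChangedType, 
		//dates in the previous five whole months, plus any date in the current month
		each Date.IsInPreviousNMonths([Date], 5) or Date.IsInCurrentMonth([Date])
	)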

This returns the following table:

image

You’ll see now that January 1st 2018 is present and July 1st 2017 is not. However you will need to be careful with this: if your source data contains dates that are after today’s date but still in the current month, these dates will now also be included! For example, if the source data is changed to include a new row for January 31st 2018:

image

This new filter will include January 31st 2018 because it is in the same month as today’s date:

image

What if you want to exclude dates that are after today but in the current month? This is where things get tricky, and where you’ll need to write some M code. Let’s imagine that you want to get all the dates that fall in the range from July 9th 2017 (the day after the date that is six months before today) to January 8th 2018 (today). You can do this by editing the original query as follows:

let
    Source = 
	Excel.Workbook(
		File.Contents("C:\SixMonths.xlsx"), 
		null, 
		true
	),
    Source_Table = 
	Source{[Item="Source",Kind="Table"]}[Data],
    ChangedType = 
	Table.TransformColumnTypes(
		Source_Table,
		{{"Date", type date}}
	),
    EndDate = 
	Date.From(DateTime.FixedLocalNow()),
    StartDate = 
	Date.AddDays(Date.AddMonths(EndDate,-6),1),
    FilteredRows = 
	Table.SelectRows(
		ChangedType, 
		each [Date]>=StartDate and [Date]<=EndDate
	)
in
    FilteredRows

In this query, the EndDate step returns today’s date using DateTime.FixedLocalNow(), the StartDate step returns the day after the date that is six months before today’s date, and the FilteredRows step filters the dates so that only those that occur between StartDate and EndDate are returned. And yes, I checked, if you do this with a SQL Server data source then query folding does occur.
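To see why this gives the range I want, here’s what those two steps evaluate to when today’s date is January 8th 2018 (a worked example only, not part of the query above):

let
	//what Date.From(DateTime.FixedLocalNow()) returns on January 8th 2018
	EndDate = #date(2018, 1, 8),
	//going back six months gives July 8th 2017, and adding one day gives July 9th 2017
	StartDate = Date.AddDays(Date.AddMonths(EndDate, -6), 1)
in
	StartDate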

With this query, you finally get the dates you’d expect from your filter:

image

To be honest, though, I don’t think it should be this hard!

Removing Punctuation From Text With The Text.Select M Function In Power BI/Power Query/Excel Get&Transform

A new, as-yet undocumented, M function appeared in the December 2017 release of Power BI Desktop (I assume it will appear in Excel soon): Text.Select. Here’s the documentation from the Query Editor:

 TextSelect

It’s very easy to use: the first parameter takes a text value, the second parameter takes either a text value containing a single character or a list of single characters, and it returns the text from the first parameter minus all the characters that are not in the second parameter. For example, the expression:

Text.Select("Hello", "l")

…returns the text value “ll”:

image

…and the expression:

Text.Select("Hello", {"H","e","o"})

…returns the text value “Heo”:

image

There are a lot of scenarios where Text.Select will be useful, and the one that I immediately thought of was to remove punctuation from text. In one of my earliest M posts on this blog I used Text.Remove to do this while trying to find Shakespeare’s favourite words, but the problem with this approach is that you have to explicitly specify all the characters you want to remove from your text – and there could be a lot of characters that need to be excluded. Text.Select is a much better option here because it allows you to specify the characters you want to keep.
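To see why that’s painful, here’s what the Text.Remove approach looks like for even a short piece of text (a sketch, using the same sample sentence as later in this post):

Text.Remove("Hi! Stop, please. What is your name?", {"!", ",", ".", "?"})

This only works because I know the text contains just those four punctuation characters; any character I forget to list stays in the output.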

The first step to doing this is to understand how to construct the list of characters you do want to keep. You can do this very easily in M when declaring a list, using the range technique I blogged about here, so you should read that post before carrying on. The following expression returns a list containing all the uppercase and lowercase letters of the alphabet plus a space:

List.Combine({{"A".."Z"},{"a".."z"},{" "}})

image

Of course depending on the scenario or language you’re working with you may want to include other characters, for example apostrophes or letters with accents, too. Here’s a slightly more complex example of how this list can be used with Text.Select:

let
    SourceText = "Hi! Stop, please. What is your name?",
    CharsToInclude = List.Combine({{"A".."Z"},{"a".."z"},{" "}}),
    RemovePunc = Text.Select(SourceText, CharsToInclude)
in
    RemovePunc

The query above takes the text “Hi! Stop, please. What is your name?” and returns the text “Hi Stop please What is your name”.

image
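If you also need to keep apostrophes, for example, you can simply extend the list of characters passed to Text.Select (a sketch; accented letters or any other characters your data needs can be added in the same way):

List.Combine({{"A".."Z"},{"a".."z"},{" "},{"'"}})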

Finally, because I couldn’t read my old M code without cringing a little bit, here’s an updated version of my query that gets the top 100 words from the Complete Works Of Shakespeare (direct from the Project Gutenberg website):

let
  URL = "http://www.gutenberg.org/cache/epub/100/pg100.txt",
  Source = Text.FromBinary(Web.Contents(URL)),
  Lowercase = Text.Lower(Source),
  RemovePunctuation = Text.Select(Lowercase, 
	List.Combine({{"a".."z"},{" "}})),
  SplitText = Splitter.SplitTextByWhitespace(QuoteStyle.None),
  SplitIntoWords = SplitText(RemovePunctuation),
  RemoveBlanks = List.Select(SplitIntoWords, each _<>""),
  TableFromList = Table.FromColumns({RemoveBlanks},
	type table [Word=text]),
  FindWordCounts = Table.Group(
	TableFromList, 
		{"Word"}, 
		{{"Count", each Table.RowCount(_), type number}}),
  SortedRows = Table.Sort(
	FindWordCounts,
	{{"Count", Order.Descending}}),
  KeptFirstRows = Table.FirstN(SortedRows,100)
in 
  KeptFirstRows

Here they are as a word cloud (yes I know it’s not good dataviz practice, but it’s for fun):

image

You can download the .pbix file with this example here.

BONUS FACT: another new M function appeared recently too: Function.From. You can read all about it on this thread on the Power Query forum.

Making Sure All Columns Appear When You Combine Data From Multiple Files In Power BI/Power Query M

Here’s a really common problem that occurs when combining data from multiple files, or indeed any type of data source, in Power BI/Power Query/Excel Get&Transform. Imagine you have a folder containing two Excel files, and each Excel file contains a table called SalesTable:

image

image

image

If you use the “From Folder” data source to combine all the data from all the Excel files in this folder, you get a table like this:

image

…and you’re happy. Then, at some later date a third file is added to the folder that has an extra column in its SalesTable table called Comments:

image

You refresh your query, though, and you don’t see the Comments column anywhere:

image

Why not? If you look at the query that has been generated and go back to the “Removed Other Columns1” step, you’ll see a table with a column containing nested table values:

image

…and you’ll also see that the next step in the query, “Expanded Table Column1”, uses the Table.ExpandTableColumn function – the M function that gets called if you click the Expand/Aggregate button in the column header highlighted in the previous screenshot – to flatten these nested tables out. And the problem is that Table.ExpandTableColumn needs to know in advance the names of the columns you want to expand.
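To make the problem concrete, the generated step looks something like this (a sketch; the “Data” column name and the “Product” and “Sales” column names are hypothetical, and your query will contain whatever column names existed when the step was created):

#"Expanded Table Column1" = 
	Table.ExpandTableColumn(
		#"Removed Other Columns1", 
		"Data", 
		//this hard-coded list is the problem: the new Comments column isn't in it
		{"Product", "Sales"}, 
		{"Product", "Sales"}
	)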

Now this is an extremely common, and powerful, Power Query/M pattern. Apart from the “From Folder” functionality for automatically combining data from multiple files, it’s something I find myself building manually all the time: write a function (for example, one that makes a single call to a web service), create a table containing one row for each call to the web service that I want to make, use the Invoke Custom Function button to call the function for each row, and then expand to get all the data from all the function calls. And the more I use this pattern, the more I run into situations where I don’t see columns I’m expecting to see because I’ve done an Expand in an earlier step that has a hard-coded list of column names in it (it’s a very similar problem to the one that Ken Puls blogged about here). It’s a pain to have to keep changing this list, and the real problem comes when you don’t know in advance what the names of the columns to expand are.

One solution would be to do something similar to what I show in this post: iterate through all the tables in the table column, find a distinct list of column names, and then use this list with Table.ExpandTableColumn. However, there is an easier way to handle this: use Table.Combine instead of Table.ExpandTableColumn. The great thing about Table.Combine is that it will always return all of the columns from all of the tables it’s combining.
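Here’s a trivial illustration of that behaviour (a sketch with made-up table and column names): only the second table has a Comments column, but the combined table still has all three columns, with nulls where a table didn’t supply a value.

let
	Table1 = #table(type table [Product = text, Sales = number], {{"Apples", 1}}),
	Table2 = #table(type table [Product = text, Sales = number, Comments = text], {{"Oranges", 2, "New"}}),
	//Table.Combine returns every column from every table being combined
	Combined = Table.Combine({Table1, Table2})
in
	Combined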

Here’s a function that shows how it can be used:

(TableColumn as list, optional SourceNameColumn as list) =>
let
  AddIDs = 
    if 
    SourceNameColumn=null 
    then 
      TableColumn 
    else 
      let
        ZipNames = 
          List.Zip({TableColumn, SourceNameColumn}),
        AddColumnFunction = 
          (ListIn as list) => 
          Table.AddColumn(ListIn{0}, "Source", each ListIn{1}),
        AddColumns = 
          List.Transform(ZipNames, each AddColumnFunction(_))
      in
        AddColumns,
  Combine = Table.Combine(AddIDs)
in
  Combine

This function takes a list of tables and, optionally, a list of text values containing a name for each table (this optional parameter accounts for the majority of the code – without it, all you would need is the Combine step). If you paste this code into a new query called, say, CombineTables, you can either call it by adding some M code to an existing query (there’s a sketch of this at the end of this post) or, more easily, call it directly from the UI. In the latter case, when you click on the function query in the Query Editor window you’ll see this:

image

Assuming you already have a query like the one shown above that contains a column with table values in it and another column containing the original Excel file names, you need to click the Choose Column button for the TableColumn parameter and select the column that contains the table values in the dialog that appears:

image

…and then do the same thing for the SourceNameColumn parameter:

image

…and then click the Invoke button back in the Query Editor, and you’ll get a table containing all of the data from the SalesTable table in each workbook, including the Comments column from the third Excel workbook:

image

With no hard-coded column names you’ll now always get all of the data from all of the columns in the tables you’re trying to combine.
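Alternatively, if you’d rather call the function from M instead of through the Invoke button, the call looks something like this (a sketch; the step and column names here are hypothetical and will depend on your own query):

CombinedTables = 
	CombineTables(
		//the column containing the nested table values
		#"Removed Other Columns1"[Data],
		//the column containing the source file names
		#"Removed Other Columns1"[Source.Name]
	)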

Power BI Video Training Now Available

If you can’t make it to one of my classroom-based courses in London next year, I’m pleased to announce that my new Power BI video training course is now available via my friends at Project Botticelli. It covers the following topics:

  • What is Power BI? (Free)
  • Connecting to Data Sources
  • Data Import Concepts
  • Transforming Data with the Query Editor
  • Advanced Data Loading
  • Modelling Data
  • Basic DAX
  • Power BI Desktop vs Excel
  • Cloud and Desktop Power BI Dashboards and Reports
  • Building Reports in Excel
  • Data Refresh (Free)
  • Administration and Auditing (Free)

Several of the shorter videos are free to view. Full details can be found here:

https://projectbotticelli.com/power-bi-desktop-and-cloud-course-video-tutorial?pk_campaign=tt2017a

Other courses available on the site include my MDX and SSAS Multidimensional courses, plus Marco Russo’s DAX course and many others.

If you’re quick, you can use the code CHRIS2017CYBER15 to get a 15% discount on subscriptions before Tuesday November 28th 2017!

In-Person Power BI And Azure Analysis Services Training In London

I’m pleased to announce two in-person training dates in London next year.

First of all, I’ll be running a three-day “Introduction to Power BI” course from January 29th to January 31st 2018. It’s suitable for BI pros, analysts and anyone who needs to use Power BI to build reports; I’ll be covering data loading, data modelling, a bit of M, a lot of DAX, report design, publishing, security and administration. Full details and registration can be found here:

http://technitrain.com/coursedetail.php?c=84&trackingcode=CWB

Not long after, on February 22nd 2018, I’ll be teaching a full-day preconference seminar at SQLBits in London on Azure Analysis Services. The agenda can be found here:

http://sqlbits.com/information/event17/Introduction_to_Azure_Analysis_Services/trainingdetails.aspx

…but, to be honest, there’s likely to be a lot of cool new functionality released for Azure SSAS between now and then so that will all have to be fitted in too. SQLBits is, of course, the biggest SQL Server/Microsoft data platform conference in Europe, ridiculously good value for money and loads of fun. Pricing and registration details can be found here:

http://sqlbits.com/information/pricing

UPDATE: the SQLBits team wanted me to make a video to promote my precon, so I outsourced the job to my daughter. Here’s the result:

The Extension.Contents() M Function

Following on from my post last week about M functions that are only available in custom data extensions, here’s a quick explanation of one of those functions: Extension.Contents().

Basically, it allows you to access the contents of any file you include in the .mez file of your custom data connector. Say you have a text file called MyTextFile.txt:

image

If you create a new Power BI Custom Data Connector project using the SDK, you can add this file to your project in Visual Studio like any other file:

image

Next, select the file and in the Visual Studio Properties pane set the Build Action property to Compile:

image

Setting this property means that when your custom data connector is built, this file is included inside it (the .mez file is just a zip file – if you unzip it you’ll now find this file inside).

Next, in the .pq file that contains the M code for your custom data connector, you can access the contents of this file as binary data using Extension.Contents("MyTextFile.txt"). Here's an example function for use in a custom data connector that does this:

[DataSource.Kind="ExtensionContentsDemo", 
Publish="ExtensionContentsDemo.Publish"]
shared ExtensionContentsDemo.Contents = () =>
    let
        GetFileContents = Extension.Contents("MyTextFile.txt"),
        ConvertToText = Text.FromBinary(GetFileContents)
    in
        ConvertToText;

image

In the let expression here the GetFileContents step returns the contents of the text file as binary data and the ConvertToText step calls Text.FromBinary() to turn the binary data into a text value.

When this function is, in turn, called it returns the text from the text file. Here’s a screenshot of a query run from Power BI Desktop (after the custom data connector has been compiled and loaded into Power BI) that does this:

image
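The query behind that screenshot is nothing more than a call to the function exposed by the connector, something like this (a sketch):

let
	Source = ExtensionContentsDemo.Contents()
in
	Source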

BI Survey 17: Power BI Highlights

Every year, in return for publicising it to my readers, I get sent a copy of the findings of the BI Survey – the largest survey of BI product users in the world. As always they make for very interesting reading indeed, and although I can’t give away all the details I have been allowed to blog about a few of the highlights for Power BI:

  • Power BI is now the third most frequently considered product in competitive evaluations, after QlikView and Tableau.
  • Indeed, based on responses from vendors and resellers, Microsoft is now the third ‘most significant competitor’ after Tableau and Qlik and is in a much stronger position than it was two years ago – clearly as a result of the impact that Power BI has made, although Excel, SSAS and SSRS contribute to this too.
  • Unsurprisingly Power BI’s exceptional price/performance ratio is the main reason that organisations purchase it. Nonetheless it gets very high satisfaction ratings too.
  • Power BI is also now the third most frequently used front-end for SSAS, after SSRS (which is in the #1 spot and quite a way out in front) and Excel (either on its own or with an AddIn).

Overall it’s a very strong showing for Microsoft. If you’re conducting a competitive evaluation of BI tools, or if you’re a BI vendor, it’s probably worth buying a copy of the full findings.