Complex Filter Conditions In Power BI Reports Using Visual-Level Filters And Measures

While Power BI has a lot of great functionality for filtering the data that appears in a visual using visual-level filters, there are going to be situations where you need more control over how filtering works. In this blog post I’m going to show you how you can use visual-level filters and measures in combination to achieve this. The example I show here may be quite specific, but I think the general technique has a lot of other interesting applications.

Let’s say you have a table containing customer purchases, with potentially more than one purchase per customer, and want to display a table in Power BI showing only the most recent purchase for each customer relative to a given date.

Using data from the Adventure Works DW database, here’s the Internet Sales table from my Power BI example dataset that contains the customer purchases. Each purchase has a Sales Order Number and a Sales Order Line Number, and there may be multiple line items in a single sales order.

[image: the Internet Sales table]

Here’s some data from this table, filtered down to show the purchases of one customer: Abby Subram.

[image: Abby Subram’s purchases]

[Note: all dates shown are in dd/mm/yyyy format]

The requirement is that the report user should be able to select a date in a slicer and show the details for the most recent sales order that occurs before the selected date. So, for example, if the user selects 1/1/2003 then no rows should appear; if 16/6/2003 is selected then only SO50934 should appear; if 30/9/2003 is selected only the four rows relating to SO54975 (and not SO50934) should appear; if 28/6/2004 is selected only the four rows relating to SO73938 should appear, and so on.

The first step is to create a date table that contains the ‘as of’ dates the user can select from, using the disconnected slicer technique: basically a date table that has no relationship at all to the Internet Sales table shown above.

[image: the ‘As Of Date’ table]
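If you want to build a disconnected date table like this in M, here’s a minimal sketch. The date range and the DateKey column name are my assumptions, chosen to match the measure below – and note that in a real Adventure Works model DateKey may be an integer surrogate key rather than a date:

let
    // assumed date range covering the example purchases
    StartDate = #date(2003, 1, 1),
    EndDate = #date(2004, 12, 31),
    DayCount = Duration.Days(EndDate - StartDate) + 1,
    DateList = List.Dates(StartDate, DayCount, #duration(1, 0, 0, 0)),
    ToTable = 
        Table.FromList(
            DateList, 
            Splitter.SplitByNothing(), 
            {"DateKey"}
            ),
    ChangeType = 
        Table.TransformColumnTypes(
            ToTable,
            {{"DateKey", type date}}
            )
in
    ChangeType

Remember not to create a relationship between this table and Internet Sales – the disconnected slicer technique relies on there being no relationship at all.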

Next comes the tricky part. Visual-level filters can be used to filter the contents of visuals in a Power BI report based on the value of a measure, and that measure does not need to be displayed in a report. What you therefore have to do is create a measure that will return the value 1 for data that should be displayed (in this case, the most recent purchase order for the customer) and blank otherwise, then apply a visual-level filter using this measure and filter on this measure equalling 1. Marco and Alberto have a great article going into the details of how measures behave when they are used in visual-level filters here that I recommend you read before carrying on.

For this particular scenario here’s my measure:

Is Latest SO =
// the date selected in the disconnected 'as of date' slicer
VAR CurrentAsOfDate =
    SELECTEDVALUE ( 'As Of Date'[DateKey] )
// the order date and customer key of the current row in Internet Sales
VAR CurrentSODate =
    SELECTEDVALUE ( 'Internet Sales'[OrderDateKey] )
VAR CurrentCustomer =
    SELECTEDVALUE ( 'Internet Sales'[CustomerKey] )
// any later sales orders for the same customer up to the 'as of' date
VAR SameCustomerSOs =
    FILTER (
        ALL ( 'Internet Sales'[CustomerKey], 'Internet Sales'[OrderDateKey] ),
        'Internet Sales'[CustomerKey] = CurrentCustomer
            && 'Internet Sales'[OrderDateKey] <= CurrentAsOfDate
            && 'Internet Sales'[OrderDateKey] > CurrentSODate
    )
// 1 if no later sales order exists, blank otherwise
VAR NoLaterSOs =
    IF ( COUNTROWS ( SameCustomerSOs ) > 0, BLANK (), 1 )
RETURN
    IF ( CurrentSODate <= CurrentAsOfDate, NoLaterSOs )

The logic here is:

  • Assume that this measure will be used to filter a table visual in a Power BI report, where each row in the table visual displays data from a single row in the Internet Sales table in the dataset.
  • For each row in the Internet Sales table, find the current values for OrderDateKey and CustomerKey, as well as the date selected in the ‘as of date’ slicer, and store them in variables.
  • Create a table of all possible combinations of CustomerKey and OrderDateKey ignoring the current filter context, then filter it so you get all the sales orders for the current customer where the order date is greater than the date of the current sales order and less than or equal to the selected ‘as of’ date. This is stored in the SameCustomerSOs variable.
  • If there are rows in the SameCustomerSOs table then the current row in Internet Sales should not be displayed, because it is not the most recent sales order as of the selected date; if SameCustomerSOs is empty then the current row should be displayed, because there are no later sales orders for the current customer before the selected ‘as of’ date.
  • Finally, return 1 if SameCustomerSOs is empty and the current row in Internet Sales has an order date on or before the selected ‘as of’ date.

You can then create a table (or some other visual) in your Power BI report and, in the visual-level filters for that table, drag in the measure and apply a filter on the value of the measure equalling 1:

[image: visual-level filter on the Is Latest SO measure]

Here’s the result, showing the data for Abby Subram:

 

[image: MostRecentPurchase]

The one thing you do need to be careful of when using this technique is that the DAX you use in your measure must be as efficient as possible: inefficient DAX, complex filters and large data volumes will make your report unusably slow. I’m sure the DAX used in my measure above could be tuned to perform better (it currently takes around half a second to filter the full 60,000 rows in the Internet Sales table; Power BI only queries for and displays 500 rows at a time, which makes things faster), but the real point I wanted to make here is that using measures and visual-level filters in this way allows you to apply almost any complex filter condition you want to the data displayed in a visual.

You can download the sample pbix file for this post here.

Implementing Basic Query Folding On A Web Service In Power Query/M And Power BI

The more advanced Power Query/M developers among you will know about query folding, the way that the Power Query engine pushes as much of the heavy-lifting of a query as possible back to the data source. You may also know that it’s possible to implement query folding yourself inside a custom data connector, and there’s a very detailed (but perhaps a little intimidating) tutorial on how to do it here. I’ve been doing a lot of custom data extension development recently and have been learning how to implement query folding myself, so I thought another, simple, demonstration of how it works with web services might be useful for those of you in the same situation.

For this example I’m going to use the web service I used in my blog post on Web.Contents() and the Query and RelativePath options, a blog post you should reread before you carry on here because it provides some useful background. It’s the metadata API for the UK government’s open data website and allows you to search for open data datasets. It doesn’t require any authentication so you’ll be able to run all the queries in this post yourself. Note that all the example M code in this post works in Power BI Desktop and does not need to be used in a custom data connector.

Consider the following M function, which I’m going to call SearchData:

(query as text, optional rowlimit as number) as table =>
let
    // the search term, passed in the q parameter
    QueryRecord = 
        [q = query],
    // only add the rows parameter if a row limit was supplied
    AddRowLimit = 
        if 
            rowlimit = null 
        then 
            QueryRecord 
        else 
            Record.AddField(
                QueryRecord,
                "rows",
                Text.From(rowlimit)
                ),
    CallWebService = 
        Web.Contents(
            "https://data.gov.uk/api", 
            [
                RelativePath = "3/action/package_search", 
                Query = AddRowLimit
            ]
        ),
    Source = 
        Json.Document(CallWebService)[result][results],
    ToTable = 
        Table.FromList(
            Source, 
            Splitter.SplitByNothing(), 
            null, 
            null, 
            ExtraValues.Error
            ),
    // keep just the title of each dataset returned
    Expand = 
        Table.ExpandRecordColumn(
            ToTable, 
            "Column1", 
            {"title"},
            {"title"}
            ),
    ChangeType = 
        Table.TransformColumnTypes(
            Expand,
            {{"title", type text}}
            )
in
    ChangeType

 

The function takes two parameters:

  • query, the search term to use
  • rowlimit, an optional parameter that limits the number of rows returned by the search

It then calls the API, gets the search results in JSON format and converts it to a table with just one column containing the title of the datasets returned.

Calling the function with the search term “data” and no row limit, like so:

SearchData("data")

…returns a table of ten rows, the default maximum number of rows returned:

[image: search results]

[Note to self: I really need to check out the “Treasure data” dataset]

Using Fiddler to inspect the calls made by the Power Query engine to the web service (I describe how to do this here) when the query is loaded into a Power BI dataset reveals the following:

[image: Fiddler trace]

Everything is pretty much as you would expect: every time this query is used, no matter how it is used, the same request is made to get the data.

Query folding is implemented using the Table.View() M function, and here’s an example of how it can be used with the table above (let’s say this new query is called SearchForDataWithFolding):

Table.View(
        null, 
        [
        GetType = () => 
                type table [title = Text.Type],
        GetRows = () => 
                SearchData("Data"),
        OnTake = (count as number) => 
                SearchData("Data", count)
        ]
        )

The second parameter of Table.View() in this example is a record containing three handler fields:

  • GetType, which is called when the Power Query engine needs to know about the data types of the columns of the table returned by this expression. In this case it’s a table with one text column.
  • GetRows, which is called when the Power Query engine wants all the rows from the table (for example when it’s loading data into the dataset).
  • OnTake, which is called when the Power Query engine only wants the top n rows from the table; in this case it provides the top n through the count parameter, and I’m passing that back to my SearchData function via the rowlimit parameter.

Other handlers can be implemented too, but for this web service it only really makes sense to implement OnTake because that’s the only operation that can be folded back.
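Purely as an illustration of what implementing another handler involves, here’s a hedged sketch that adds a GetRowCount handler, so that Table.RowCount() could also fold. It assumes the service reports the total number of matches in the [result][count] field of its JSON response, which is how CKAN-based APIs like this one behave:

Table.View(
        null, 
        [
        GetType = () => 
                type table [title = Text.Type],
        GetRows = () => 
                SearchData("Data"),
        OnTake = (count as number) => 
                SearchData("Data", count),
        // assumption: the total match count is reported in [result][count],
        // so we can ask the service for zero rows and read just the count
        GetRowCount = () as number => 
                Json.Document(
                        Web.Contents(
                                "https://data.gov.uk/api",
                                [
                                        RelativePath = "3/action/package_search",
                                        Query = [q = "Data", rows = "0"]
                                ]
                        )
                )[result][count]
        ]
        )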

Here’s what SearchForDataWithFolding returns in the Query Editor:

[image: query output in the Query Editor]

The first thing to point out is that, in the Query Editor, it returns more than ten rows – it returns one thousand rows. Fiddler confirms this:

[image: Fiddler trace showing 1000 rows requested]

As the official documentation states at the bottom of this page:

The Power Query experience will always perform an OnTake of 1000 rows when displaying previews in the navigator and query editor, so your users might see significant performance improvements when working with larger data sets.

Similarly, using the Table.FirstN() function on the rows of this table, as follows:

Table.FirstN(SearchForDataWithFolding,3)

…shows a row limit of three being passed back to the web service:

[image: Fiddler trace showing rows=3]

There’s something else interesting to note when the query is loaded into the dataset. Fiddler now shows two calls to the web service:

[image: Fiddler trace showing two web service calls]

Two calls to the web service are being made: the first asks for zero rows, the second asks for all the data with no row limit. In this case the following change to SearchForDataWithFolding stops the first call happening and results in only one call to the web service:

Table.View(
        null, 
        [
        GetType = () => 
                type table [title = Text.Type],
        GetRows = () => 
                SearchData("Data"),
        OnTake = (count as number) => 
                if count=0 
                then 
                #table(type table [title = Text.Type], {}) 
                else 
                SearchData("Data", count)
        ]
        )

But why is the Power Query engine making this call? Why didn’t it make it for the other query? Did it make the same call twice for the other query, caching the result of the first call and reusing it? Is it trying to find out what columns this query returns? Hmm, a subject for future research I think.

You can download the sample pbix file for this post here.

Using Function.ScalarVector() To Optimise Calls To M Functions

One of the most common issues faced when calling web services in M is that the easiest way of doing so – creating a function that calls the web service, then calling the function once per row in a table using the Invoke Custom Function button – is very inefficient. It’s a much better idea to batch up calls to a web service, if the web service supports this, but doing so forces you to write more complex M code. It’s a problem I wrestled with last year in my custom connector for the Cognitive Services API, and in that case I opted to create functions that can be passed lists instead (see here for more information on how functions with parameters of type list work); I’m sure the developers working on the new AI features in dataflows had to deal with the same problem. This problem is not limited to web services either: calculations such as running totals also need to be optimised in the same way if they are to perform well in M. The good news is that there is a new M function, Function.ScalarVector(), that allows you to create functions that combine the ease-of-use of row-by-row requests with the efficiency of batch requests.

I’m going to try to keep things as simple as possible while showing how it works. Consider the following M function, called SingleValueUpper:

(InputText as text) => 
    Text.Upper(InputText)

It takes a single text value and returns the same text converted to upper case. Now consider the following query called Product that returns a table of fruit names in lower case:

#table(
    type table [Fruit=text],
    {{"apples"},{"oranges"},{"pears"}}
    )

[image: the Product table]

The function can be called on this table in a third query with the following expression (which is basically what the Invoke Custom Function button produces):

Table.AddColumn(
    Product, 
    "Fruit Upper", 
    each SingleValueUpper([Fruit])
    )

The output of this is:

[image: the table with the Fruit Upper column]

In this example the function is called three times – once for each row – and as I said if you were calling a web service in this way, for example, it would be very inefficient.
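To make the comparison concrete, here’s a minimal sketch of what a per-row web service version of SingleValueUpper might look like. The endpoint URL and the response shape are entirely hypothetical, purely to illustrate the one-request-per-row pattern:

(InputText as text) =>
    let
        // hypothetical endpoint: one HTTP request per input value
        CallWebService = 
            Web.Contents(
                "https://example.com/api/upper",
                [Query = [text = InputText]]
                ),
        // assumes the service returns JSON of the form {"result": "..."}
        Result = Json.Document(CallWebService)[result]
    in
        Result

Call a function like this with Invoke Custom Function on a table with thousands of rows and you get thousands of HTTP requests.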

Now consider the following M function, called ListUpper:

(InputText as list) => 
    List.Transform(
        InputText, 
        each Text.Upper(_)
        )

This function takes a list of text values, converts each of the text values to upper case, and returns a list of the results. Calling the function with the contents of the Fruit column of the table returned by the Product query, like so:

ListUpper(Product[Fruit])

…returns the following list:

[image: list of upper case values]

If you were calling a web service like this – perhaps by making a single POST request with all of these values in the body – it would be much more efficient.
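Again as a hedged sketch – the endpoint and the request/response shapes are hypothetical – a batched version taking a list might look like this:

(InputText as list) =>
    let
        // hypothetical: send all the values in the body of one POST request
        Payload = Json.FromValue([values = InputText]),
        CallWebService = 
            Web.Contents(
                "https://example.com/api/upper",
                [
                    Content = Payload,
                    Headers = [#"Content-Type" = "application/json"]
                ]
                ),
        // assumes the service returns JSON of the form {"results": [...]}
        Results = Json.Document(CallWebService)[results]
    in
        Results

Passing the Content option makes Web.Contents() issue a POST request, so all the values travel to the service in a single round trip.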

Function.ScalarVector allows you to call a function like this through the Invoke Custom Function button. The following function, called ScalarVectorUpper in my example, shows how:

Function.ScalarVector(
    type function(Fruit as text) as text,
    (InputTable) =>
        let
            BufferTable = Table.Buffer(InputTable),
            InputList = BufferTable[Fruit],
            CallFunction = ListUpper(InputList)            
        in
            CallFunction
)

[image: the ScalarVectorUpper function]

Function.ScalarVector is given two arguments:

  • A function type whose parameters match the columns from the input table that we’re interested in using. In this case the input table is the Product table and it has one text column called Fruit; we want to pass the contents of the Fruit column to this function.
  • A function that is passed a table (in this case called InputTable) that has the same columns as those listed in the function type. The documentation says this function “must not enumerate its input table more than once” so it uses Table.Buffer() to make sure this is the case; it then takes the contents of the Fruit column from this table and passes it to the ListUpper function defined above, and returns a list of upper case text values.

Basically, it creates a function that looks like it is being called on a row-by-row basis but which is actually called once for the whole table.

The result of calling this function on the table returned by the Product query:

Table.AddColumn(
    Product, 
    "Fruit Upper", 
    each ScalarVectorUpper([Fruit])
    )

…is the same as before, but potentially a lot more efficient:

[image: the table with the Fruit Upper column]

This may seem obscure, but personally I think this is incredibly useful and important. Anyone writing a custom connector in the future is likely to want to use it (although Function.ScalarVector() isn’t available in the current release of the Power Query SDK, I assume it will appear soon), and remember also that custom connectors can be used to create M function libraries – I believe this scenario will be officially supported too.

You can download the example pbix file for this post here.

20 Years Of Analysis Services

Today marks the 20th birthday of Analysis Services: it was released (as OLAP Services) on November 16th 1998. There’s a celebratory blog post and video over on the Power BI blog here:

https://powerbi.microsoft.com/en-us/blog/analysis-services-is-20-years-old/

I’m one of the interviewees on the video, and in it I tell the story of my involvement with Analysis Services and MDX – I’ve been working with it almost full-time for a little over 20 years, right from the first betas. I’ve enjoyed every minute of it, and I’d like to take the opportunity here to thank all the people who have helped me over the years at IMS Health, Microsoft Consulting Switzerland, in the SSAS and Power BI community, and in my career as an independent consultant and trainer. If you had told me in 1998 that I would still be making a living with this product (even still writing some MDX) I’m not sure I would have believed you.

Finally, if your Bingling skills have failed you, here’s the OLAP Jokes post that is mentioned in the birthday video:

https://blog.crossjoin.co.uk/2005/08/25/olap-jokes/

It was for many years the most popular post on my blog. I should point out that I didn’t write all those jokes: my colleagues at the time, Jon Axon and Colin Hardie, deserve some of the blame too.

Nested Display Folders In Power BI

The ability to add columns and measures to display folders in the new Modelling view is one of my favourite features of the November 2018 release of Power BI Desktop: it makes complex models a lot more user-friendly. Being an old-school SSAS developer, I immediately wondered if they worked in the same way that they do in SSAS – and the answer is yes and no.

The good news is that display folders can be nested. Say you have a table called MyTable with three columns called A, B and C. This is what it looks like in the Fields pane in Power BI Desktop by default:

[image: default Fields pane]

If you select column A in the Fields pane and enter a name (in this case “My Folder”) in the Display folder property in the new modelling view, you’ll see column A is now displayed in a folder in the Fields pane:

[image: column A inside My Folder]

That’s obvious. It is also possible to create nested display folders though, and you do this by entering multiple folder names in the Display folder property separated by a backslash \ character. For example, entering “My Folder\My Subfolder” in the Display folder property for column A like so:

[image: the Display folder property]

…makes column A appear in a folder called My Subfolder which is itself in a folder called My Folder.

To give another example, if column B is given a Display folder property of “My Folder\My Other Subfolder” the result is this:

[image: nested display folders]

Note that while the screenshots above are from the new Modelling view, in the Fields pane in other places such as the Report view the folders are displayed in alphabetical order:

[image: folders sorted alphabetically in the Fields pane]

You may need to use numbers in your folder names to ensure that everything gets sorted the way you want:

[image: numbered folder names]

[I’ve just noticed that in the screenshot above column C looks like it’s in the same folder as column B, which it isn’t – that needs fixing and I’ll report it as a bug]

Unfortunately it does not seem to be possible to make a column or measure appear in multiple folders at once (which was possible in SSAS by using a semi-colon delimited list of folder names), but I don’t think many people would want to do that anyway.

Using OpenApi.Document() To Create A Power BI Custom Connector For The Power BI REST API

The idea of creating a Power BI custom connector for the Power BI REST API is not a new one: Miguel Escobar wrote one earlier this year (see here for the source code and documentation). However, the Power BI REST API has a lot of endpoints, so writing code to support them all would be extremely time-consuming. There is another option: using the new OpenApi.Document() M function – only available in custom connectors, not in Power BI Desktop or Excel – to read the API definition from an Open API (aka Swagger) definition file.

Full documentation and samples for using OpenApi.Document are available here, and an Open API definition file for the Power BI REST API is available here. The Power BI REST API uses Azure Active Directory authentication and handling AAD authentication in custom connectors is well documented here; Miguel’s sample shows how to adapt this code for Power BI.

After that, just download the Power BI REST API definition, save it as a .json file, add it to your Visual Studio custom connector project and set the Build Action property of the file to Compile:

[image: Build Action property set to Compile]

You can then use the Extension.Contents() M function to read the contents of the file in code and pass that to OpenApi.Document(). The last problem to solve is that OpenApi.Document() only supports basic and anonymous authentication by default, so support for OAuth2 has to be handled using the ManualCredentials option. Here’s the M function definition from my demo project showing how to do this:

[DataSource.Kind="OAuth2Demo", Publish="OAuth2Demo.Publish"]
shared OAuth2.OpenAPIDemo = () =>
    let
        // get the AAD access token for the current credentials
        OAuthCredential = Extension.CurrentCredential(),
        OAuthToken = OAuthCredential[access_token],
        SecurityHeaders = [ Authorization = "Bearer " & OAuthToken ],
        // read the Open API definition file embedded in the project
        PBISwagger = Extension.Contents("PBISwagger.json"),
        CallWebService = 
            OpenApi.Document(
                PBISwagger,
                [ Headers = SecurityHeaders, ManualCredentials = true ]
            )
    in
        CallWebService;

Once you build your custom connector, you see all of the endpoints (or at least those that support GET and POST requests) exposed as functions in the Navigation table:

[image: functions in the Navigation table]

It looks super-easy to build a custom connector this way, but is it actually a good idea? I can see a few pros and cons:

  • As you’ll find if you try this, not everything ‘just works’ – there are a few functions that return errors.
  • In this case I embedded the Open API definition file in the project but it’s also possible to download it dynamically using Web.Contents(), as sketched after this list. This means that whenever the published Open API definition file is updated the custom connector also updates automatically. Are you sure the published Open API definition file will get updated when the API changes, though? What if it doesn’t, or there is an error in it?
  • Connecting to an API is all very well but it’s very important that a custom connector delivers data in a format that is modelled appropriately for Power BI, and very few APIs do this on their own. Pretty much every custom connector I have built has a lot of code in it to transform the data output by the API into something like a star schema, and the need to do this cancels out all the magic automatic stuff that OpenApi.Document() does for you.
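For what it’s worth, here’s a hedged sketch of the dynamic-download variation of the function shown earlier; the URL is a placeholder, since you’d need to check where the published definition file actually lives:

[DataSource.Kind="OAuth2Demo", Publish="OAuth2Demo.Publish"]
shared OAuth2.OpenAPIDemo = () =>
    let
        OAuthCredential = Extension.CurrentCredential(),
        OAuthToken = OAuthCredential[access_token],
        SecurityHeaders = [ Authorization = "Bearer " & OAuthToken ],
        // placeholder URL - point this at the published definition file
        PBISwagger = Web.Contents("https://example.com/powerbi-swagger.json"),
        CallWebService = 
            OpenApi.Document(
                PBISwagger,
                [ Headers = SecurityHeaders, ManualCredentials = true ]
            )
    in
        CallWebService;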

Azure Data Studio Should Support Analysis Services And Power BI Premium Capacities

I’m at the PASS Summit this week, and in this morning’s keynote there was a demo of the newly-released Azure Data Studio – a modern, cross-platform tool for managing and querying SQL Server, Azure SQL Database and other Azure data services (it’s carefully described as “complementary to” SQL Server Management Studio rather than a replacement for it; this blog post has a detailed discussion of this question).

This video provides a good, short overview of what it is:

I think it’s pretty cool, BUT… it doesn’t support Analysis Services. I had a moan about this and the generally poor state of Analysis Services tooling on Twitter, was invited to meet some of the developers and was told that if enough people request Analysis Services support it might happen.

What would support for Analysis Services involve? The following springs to mind:

  • I’d like to be able to connect to and manage Analysis Services Multidimensional and Tabular on-premises and Azure Analysis Services; if that’s too ambitious I could settle for supporting only Analysis Services Tabular 2016+ and Azure Analysis Services.
  • Since we will soon be able to connect to a Power BI Premium capacity as if it was an Analysis Services instance via XMLA endpoints, I would want to be able to connect to Power BI Premium capacity too.
  • I’d want to be able to run DAX and M queries, and ideally MDX queries too.
  • I would also want to be able to work with ASSL and TMSL for scripting and editing objects.
  • Azure Data Studio has a Profiler extension that works on xEvents; it would be great if that worked with Analysis Services xEvents too.
  • DAX and M Jupyter notebooks would be really useful!
  • It would make sense for some of the functionality of existing tools like DAX Studio and BISM Normalizer to be turned into extensions.

If you want to see Analysis Services support in Azure Data Studio, go to the following issue on the Azure Data Studio GitHub repository:

https://github.com/Microsoft/azuredatastudio/issues/1026

…and click the thumbs-up icon on the first post:

[image: AzureDataStudio]

Let’s make our voices heard!

 

 
