Tuning MDX Calculations That Use The Root() Function

Something that I have vaguely known about for years (for example from Mosha’s post here), but somehow never blogged about, is that using the Root() function in MDX calculations is not great for performance. I’m pretty sure that someone once told me it was intended for defining subcubes for SCOPE statements and not for use inside calculations at all, which may be why it hasn’t been optimised. Anyway, here’s an example of the problem and how to work around it.
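As an aside, here’s a minimal sketch of the kind of use Root() was supposedly designed for: defining the subcube for a SCOPE statement in a cube’s MDX Script, rather than appearing inside a calculation. The assignment here is purely illustrative:

SCOPE(ROOT([Date]), [Measures].[Internet Sales Amount]);
    //Illustrative only: overwrites the measure at the All level
    //of every hierarchy on the Date dimension
    THIS = 0;
END SCOPE;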

Take the following query on the Adventure Works cube:

WITH
MEMBER MEASURES.DEMO AS
([Measures].[Internet Sales Amount],
ROOT([Date]))

SELECT 
{[Measures].[Internet Sales Amount],
MEASURES.DEMO} 
ON 0,
NON EMPTY
[Date].[Calendar].[Calendar Year].MEMBERS
*
[Customer].[Customer].[Customer].MEMBERS
ON 1
FROM
[Adventure Works]

It returns sales for all customers by year, and the calculated measure returns the sales for each customer across all dates using the Root() function.

[image: query results showing sales by year and customer]

On a warm SE cache (which means the time taken by the query depends on how quickly SSAS can evaluate the calculation and the Non Empty filter), this query takes a touch over 7 seconds on my laptop:

[image: Profiler trace showing a duration of just over 7 seconds]

Notice also that the Calculate Non Empty End event tells us that the Non Empty filter alone took 2.8 seconds (see here for some more detail on this event).

Now if you rewrite the query, replacing the Root() function with the All Member on the Calendar hierarchy like so:

WITH
MEMBER MEASURES.DEMO AS
([Measures].[Internet Sales Amount],
[Date].[Calendar].[All Periods])

SELECT 
{[Measures].[Internet Sales Amount],
MEASURES.DEMO} 
ON 0,
NON EMPTY
[Date].[Calendar].[Calendar Year].MEMBERS
*
[Customer].[Customer].[Customer].MEMBERS
ON 1
FROM
[Adventure Works]

The query returns the same results, but in just over 5.5 seconds and with the Non Empty taking about 10% of the time it previously took.

[image: Profiler trace for the rewritten query]

I’m making a big assumption here, though: the Root() function in the first query returns a tuple containing every All Member from every hierarchy on the Date dimension, not just the All Member from the Calendar hierarchy, so while these two queries return the same results the calculations are not equivalent. You can still get a performance improvement by replacing the Root() function with the tuple it returns, although the resulting MDX will look very messy.

First, to find out what the Root() function returns, use a query like this:

WITH
MEMBER MEASURES.ROOTRETURNS as 
TUPLETOSTR(ROOT([Date]))
SELECT {MEASURES.ROOTRETURNS} ON 0
FROM
[Adventure Works]

Run it in SQL Server Management Studio and you can copy/paste the tuple from the query results:

[image: the tuple shown in the SQL Server Management Studio results pane]

Here’s the tuple I get from my (somewhat hacked around) Date dimension:

([Date].[Fiscal].[All Periods],[Date].[Calendar].[All Periods],[Date].[Calendar Weeks].[All Periods],
[Date].[Fiscal Weeks].[All Periods],[Date].[Fiscal Year].[All Periods],[Date].[Date].[All Periods],
[Date].[Calendar Quarter].[All Periods],[Date].[Fiscal Quarter].[All Periods],
[Date].[Calendar Semester].[All Periods],[Date].[Fiscal Semester].[All Periods],
[Date].[Day of Week].[All Periods],[Date].[Day Name].[All Periods],
[Date].[Day of Month].[All Periods],[Date].[Day of Year].[All Periods],
[Date].[Calendar Week].[All Periods],[Date].[Month Name].[All Periods],
[Date].[Calendar Year].[All Periods],[Date].[Fiscal Semester of Year].[All Periods],
[Date].[Calendar Semester of Year].[All Periods],[Date].[Fiscal Quarter of Year].[All Periods],
[Date].[Calendar Quarter of Year].[All Periods],[Date].[Month of Year].[All Periods],
[Date].[Fiscal Week].[All Periods],[Date].[Calendar Week of Year].[All Periods],
[Date].[Fiscal Week of Year].[All Periods],[Date].[Current Date].[All Periods],
[Date].[Is2002].[All Periods],[Date].[Month Day].[All Periods])

Yuck. Anyway, with this gigantic tuple inserted into our calculation like so:

WITH
MEMBER MEASURES.DEMO AS
([Measures].[Internet Sales Amount],
[Date].[Fiscal].[All Periods],[Date].[Calendar].[All Periods],[Date].[Calendar Weeks].[All Periods],
[Date].[Fiscal Weeks].[All Periods],[Date].[Fiscal Year].[All Periods],[Date].[Date].[All Periods],
[Date].[Calendar Quarter].[All Periods],[Date].[Fiscal Quarter].[All Periods],
[Date].[Calendar Semester].[All Periods],[Date].[Fiscal Semester].[All Periods],
[Date].[Day of Week].[All Periods],[Date].[Day Name].[All Periods],
[Date].[Day of Month].[All Periods],[Date].[Day of Year].[All Periods],
[Date].[Calendar Week].[All Periods],[Date].[Month Name].[All Periods],
[Date].[Calendar Year].[All Periods],[Date].[Fiscal Semester of Year].[All Periods],
[Date].[Calendar Semester of Year].[All Periods],[Date].[Fiscal Quarter of Year].[All Periods],
[Date].[Calendar Quarter of Year].[All Periods],[Date].[Month of Year].[All Periods],
[Date].[Fiscal Week].[All Periods],[Date].[Calendar Week of Year].[All Periods],
[Date].[Fiscal Week of Year].[All Periods],[Date].[Current Date].[All Periods],
[Date].[Is2002].[All Periods],[Date].[Month Day].[All Periods])

SELECT 
{[Measures].[Internet Sales Amount],
MEASURES.DEMO} 
ON 0,
NON EMPTY
[Date].[Calendar].[Calendar Year].MEMBERS
*
[Customer].[Customer].[Customer].MEMBERS
ON 1
FROM
[Adventure Works]

The query is a little slower – just over 6 seconds – but still faster than the first query using Root(), and the Non Empty filter is still fast:

[image: Profiler trace for the query using the expanded tuple]

Something else to watch out for with Root(), and another good reason not to use it in calculations, is that it returns an error in certain multiselect scenarios as Richard Lees describes here.

The Power Query Branding Problem

A few years ago I started blogging about Power Query. Back then life was simple: I put “Power Query” in the title of a post and everyone knew what I was writing about, because Power Query was an Excel add-in you could download and install. Now, however, the technology has been renamed “Get & Transform” in Excel 2016 and is a native feature of Excel; the name “Power Query” only applies to the add-in for Excel 2010 and 2013. What’s more, the same technology is used in Power BI’s Query Editor and it’s also now in Azure Analysis Services, Analysis Services 2017 Tabular and the Common Data Service. This is obviously a good thing – I think Power Query is one of the best things to come out of Microsoft in the last decade – but it also presents me with a problem. How can I write about this technology if it doesn’t have a single, official, easily identifiable name?

In more recent times I’ve written posts with unwieldy titles like “Introduction to Insert Topic Name Here in Power Query/Power BI/Excel 2016 Get & Transform”, and in the future I suppose this will have to grow to “Introduction to Insert Topic Name Here in Power Query/Power BI/Excel 2016 Get & Transform/Analysis Services Data Loading/Common Data Service”. Tagging and categorising blog posts can help here, I know, but it’s the title of a blog post that’s the main determining factor as to whether it gets read or not when someone is looking at a list of search results. It’s getting ridiculous, but how else can I ensure that someone searching for the solution to a data loading problem in Excel 2016 Get & Transform will find a post I’ve written that contains the answer, even if that post shows the solution in Power BI?

Inside Microsoft, I understand, the team that builds this technology is known as the Power Query team. I certainly think about this technology as being called Power Query, as do a lot of other people in the community. However, my argument is that I can’t just use the name “Power Query” when I’m writing or speaking about this technology because most of its users – especially those who are new to it and who need the most help – don’t think of it as “Power Query”. They think of it as Excel 2016 Get & Transform, the Query Editor in Power BI Desktop and so on: the specific instances of it.

Maybe I’m making too big a deal of this, but in my opinion this is a problem not just for me but for Microsoft too. We all know how much developers rely on internet searches to find solutions to problems, and not having a single name for this technology makes it much harder to search successfully. This in turn makes it less likely that when a developer runs into a problem they will be able to solve it, which in turn means they are less likely to want to use this technology in future.

What’s the answer? It has to be to make the “Power Query” brand visible somewhere in the UI of all the products that use Power Query technology. I know there’s a risk of confusing users instead of helping them here (am I using Power Query or Power BI?), but it could be as simple as making a few small changes like renaming the “Query Editor” window to be the “Power Query Editor”:

[image: the Query Editor window renamed “Power Query Editor”]

I think that would be enough to let people know that “Power Query” is a technology in its own right and that content referring to “Power Query” is relevant to Excel, Power BI, SSAS and everywhere else that Power Query is used. It would also be nice if, now that M is the official name of the language (and not “Power Query Formula Language”), the Advanced Editor window and the Custom Column dialog let users know that the code they were writing in them was in a language called M and not some mysterious, nameless scripting language.

What do you think? I’m interested to hear your comments and opinions…

UPDATE: victory is ours! See this comment from Faisal Mohamood of the Power Query team below:

“Hey there Chris – what you are saying makes complete sense. Power Query is the name of this capability and we will highlight the name of this capability as such in experiences where you are working with Power Query (and M).”

The Diagnostics.ActivityId() M Function

I’ve blogged a few times about the tracing functionality that is built into Power Query/Get & Transform and Power BI (see here and here). The trace files themselves clearly contain a lot of interesting information, but there’s no official documentation about what they contain and the format seems to have changed several times. I guess they are meant for Microsoft internal use, but that doesn’t stop us from taking a peek at their contents…

Whenever I’ve looked at the contents of a trace file, one problem I have faced is working out which events relate to the query that I’ve just executed. Today, though, I discovered an M function that can help with this: Diagnostics.ActivityId(). It’s not listed in the online M function reference but here’s the documentation from the function itself:

[image: the documentation from the Diagnostics.ActivityId() function]

Calling the function does indeed return “an opaque identifier for the currently-running evaluation”. Using the following code in a query:

Diagnostics.ActivityId()

…returns the following:

[image: the identifier returned by Diagnostics.ActivityId()]

Every time the query is refreshed a different value is returned.

Now consider the following query, which runs a SQL query against a SQL Server database:

let
    Source = Sql.Database("localhost", "Adventure Works DW"),
    Test = Value.NativeQuery(
            Source, 
            "SELECT DISTINCT CalendarYear FROM DimDate")
in
    Test

[image: the query output]

How can you find the trace events that relate to a particular execution of this query? One way is to add a custom column to this query that returns the value returned by Diagnostics.ActivityId():

[image: the custom column returning Diagnostics.ActivityId()]

let
    Source = Sql.Database("localhost", "Adventure Works DW"),
    Test = Value.NativeQuery(
            Source, 
            "SELECT DISTINCT CalendarYear FROM DimDate"),
    #"Added Custom" = Table.AddColumn(
            Test, 
            "ActivityId", 
            each Diagnostics.ActivityId())
in
    #"Added Custom"

Then, after the query has been loaded into the data model, you can copy the ActivityId value from the table in the Data pane:

[image: copying the ActivityId value from the Data pane]

…and then search for the same value in the trace files:

[image: the matching ActivityId in a trace file]

Of course you’re now on your own trying to make sense of what you find in the trace file, but I hope this helps a little bit!

Power BI Custom Data Connector For Language Detection, Key Phrase Extraction And Sentiment Analysis

I’m pleased to announce that I’ve published my first Power BI custom data connector on GitHub here:

https://github.com/cwebbbi/PowerBITextAnalytics

Basically, it acts as a wrapper for the Microsoft Cognitive Services Text Analytics API and makes it extremely easy to do language detection and sentiment analysis, and to extract key phrases from text, when you are loading data into Power BI.

Full documentation for the Text Analytics API can be found here and there is more detailed documentation available for the Detect Language, Key Phrases and Sentiment APIs. You can learn more about Power BI custom data connectors here and here.

Note: you will need to sign up for the Text Analytics API and obtain an access key before you use this custom data connector. You’ll be prompted to enter the access key in Power BI the first time you use the custom data connector. A number of pricing tiers are available, including a free tier that allows for 5000 calls per month. The custom data connector batches requests so that you can send up to 1000 individual pieces of text per call to the API.
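Out of interest, the general batching technique can be illustrated in M with the List.Split() function, which breaks a list into sublists of a given maximum size. This is just a sketch of the idea, not the connector’s actual code:

let
    //2,500 dummy text values standing in for real input
    input = List.Transform({1..2500}, each "text value " & Number.ToText(_)),
    //Split the input into batches of at most 1,000 items, matching
    //the API's limit of 1,000 pieces of text per call
    batches = List.Split(input, 1000),
    //Count the items in each batch; returns {1000, 1000, 500}
    batchSizes = List.Transform(batches, each List.Count(_))
in
    batchSizes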

Why build a custom data connector for this? Well, first of all, text analysis in Power BI and Power Query is something I’ve been interested in for a long time (see here for example), and I know a lot of other people want to do this too. However, calling any API – and the Microsoft Cognitive Services APIs in particular – involves a lot of tricky M code that is beyond most Power BI users. I certainly didn’t find it easy to write this custom data connector! I know Gil Raviv has blogged about how to use the Sentiment analysis API this data connector calls in two posts (here and here) but he doesn’t handle all the limitations of the API, including the 1MB limit per request, in his examples – which just goes to show what a complex task this is. Wrapping up the code for calling the Text Analytics API in a custom data connector hides this complexity from the developer, makes the code a lot more portable, and the fact that the code is open source means the community can work together to fix bugs and add new features. I welcome any contributions that anyone wants to make and I know there are a lot of improvements that can be made. Certainly the documentation is a bit sparse right now and I’ll be adding to it over the next week or so.

This is not quite a traditional custom data connector in the sense that it doesn’t act as a data source in its own right – you have to pass data to it in order to get data back. It exposes three M functions:

  • TextAnalytics.DetectLanguage(inputtext as list, optional numberoflanguages as number) as table
This function takes a list of text values and returns a table containing the input text and the language detected in each piece of text (a sketch showing the optional parameter follows this list).
  • TextAnalytics.KeyPhrases(inputtext as list, optional languages as list) as table
    This function takes a list of text values (and an optional list of language identifiers for each piece of text) and returns a table containing the input text and key phrases detected in each piece of text. More than one key phrase may be returned for each piece of text.
  • TextAnalytics.Sentiment(inputtext as list, optional languages as list) as table
    This function takes a list of text values (and an optional list of language identifiers for each piece of text) and returns a table containing the input text and a score representing the sentiment detected for each piece of text.
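The examples below only use the required first parameter of each function. As a sketch of how the optional parameters work, here’s a call to TextAnalytics.DetectLanguage() with its optional second argument, which (I’m assuming, from the signature above) controls how many candidate languages are returned per piece of text:

let
    input = {"hello all", "bonjour", "guten tag"},
    //Ask for up to two candidate languages per piece of text
    //(assumed behaviour of the optional numberoflanguages parameter)
    result = TextAnalytics.DetectLanguage(input, 2)
in
    result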

Here are a few simple examples of how to use these functions:

First, the TextAnalytics.DetectLanguage() function. This query:

let
    input = {"hello all", "bonjour", "guten tag"},
    result = TextAnalytics.DetectLanguage(input)
in
    result

Returns the following table:

[image: table of detected languages]

For the TextAnalytics.KeyPhrases() function, the following query:

let
    input = 
        {
        "blue is my favourite colour", 
        "what time it is please?", 
        "twinkle, twinkle little star, how I wonder what you are"
        },
    result = TextAnalytics.KeyPhrases(input)
in
    result

Returns this table:

[image: table of key phrases]

And for the TextAnalytics.Sentiment() function, the following query:

let
    input = 
        {
        "this is great", 
        "this is terrible", 
        "this is so-so"
        },
    result = TextAnalytics.Sentiment(input)
in
    result

Returns this table:

[image: table of sentiment scores]

Because the first parameter of each of these functions is a list, it’s super-easy to pass in columns of data from existing tables. For example, here’s the output of a query that gets the last ten comments from the comments RSS feed of this blog:

[image: the last ten comments from this blog’s comments RSS feed]

If this query is called Comments, the following single line of code is all that’s needed to call the TextAnalytics.Sentiment() function for the Comment Text column on this table:

TextAnalytics.Sentiment(Comments[Comment Text])

[image: sentiment scores for the Comment Text column]

You can download a .pbix file containing several examples of how to call these functions, including all the examples above and many more, here.

I hope you enjoy using these functions, and if you have any questions, find any bugs or want to make suggestions for how they can be improved please let me know via the Issues page on GitHub. Finally, this is my first time using GitHub and if I’ve done something really dumb while publishing the code please let me know what I need to do to fix it!

Obscure MDX Month: Deselecting Members In An Excel PivotTable Leads To Missing Rows

Here’s some interesting (and borderline buggy) Excel PivotTable behaviour I learned about today from Charles-Henri Sauget, as well as the workaround for it courtesy of the great Greg Galloway.

Say you have a large dimension attribute hierarchy with 200,000 members on it in SSAS MD (or the equivalent in Tabular or Power Pivot) and you drop it onto the rows of an Excel PivotTable. As you would expect, you get a PivotTable with 200,000 rows in it:

[image: PivotTable with 200,000 rows]

However, if you then deselect just one member on rows, like so:

[image: deselecting a single member in the PivotTable filter]

…you’ll find that the PivotTable does not have 199,999 rows – in Excel 2016 it only has 32,000 rows:

[image: the PivotTable now showing only 32,000 rows]

(different versions of Excel may return different numbers of rows here, but still not the full number).

If you look at the MDX generated by Excel, you’ll see it consists of all of the member unique names that are still selected – unsurprisingly, it’s a gigantic query:

[image: the gigantic MDX query generated by Excel]

However, it turns out you can make Excel do the sensible thing and use the Except() function to return everything apart from the deselected member by going to the Field Settings dialog and selecting “Include new items in manual filter”:

[image: the Field Settings dialog with “Include new items in manual filter” selected]

This then gives you the expected number of rows in the PivotTable:

[image: the PivotTable with the expected number of rows]

I suspect the reason Excel generates the crazy-long MDX statement by default is that it’s the only way to prevent new members being added to the PivotTable if they are added to the attribute hierarchy in the future. On a really large attribute hierarchy, though, the risk is that the resulting MDX query might exceed the maximum length of a query, so Excel has to truncate the number of members returned to make the query shorter. With “Include new items in manual filter” selected it’s fine if new members do get added to the PivotTable in the future, so Excel can safely use the Except() function in the query.
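To make the difference concrete, here’s a simplified sketch of the two styles of row set involved – the hierarchy and member keys are invented, and the queries Excel really generates contain much more than this:

//Default behaviour: every selected member is listed explicitly
SELECT {[Measures].[Internet Sales Amount]} ON 0,
{[Customer].[Customer].&[11000],
[Customer].[Customer].&[11001],
[Customer].[Customer].&[11003]
/* …and so on for every selected member */}
ON 1
FROM [Adventure Works]

//With "Include new items in manual filter": only the exclusion is listed
SELECT {[Measures].[Internet Sales Amount]} ON 0,
EXCEPT(
[Customer].[Customer].[Customer].MEMBERS,
{[Customer].[Customer].&[11002]})
ON 1
FROM [Adventure Works]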