How To Optimise The Performance Of MDX Queries That Return Thousands Of Rows

One problem I encounter on a regular basis is how to optimise the performance of MDX queries that return thousands, hundreds of thousands, or even millions of rows. The advice I give is always the same:

Reduce the number of rows that your query returns!

Yes, there are some things you can change in your queries and cube design to improve performance, but these are the same things I’d suggest for any query (build aggregations, rewrite MDX, partition etc etc). In my opinion, if you have a query that returns a ridiculously large number of rows you are doing something fundamentally wrong.

There are three reasons why SSAS developers write this kind of query:

  1. They are doing a data-dump from SSAS to another system. Mostly the developer doesn’t realise this though, since the other system is Excel-based and the end user has disguised their requirement as a report. In most cases, user education about how to use Excel with SSAS results in an approach that doesn’t require dumping thousands of rows of data to an Excel worksheet. I will admit that I have seen a few cases where developers need to dump data out of SSAS for other purposes, and have no option but to use SSAS because they have to add complex calculations that can only feasibly be implemented in MDX. These are very rare though, and most of the time using SQL queries against the underlying relational database works a lot better.
  2. The end users have specified a report that returns lots of data, because that’s just what they want, dammit! Often this is to recreate a report built in a previous system that, at some point in the 1970s, was printed out into a gigantic book every month. My argument here is that a report should return no more data than can be seen on a screen without scrolling. If you need to scroll in a report, you probably should be giving the end user more parameters to filter that report so they can find the data they want to see more easily instead. Of course it’s one thing to know what you should be doing, it’s another thing entirely to tell the CFO that their requirements are stupid. If you can’t convince your end users that you know better than them, you have my sympathy. Usually I find that having to choose between the poor performance of what they want and the better performance of a different approach helps them come to their senses.
  3. Finally, the way that SSRS handles drilling down in reports often leads report developers to bring back vast amounts of data. The advice to increase the number of parameters for filtering is equally relevant here, but you can also use MDX techniques like this one to implement drill down in a much more efficient way.

At the end of the day, SSAS just isn’t optimised for returning large resultsets – it was designed for PivotTable-style queries, which return relatively small amounts of data. You can get good performance for large resultsets if you know what you’re doing, you have the time, and you’re lucky, but you’ll usually be better off rethinking your requirements or choosing a different tool.

MDX Scoped Assignments Outside The Granularity Of A Measure Group

If you’re an SSAS Multidimensional developer, you’ll know that not every dimension has to have a relationship with every measure group in your cube. You may also know that by setting the Granularity Attribute property of a regular relationship, you can join a dimension to a measure group using an attribute that isn’t the dimension’s key attribute. What happens when you make a scoped assignment to a non-calculated measure outside the granularity of a measure group?

The simple answer is that, unlike what happens when you assign to a non-calculated measure inside the granularity of a measure group, your assigned value does not aggregate up. For example, consider a dimension called Demo Dim with one user hierarchy, where there is just one member on each level:

[Image: the Demo Dim user hierarchy, with a single member on each level]

If you add this dimension to a cube but don’t define any relationships to any measure groups (and don’t change the IgnoreUnrelatedDimensions property of the measure group) you’ll see the value of the All Member on the hierarchy repeated across all of the other members of the hierarchy:

[Image: the value of the All Member repeated across all the other members of the hierarchy]

If you use a scoped assignment to change the value of the member D for a regular, non-calculated measure M1, like so:

SCOPE([Measures].[M1], [Demo Dim].[L4].&[D]);
    THIS = 999;
END SCOPE;

You’ll see that D changes value, but the value isn’t aggregated up:

[Image: member D showing the assigned value of 999, which is not aggregated up to the levels above]

The same thing happens if you make an assignment below the granularity attribute of a dimension. This all makes sense when you think about it, in my opinion, and it means that in this scenario at least non-calculated measures and calculated measures behave in the same way.

One last word of warning: whenever I’ve done this, I’ve found that query performance hasn’t always been as good as I would have liked.

Loading Twitter Archive Data In Power Query

If you’re a Twitter user (I’m @Technitrain if you aren’t following me already) you may be aware that you can download your entire Twitter history – all of the data from all of your tweets – as a series of .js files. All the details on how to do this are here:

https://support.twitter.com/articles/20170160-downloading-your-twitter-archive

Since Power Query can work with JSON data I thought it would be easy to use it to analyse my Twitter archive… but while it’s possible, it’s not entirely straightforward. The problem seems to be that Power Query doesn’t like the first line in each .js file that Twitter gives you. Removing that first line isn’t too difficult but it requires some M code, so here’s a function (I’ve called it GetTwitterArchiveFile) that handles that problem. You give it the binary data from the file and it returns a table containing all the data from that file:

(TwitterFile as binary)=>
let
    //Read data from file and interpret as Lines
    Source = Lines.FromBinary(TwitterFile),
    //Remove the first line
    RemoveFirstRow = List.Skip(Source,1),
    //Convert back to Binary
    ConvertBackToBinary = Lines.ToBinary(RemoveFirstRow),
    //Now convert to JSON
    ConvertToJSON = Json.Document(ConvertBackToBinary),
    //Flatten to a table
    ConvertToTable = Table.FromList(ConvertToJSON, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    //Expand the first set of columns
    ExpandColumns = Table.ExpandRecordColumn(ConvertToTable, "Column1", {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"}, {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"})
in
    ExpandColumns
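
If you want to test the function on its own before combining everything, you can pass it the binary contents of a single .js file. Here’s a rough sketch of how you might do that; the file path and name below are just examples, so check what the files in your own download are actually called:

let
    //Read the raw binary contents of one .js file from the archive
    //NB: this path and file name are only examples
    SingleFile = File.Contents("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets\2015_01.js"),
    //Pass the binary data to the GetTwitterArchiveFile function defined above
    Output = GetTwitterArchiveFile(SingleFile)
in
    Output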

Here’s an example of how to use the above function: it’s another function (called GetFullTwitterArchive) which, when you pass it the path of your tweets folder (this will be wherever you unzipped the download that you get from Twitter), returns the combined contents of all of the .js files in that folder by calling GetTwitterArchiveFile() for each one:

(TweetsFolderPath as text) =>
let
    //Connect to Tweets folder
    Source = Folder.Files(TweetsFolderPath),
    //Remove everything but Content column
    #"Removed Other Columns" = Table.SelectColumns(Source,{"Content"}),
    //Use Custom Column to call GetTwitterArchiveFile for each .js file in the folder
    #"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each GetTwitterArchiveFile([Content])),
    //Remove the Content column
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Content"}),
    //Expand all columns in the Custom column
    #"Expanded Custom" = Table.ExpandTableColumn(#"Removed Columns", "Custom", {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"}, {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"})
in
    #"Expanded Custom"

Invoking this function in a query, for example like this:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets")
in
    Source

Gives you the following output:

[Image: the output table, with one row per tweet and columns including source, entities, geo, id_str, text and created_at]

As you can see, there are plenty of other columns that can themselves be expanded, but this is a good starting point for any analysis.
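
For example, the entities column contains a record for each tweet that (in my archive at least) holds nested lists of hashtags, user mentions and urls. Here’s a rough sketch of how you might expand the hashtags out of it; I’m assuming the field names used in the standard Twitter archive format here, so check them against your own data:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    //Keep just the text of each tweet and its entities record
    #"Removed Other Columns" = Table.SelectColumns(Source, {"text", "entities"}),
    //Expand the hashtags list from inside each entities record
    //NB: the field name "hashtags" is an assumption based on the standard archive format
    #"Expanded entities" = Table.ExpandRecordColumn(#"Removed Other Columns", "entities", {"hashtags"}, {"hashtags"}),
    //A tweet can have zero, one or many hashtags, so expand the list to one row per hashtag
    #"Expanded hashtags" = Table.ExpandListColumn(#"Expanded entities", "hashtags"),
    //Each hashtag is itself a record; its text field contains the hashtag itself
    #"Expanded hashtag records" = Table.ExpandRecordColumn(#"Expanded hashtags", "hashtags", {"text"}, {"hashtag"}),
    //Remove rows for tweets that contain no hashtags at all
    #"Filtered Rows" = Table.SelectRows(#"Expanded hashtag records", each [hashtag] <> null)
in
    #"Filtered Rows"

From there it’s easy to group and count in the same way as the query shown later in this post.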

There’s nothing really ground-breaking in what I’ve done here – it’s a fairly standard example of how you can use Power Query functions to combine data from multiple files, very similar to this example of combining data from multiple Excel files.

There’s absolutely loads of interesting data that you can play with here, but to start with here’s a query that finds the top 10 people I have replied to on Twitter:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"in_reply_to_screen_name"}),
    #"Grouped Rows" = Table.Group(#"Removed Other Columns", {"in_reply_to_screen_name"}, {{"Count", each Table.RowCount(_), type number}}),
    #"Filtered Rows" = Table.SelectRows(#"Grouped Rows", each ([in_reply_to_screen_name] <> null)),
    #"Sorted Rows" = Table.Sort(#"Filtered Rows",{{"Count", Order.Descending}}),
    #"Kept First Rows" = Table.FirstN(#"Sorted Rows",10)
in
    #"Kept First Rows"

Here’s the output as a table and chart:

[Image: the top 10 in_reply_to_screen_name values and their counts, shown as a table]

[Image: the same top 10 shown as a chart]

Looks like Jamie Thomson is the winner by a wide margin!

Here’s an example of a NodeXL graph I built from this data, showing the relationships between users who I have mentioned together in a single tweet:

[Image: a NodeXL graph of the relationships between users mentioned together in a single tweet]

You can download the sample workbook for this post, containing all the functions (but not my data) here.

Exporting All M Code From Power Query In Excel 2013

UPDATE – there’s now a better way to do this using copy/paste. See
https://blog.crossjoin.co.uk/2017/02/25/exporting-your-queries-m-code-from-power-query-and-power-bi-using-copypaste/

Here’s a tip that I found out about on the Power Query TechNet forum that I thought was worth repeating. If you ever need a quick way of exporting all the M code for all of the queries in an Excel 2013 workbook, just open the Power Query window and click the Send a Frown button shown here:

[Image: the Send a Frown button in the Power Query window]

Then, when the Send Feedback dialog appears, make sure Include Formulas is ticked and then click OK:

[Image: the Send Feedback dialog with the Include Formulas option ticked]

When you do that, you’ll get an email created for you that contains a whole lot of debugging information, plus all of the M code for your queries:

[Image: the generated email, containing debugging information plus the M code for all the queries]

Obviously, don’t send this email to Microsoft!

It’s quite easy to see the individual queries in the email. You then need to go to your new workbook, create a new query by selecting the Blank Query option under Other Sources, and then open the Advanced Editor window and paste the code for each query in. However, when you do that you will need to modify the code a bit. There are three pieces of code you will need to delete (the example after this list shows them in context):

  • At the beginning of the M code, where it says
    section Section1;
  • At the beginning of each query, a piece of code that contains the original name of the query:
    shared MyOriginalQueryName =
  • At the very end, a semi-colon
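
To make this clearer, here’s a rough sketch of what the exported code looks like for a workbook containing a single query (the query name and source shown here are made up for illustration); only the let expression itself should survive into the Advanced Editor:

section Section1;

shared MyOriginalQueryName =
    let
        //Only this let expression should be pasted into the Advanced Editor
        //of a new blank query; the source shown here is just an example
        Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content]
    in
        Source;

Delete the section declaration at the top, the shared MyOriginalQueryName = line and the trailing semi-colon, and what remains is valid M that the Advanced Editor will accept.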

Of course in Excel 2016 we’ll have much better options for copying, pasting and moving queries in VBA, but until then we’ll have to live with hacks like this.