Performance Problems With MDX Calculated Measures That Return Constants In Analysis Services Multidimensional

Recently I was working on an MDX calculation that calculated a value at a low granularity and aggregated the result – basically what I described in my blog post here. Here’s a simplified version of the calculation written on a cube built from Adventure Works data running on my local SSAS 2017 MD instance:

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*0.08;
    END SCOPE;
END SCOPE;

All it does is take the value of the Sales Amount measure at the lowest granularities of the Customer, Date and Product dimensions, multiply it by 0.08 to find a tax value, and because [Tax Amount] is a real, non-calculated measure, the result of the calculation aggregates up through the cube. [I know that I don’t have to aggregate the result of this specific calculation but remember that this is a simplified example – in the real case I did have to write the calculation using Scope statements – and anyway the best way of handling a basic multiplication like this would be with a measure expression]

The performance was sub-second for my test query and I was happy, but then I realised that the same tax rate was being used in other calculations and may change in the future, so I thought I would store the value 0.08 in a calculated measure:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
0.08;

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

Surely a simple change? But no, as soon as I did this my query ran for several minutes and memory usage went through the roof until the query was automatically cancelled:

image

Clearly the SSAS MD Formula Engine could optimise the version with the hard-coded constant value but could not optimise the version with the calculated measure. There was nothing in Profiler to indicate the calculation was being evaluated in cell-by-cell mode though.

So I tried another variation:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
IIF([Measures].[Sales Amount]=0, NULL, 0.08);

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

This time the memory usage was completely flat but the query was still so slow it had to be cancelled. Next, I thought I’d try setting the NON_EMPTY_BEHAVIOR property:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
0.08
, NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount];

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

And guess what? The query went back to being sub-second. Strange. Now the NON_EMPTY_BEHAVIOR property is very dangerous and this is a clear example of how it should not be set – the expression 0.08 is never going to return an empty value, regardless of the value of the [Sales Amount] measure. As a result I would be very wary of using this trick in production in case it ended up returning inconsistent results. It’s also worth noting that the following calculation, which is a correct use of NON_EMPTY_BEHAVIOR, is as slow as the other examples above:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
IIF([Measures].[Sales Amount]=0, NULL, 0.08)
, NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount];

Finally I created a dummy table in my DSV with one row and one column to hold the 0.08 value, and then created a real, non-calculated measure from the column. When I used this measure in my calculation, the performance of my test query was sub-second again.
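
For reference, the final version of the calculation simply swapped the calculated measure for this new real measure. Here’s a sketch of what it looked like – the measure name [Tax Rate From Table] is purely illustrative, not the name I actually used:

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        //[Tax Rate From Table] is the real, non-calculated measure
        //built from the column on the single-row dummy table
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate From Table];
    END SCOPE;
END SCOPE;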

So, to sum up, it looks like hard-coding constant values in calculated measures is a bad idea, at least in cases like this, and either using the values themselves in your MDX calculations or creating a table and non-calculated measure specifically to hold the value is better for performance.

A Quick Look At Some Power BI And SSAS-Related Products And Books

I don’t like writing reviews of books or products here on my blog for a couple of reasons, the main one being that I don’t usually have the time to read/test/understand something properly so I can write a thorough review. That said, I do get sent a lot of free books and evaluation licences for products that deserve a wider audience, so I thought I would write a post rounding up some of them along with a few thoughts of my own.

Custom Visuals

One of the most interesting questions related to Power BI is whether third-party software companies will be able to build businesses selling extensions to it. The most obvious way that Power BI can be extended is through custom visuals and there are several companies that have paid-for (as opposed to free) custom visuals. Zebra BI is one such company and I’ve been really impressed by what they have produced for visualising financial data:

image

I also saw recently that OKViz (part of the Marco and Alberto/SQLBI family) now have a paid-for version of their excellent Smart Filter visual with some premium features – see here for more details; similarly new features in Klaus Birringer’s Ultimate Waterfall and Ultimate Decomposition Tree visuals are only available in the paid version.

I know many Power BI users who use custom visuals have suffered from various bugs and limitations in functionality over the past few years, and I think buying commercial custom visuals rather than relying on free equivalents is one way of dealing with reliability and support problems. It’s certainly in Microsoft’s interests to have a thriving partner community in this space given that flashy visuals are a major selling point of the product. But will Power BI users want to pay for visuals when so much is available out of the box for free, especially when the cost of the visuals seems relatively high when compared to the overall cost of Power BI? I guess we’ll see.

Custom Connectors

A lot of what I’ve just said about custom visuals also applies to custom connectors, although custom connectors are a lot less mature (at the time of writing, support for custom connectors in the on-premises gateway is still in preview). However I was pleased to see this announcement from CData software that they now have over 100 custom connectors available for Power BI. It looks like what they have done is wrapped their existing ODBC providers, and as a result some of their connectors are for sources that are already available in Power BI, but even so there are a lot of new data sources here.

Incidentally, I got very, very excited when I realised that the CData connectors for Excel and Excel Online supported DirectQuery mode as well as import mode. Why, I hear you ask? Well, just think about a planning/budgeting solution where users can enter data into an Excel spreadsheet and when the numbers change in Excel, the numbers change in Power BI too; think also how this could work with Composite Models. I tried this with CData’s Excel on-premises connector and unfortunately it returned errors when the source Excel worksheet was open; I did get it to work with the Excel Online connector but it was painfully slow, even with a small amount of data. If I can get it to work better (and I may be missing some optimisations within the connector) I’ll blog about it.

Books

One of the few Power BI-related books that have been published recently is Phil Seamark’s “Beginning DAX with Power BI”. He was kind enough to send me a review copy; it’s a good introduction to the subject and I particularly like the way he introduces DAX variables early on. Definitely worth a look if you’re just starting to learn DAX.

I was also sent a copy of a slightly older book, David Parker’s “Mastering Data Visualization with Visio 2016”. David knows pretty much all there is to know about using Visio for BI (his blog is great) and while this book doesn’t cover the most exciting new development in this area – the Visio custom visual for Power BI – if you want to learn all the advanced features of Visio that you could take advantage of in Power BI then this is the book to get.

Other Products

I’ve been a big fan of SentryOne’s SSAS monitoring tool, BI Sentry, for years now but up until recently it only supported SSAS Multidimensional. It now supports SSAS Tabular too (details here), and it looks like SentryOne have done a great job of adapting it to the specific needs of the Tabular engine. I always advise my SSAS customers to invest in some kind of monitoring solution because it makes the job of detecting and solving issues like poor query performance so much easier, and to be honest BI Sentry is better than anything you would be able to build yourself.

Moving onto Power BI, if you need to generate documentation for your Power BI datasets and reports check out Power BI Documenter; the August release looks like it has some cool new features. Alternatively the latest release of Power BI Helper also allows you to generate documentation as well as lots of other useful stuff.

Something that hasn’t been properly released yet, but will be incredibly useful when it is, is MAQ Software’s Application Lifecycle Management Toolkit for Power BI. Closely related to BISM Normalizer, it will allow you to compare two Power BI datasets, merge changes, deploy only parts of a dataset (for example individual measures), and deploy to multiple datasets – all of which are things Power BI developers have been crying out for.

Last of all, the guys at DevScope also have a new(ish) product out, Power BI Robots, which automatically takes screenshots of Power BI reports and dashboards and can deliver them to various destinations such as email addresses and SharePoint. I haven’t looked at it yet but it seems like it could have a lot of interesting uses.

Analysis Services Import Performance Improvements In The August 2018 Release Of Power BI

While I was playing around with the new release (August 2018) of Power BI Desktop I noticed there was an undocumented change: similar to the OData improvements I blogged about here, there is a new option in the AnalysisServices.Database() and AnalysisServices.Databases() M functions that turns on a newer version of the MDX generation layer used by the Power Query engine. Like the OData improvements it is an option called Implementation=”2.0”, used like this:

AnalysisServices.Databases(
	"localhost", 
	[
		TypedMeasureColumns=true, 
		Implementation="2.0"
	]
)

…and also, as with the OData improvements, you will need to manually edit any existing M queries to take advantage of this.
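
For example, an existing query whose Source step looked like AnalysisServices.Databases("localhost", [TypedMeasureColumns=true]) would need to be changed to something like the following (a minimal sketch – your own query will of course have more steps after Source):

let
    // the only change needed is adding Implementation="2.0" to the options record
    Source = AnalysisServices.Databases(
        "localhost", 
        [
            TypedMeasureColumns=true, 
            Implementation="2.0"
        ]
    )
in
    Source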

In fact I first heard about this option in a comment on this blog post back in January, but as the dev team told me it hadn’t been tested properly I didn’t blog about it at the time. However, as it is now set by default in the M code generated by the Power Query Editor, I guess it’s ready for production use. I’m told it should improve the performance of M queries that import data from Analysis Services – and I would be very interested to hear from anyone who tests this about how much of an improvement they see.

I’ve done a little bit of testing myself and I can see there are indeed some differences in the MDX generated. For example, I created an M query that connected to the Adventure Works DW database and returned all combinations of customer and day name where the Internet Sales Amount measure is greater than 100. In the old version of the MDX generation layer (ie without Implementation=”2.0” set) the following MDX query is generated:

SELECT
    {
        [Measures].[Internet Sales Amount]
    }ON 0,
    SUBSET(
        NONEMPTY(
            CROSSJOIN(
                [Date].[Day Name].[Day Name].ALLMEMBERS,
                [Customer].[Customer].[Customer].ALLMEMBERS
            ),
            {
                [Measures].[Internet Sales Amount]
            }
        ),
        4096
    )
    PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON 1
FROM(
    SELECT
        FILTER(
            CROSSJOIN(
                [Date].[Day Name].[Day Name].ALLMEMBERS,
                [Customer].[Customer].[Customer].ALLMEMBERS
            ),
            (
                NOT((
                    ISEMPTY(
                        [Measures].[Internet Sales Amount]
                    )
                OR
                    ISEMPTY(
                        100
                    )
                ))
            AND
                ([Measures].[Internet Sales Amount] > 100)
            )
        )ON 0
    FROM [Adventure Works]
)CELL PROPERTIES VALUE

 

However with the Implementation=”2.0” option set, the following query is generated:

SELECT
    {
        [Measures].[Internet Sales Amount]
    }ON 0,
    SUBSET(
        NONEMPTY(
            FILTER(
                CROSSJOIN(
                    [Date].[Day Name].[Day Name].ALLMEMBERS,
                    [Customer].[Customer].[Customer].ALLMEMBERS
                ),
                (
                    NOT(ISEMPTY(
                        [Measures].[Internet Sales Amount]
                    ))
                AND
                    ([Measures].[Internet Sales Amount] > 100)
                )
            ),
            {
                [Measures].[Internet Sales Amount]
            }
        ),
        4096
    )
    PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON 1
FROM [Adventure Works] CELL PROPERTIES VALUE

The difference between the two is that the first query uses a subselect to do the filtering whereas the second does not; subselects in MDX are not necessarily bad, but I generally try to avoid using them unless I need to. There may be other differences in the way the MDX is generated in the new version but I haven’t had a chance to do any detailed testing.

How The New IsAvailableInMDX Property For Analysis Services Tabular Can Reduce Memory Usage And Speed Up Processing

Following on from my post the other week about the new RowsetSerializationLimit server property, I thought it would be a good idea to write about why the new IsAvailableInMDX property (announced in the same blog post) is so important. In fact, I would say that everyone using Analysis Services Tabular 2017 (CU7 or higher) or Azure Analysis Services should spend some time investigating it because the potential benefits in terms of reduced memory usage and faster processing times are significant, especially for larger models.

First of all, what does it actually do? As the blog post says, it allows you to stop attribute hierarchies from being built on columns when you don’t need them. But what are attribute hierarchies? They are structures that are used only when you are querying your Tabular model using MDX; Excel PivotTables, for example, generate MDX queries when they are connected to Analysis Services Tabular whereas Power BI always generates DAX queries. An attribute hierarchy allows a column on a table to be used on the rows or columns axis of an MDX query, and in Excel that means you will be able to drag that field onto the rows or columns area of a PivotTable. Attribute hierarchies are used by some DAX functionality too – for example the TreatAs() function (at least for now) needs them to be present to work. Frustratingly, the DAX functionality that does need attribute hierarchies is not documented.

To give you an example, consider a Tabular model that contains a table with three columns, Product, Customer and Sales, and a measure that sums up the values in the Sales column.

image

 

I can query this Tabular model in Power BI, for example by creating a Matrix visualisation:

image

I can also get the same values out using an Excel PivotTable:

image

Now the reason I can create this PivotTable is that Analysis Services Tabular has created attribute hierarchies on the Customer and Product columns. However, the important thing to understand is that Analysis Services Tabular creates attribute hierarchies on every column on every table by default, including the Sales column. This allows me to create a PivotTable like this, with the distinct values from Sales on the rows of the PivotTable:

image

image
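
Behind the scenes, a PivotTable like this can only be built because there is an attribute hierarchy on the Sales column for Excel to put on the rows axis. The MDX generated will be something along the following lines – this is only a rough sketch, and it assumes the table is called Sales and the model is called Model, neither of which is shown above:

SELECT
    {[Measures].[Sales Amount]} ON 0,
    [Sales].[Sales].[Sales].ALLMEMBERS ON 1
FROM [Model]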

You’re probably thinking, why would I ever want to use Sales – a measure column – like this? And the answer is you probably wouldn’t, even though Tabular allows this by default. What’s more, building the attribute hierarchy for Sales makes processing slower and the resulting hierarchy uses memory, so all this comes at a cost. The IsAvailableInMDX property is therefore very useful because it allows you to stop attribute hierarchies from being built on columns like Sales where they serve no real purpose.

Unfortunately at the time of writing SSDT doesn’t allow you to set the IsAvailableInMDX property but the good news is that the latest versions of Tabular Editor do:

image

Setting IsAvailableInMDX to false for the Sales field has no impact at all in Power BI, so long as you are not using functionality like TreatAs() that needs it. In Excel, it just means that it is no longer possible to drag Sales onto rows or columns in a PivotTable – the Sales Amount measure still works:

image
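
To give an idea of the kind of query that would be affected: a DAX query like the following uses TreatAs() over the Sales column, and so needs the attribute hierarchy on that column to be present. This is just a sketch – it assumes the table is called Sales, and the values in the table constructor are made up:

EVALUATE
CALCULATETABLE (
    ROW ( "Total", [Sales Amount] ),
    // treat these two values as if they were values in the Sales column
    TREATAS ( { 1, 2 }, Sales[Sales] )
)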

As a result, there are two recommendations that can be made:

  • If you are not using any client tools that generate MDX queries (such as Excel) or you want to prevent your users from using them, and you can be sure that it does not affect any of your existing Power BI reports or DAX calculations, you can set IsAvailableInMDX to false on every column of every table
  • If you are using client tools that generate MDX you can still probably set IsAvailableInMDX to false on every measure column and not lose any important functionality

How much of an impact will doing this have on processing times and memory usage? It depends, but it could be a lot. The anecdotal evidence on Twitter is promising:

image

image

I did my own (not particularly scientific) test using a table with five million rows and ten columns, each of which contained the integers between one and five million. Here’s the M query to generate such a table without the need for an external data source:

let
    Source =
	Table.FromColumns(
		List.Repeat(
			{{1..5000000}},
			10
		)
	),
    #"Changed Type" =
	Table.TransformColumnTypes(
		Source,
		List.Transform(
			Table.ColumnNames(Source),
			each {_, Int64.Type}
		)
	)
in
    #"Changed Type"

On my laptop, with IsAvailableInMDX set to true for all ten columns, a full process on this table took around 105 seconds and the table size reported by Vertipaq Analyzer was 381MB. After changing IsAvailableInMDX to false for all ten columns, the time for a full process went down to around 81 seconds and the table size was down to 191MB.

In summary, this is one of those seemingly obscure technical changes that turns out to be way more useful than you might think. If you test out setting IsAvailableInMDX on your Tabular model, please leave a comment letting me know what kind of impact it had!

[Thanks to Daniel Otykier for providing a lot of information for this post]

More Details On The RowsetSerializationLimit Server Property For Analysis Services

The recent blog post on the Analysis Services team blog about new memory options contained information about some very interesting functionality that has just been added. The first of the new features I decided to try was the RowsetSerializationLimit server property, which restricts the number of rows returned by a query and which is intended to stop rogue users running queries that return very large amounts of data. It certainly works but there are two things to point out about it that aren’t immediately obvious.

First of all, an example of it in action. In Azure Analysis Services, using a database that contains one very simple table with one column and 5000 rows, the following DAX query returns all 5000 rows from the table:

EVALUATE 'MyTable'

image

To change the RowsetSerializationLimit server property, open SQL Server Management Studio, right-click on your server name in the Object Explorer pane and select Properties. Then go to the General page, check the Show Advanced (All) Properties box, and you’ll see the property listed under OLAP\Query\RowsetSerializationLimit:

image

The default value is –1, meaning no limit is placed on the number of rows returned by a rowset. Changing it to 100 and then running the above query results in the following error:

image

Executing the query …
Obtained object of type: Microsoft.AnalysisServices.AdomdClient.AdomdDataReader
The maximum number of rows (100) was reached.

A query that returns fewer than 100 rows, for example

EVALUATE TOPN(99,'MyTable')

…is ok though.

The important thing to understand is that although this applies to both DAX and MDX queries, it only applies to queries whose results are returned as rowsets (a tabular format) and not as cellsets (a multidimensional format). Most client tools that generate MDX, including Excel, use cellsets, so unfortunately this property will not apply to them.

For example, the following MDX query run from a SQL Server Management Studio MDX query window (which also returns a cellset) on the same database runs ok and returns 5000 rows:

SELECT
{[Measures].[Number Of Rows]} ON 0,
[MyTable].[Column1].[Column1].MEMBERS ON 1
FROM
[Model]

 

image

However the same query run from Power BI Desktop to import data from the same database:

image

…runs, but returns only 100 rows and then an error:

image

Something else to point out is that this applies to all queries that return rowsets, even Discover requests and DMV queries. As a result, setting this property to a very low value can cause problems in tools like SQL Server Management Studio: for example if you set the property to 10 and you had 11 tables in a database, you would see an error when you tried to expand the Tables node of a database in the Object Explorer pane!

Even though this property only applies to rowsets I think it’s still going to be very useful, especially in scenarios where Power BI Desktop users are importing vast amounts of data from Analysis Services and causing major problems on the server.

A New Approach To Handling SSRS Multi-Valued Parameters in DAX Queries

Back in 2012 I wrote a blog post explaining how to handle multi-value parameters in DAX queries in Reporting Services reports. The approach I took back then was basically:

  1. Generate a string containing a pipe-delimited list of all the parameter values that the user has selected (I did this in DAX too, but to be honest it’s better to use the SSRS Join function to do this outside the DAX query)
  2. Use the DAX PathContains() function in a Filter() to check whether the value on the current row of the table being filtered appears in the pipe-delimited list

Here’s a deliberately simplified example of how this works based on Adventure Works DW data. The following query filters the FactInternetSales fact table and returns the rows for the Sales Order Numbers that are present in the OrderList variable:

EVALUATE
VAR OrderList = "SO43713|SO43758|SO43784|SO43821"
RETURN
    FILTER (
        FactInternetSales,
        PATHCONTAINS(OrderList, FactInternetSales[SalesOrderNumber])
    )

The trouble with this approach is that it can be very slow. Running a trace in DAX Studio for the query above reveals the problem:

image

The presence of CallbackDataID shows that the Storage Engine is calling the Formula Engine to handle the use of PathContains() in the filter, and this is often a cause of poor query performance. However back when I wrote the post the only alternative was, as Chris Koester points out here, to dynamically generate the entire DAX query as an SSRS expression and that is very painful to do.

The good news is that recent changes in DAX mean that there is another way to tackle this problem that can give much better performance. Here’s an example of this new approach:

EVALUATE
VAR OrderList = "SO43713|SO43758|SO43784|SO43821"
VAR OrderCount =
    PATHLENGTH ( OrderList )
VAR NumberTable =
    GENERATESERIES ( 1, OrderCount, 1 )
VAR OrderTable =
    GENERATE (
        NumberTable,
        VAR CurrentKey = [Value]
        RETURN
            ROW ( "Key", PATHITEM ( OrderList, CurrentKey ) )
    )
VAR GetKeyColumn =
    SELECTCOLUMNS ( OrderTable, "Key", [Key] )
VAR FilterTable =
    TREATAS ( GetKeyColumn, FactInternetSales[SalesOrderNumber] )
RETURN
    CALCULATETABLE ( FactInternetSales, FilterTable )

Broken down variable by variable, here’s how it works:

  1. OrderList is the pipe-delimited list of key values passed from SSRS
  2. OrderCount uses the PathLength() DAX function to find the number of parameter values in this list
  3. NumberTable uses the GenerateSeries() function to create a table of numbers with one row for each number between 1 and the number of parameter values in the list
  4. OrderTable uses the trick Marco describes here to iterate over NumberTable and, for each row, uses the PathItem() function to return the parameter value at that position in the list
  5. GetKeyColumn uses the SelectColumns() DAX function to only return the column from OrderTable that contains the parameter values
  6. FilterTable uses the TreatAs() DAX function to take the table of values returned by GetKeyColumn and treat them as values in the FactInternetSales[SalesOrderNumber] column
  7. Finally, the query returns the contents of the FactInternetSales table filtered by the values in FilterTable using the CalculateTable() DAX function

There’s a lot of extra code here and in some cases you may find that performance with smaller data volumes is worse as a result, but in this particular case the new approach is twice as fast as the old one. There’s certainly no CallbackDataID:

image

How To Tell Whether Query Folding Is Taking Place When Importing Data From Analysis Services In Power BI And Excel Power Query

As a quick follow-on from last week’s post on how to detect whether query folding is taking place when importing from OData data sources, if you’re importing data from Analysis Services you have a similar problem: how do you know whether query folding is taking place? Ensuring that query folding takes place for as many of the steps in your query as possible – especially those that filter or otherwise reduce the amount of data returned – is very important for data refresh performance.

Although the Power Query engine generates MDX queries when importing from Analysis Services in the same way it generates SQL queries when it imports from a relational database, the View Native Query option doesn’t work for Analysis Services data sources. You can of course use a Profiler trace or xEvents to see the MDX, but for most users that will not be an option for security reasons. The UI does tell you in another way though. If query folding is taking place for a step, then the Cube Tools menu on the ribbon (with the Add Items and Collapse Columns buttons) will be available, and in the top right-hand corner of the table in the results area there will be a cube icon:

image

If query folding is not taking place for a step (even though it might be taking place for previous steps in the query) then the Cube Tools menu will not be visible, and the cube icon will be replaced by a table icon. For example, in the following screenshot an Index column has been added to the query shown above, so query folding is not taking place from this step on:
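
For what it’s worth, the step that breaks query folding in this example is just a standard Table.AddIndexColumn() call, the kind of thing the Index Column button on the Add Column tab generates – something like this sketch, where the step names are illustrative:

#"Added Index" = Table.AddIndexColumn(#"Previous Step", "Index", 0, 1)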

image

[Thanks to Jure Jaklic for pointing this out]
