Finding All Selected Items In An Excel Slicer Connected To SSAS, Power BI Or the Excel Data Model Using Dynamic Arrays

The big news in the world of Excel right now is the introduction of dynamic arrays. They're only available in the Office 365 click-to-run version of Excel and, at the time of writing, only to people on the Office Insiders programme, but eventually they'll be available to anyone running Excel for Office 365 on their desktop. There are already lots of blog posts about them, including this overview by Jon Acampora, and you may also want to download Bill Jelen's detailed mini-book on them here, which is free for the rest of 2018. Now I'm not an Excel expert by any stretch of the imagination, but I'm as excited as anyone else about dynamic arrays because they will be incredibly useful for anyone building reports with Excel cube functions against Analysis Services, the Excel Data Model/Power Pivot or Power BI. Bill Jelen's book has a short section on this subject, but the possibilities are limitless…

Here’s one example of how they can be used. A while ago I blogged about how to use a regular array formula and the TextJoin() Excel function to get all the selected items from a slicer. Dynamic arrays make this problem much easier to solve. Take the following table loaded into the Excel Data Model:

[screenshot: the source table]

Now, say you have a PivotTable built from this and a slicer (called Slicer_Fruit) connected to it:

[screenshot: the PivotTable and the Slicer_Fruit slicer]

It’s possible to use the CubeSet() function to get the set of selected items in a slicer using the following formula:

=CUBESET("ThisWorkbookDataModel",Slicer_Fruit,"Slicer Set")

Assuming this formula is in cell H1, you can then get the number of items in this set using CubeSetCount():

=CUBESETCOUNT($H$1)

Assuming this is in cell H2, all you need to do to get a comma-delimited list of all the selected items in the slicer via this set is:

=
 TEXTJOIN(
  ", ", 
  TRUE, 
  CUBERANKEDMEMBER(
   "ThisWorkbookDataModel",
   $H$1, 
   SEQUENCE($H$2,1)
  )
 )

[screenshot: the formulas and their results]

Here it is in action:

[animated demo]

It works as follows:

  • The new Sequence() function is used to create a dynamic array of numbers from one to the number returned by the CubeSetCount() function (see the small example after this list).
  • The CubeRankedMember() function gets the name of each item in the set, using the numbers returned by Sequence() as rank positions.
  • TextJoin() then concatenates all of the names returned by CubeRankedMember() into a single comma-delimited list.
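
As a quick illustration of the first point, entering the following formula in an empty cell (assuming, as above, that $H$2 holds the CubeSetCount() result) spills a single-column dynamic array running from 1 to the value in $H$2 – so if $H$2 contains 3 you get 1, 2 and 3:

=SEQUENCE($H$2,1)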

You can download the sample workbook here (remember it will only work if you have a version of Excel installed that includes dynamic arrays!).

Performance Problems With MDX Calculated Measures That Return Constants In Analysis Services Multidimensional

Recently I was working on an MDX calculation that calculated a value at a low granularity and aggregated the result – basically what I described in my blog post here. Here’s a simplified version of the calculation written on a cube built from Adventure Works data running on my local SSAS 2017 MD instance:

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*0.08;
    END SCOPE;
END SCOPE;

All it does is take the value of the Sales Amount measure at the lowest granularities of the Customer, Date and Product dimensions, multiply it by 0.08 to find a tax value, and because [Tax Amount] is a real, non-calculated measure, the result of the calculation aggregates up through the cube. [I know that I don’t have to aggregate the result of this specific calculation but remember that this is a simplified example – in the real case I did have to write the calculation using Scope statements – and anyway the best way of handling a basic multiplication like this would be with a measure expression]

The performance was sub-second for my test query and I was happy, but then I realised that the same tax rate was being used in other calculations and may change in the future, so I thought I would store the value 0.08 in a calculated measure:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
0.08;

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

Surely a simple change? But no, as soon as I did this my query ran for several minutes and memory usage went through the roof until the query was automatically cancelled:

[screenshot: memory usage during the query]

Clearly the SSAS MD Formula Engine could optimise the version with the hard-coded constant value but could not optimise the version with the calculated measure. There was nothing in Profiler to indicate the calculation was being evaluated in cell-by-cell mode though.

So I tried another variation:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
IIF([Measures].[Sales Amount]=0, NULL, 0.08);

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

This time the memory usage was completely flat but the query was still so slow it had to be cancelled. Next, I thought I'd try setting the NON_EMPTY_BEHAVIOR property:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
0.08
, NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount];

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

And guess what? The query went back to being sub-second. Strange. Now the NON_EMPTY_BEHAVIOR property is very dangerous and this is a clear example of how it should not be set – the expression 0.08 is never going to return an empty value, regardless of the value of the [Sales Amount] measure. As a result I would be very wary of using this trick in production in case it ended up returning inconsistent results. It’s also worth noting that the following calculation, which is a correct use of NON_EMPTY_BEHAVIOR, is as slow as the other examples above:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
IIF([Measures].[Sales Amount]=0, NULL, 0.08)
, NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount];

Finally I created a dummy table in my DSV with one row and one column to hold the 0.08 value, and then created a real, non-calculated measure from that column. When I used this measure in the calculation, the performance of my test query was again sub-second.
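
For reference, here's a sketch of what that final, fast version looked like – [Tax Rate Value] is a hypothetical name for the real measure built from the dummy table's column:

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        //[Tax Rate Value] is a real, non-calculated measure holding the 0.08 value
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate Value];
    END SCOPE;
END SCOPE;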

So, to sum up, it looks like putting constant values inside calculated measures and then referencing them from other calculations is a bad idea, at least in cases like this. Either hard-coding the values themselves in your MDX calculations, or creating a table and a real, non-calculated measure specifically to hold the value, is better for performance.

Analysis Services Import Performance Improvements In The August 2018 Release Of Power BI

While I was playing around with the new release (August 2018) of Power BI Desktop I noticed there was an undocumented change: similar to the OData improvements I blogged about here, there is a new option in the AnalysisServices.Database() and AnalysisServices.Databases() M functions that turns on a newer version of the MDX generation layer used by the Power Query engine. Like the OData improvements it is an option called Implementation="2.0", used like this:

AnalysisServices.Databases(
	"localhost", 
	[
		TypedMeasureColumns=true, 
		Implementation="2.0"
	]
)

…and also, as with the OData improvements, you will need to manually edit any existing M queries to take advantage of this.

In fact I first heard about this option in a comment on this blog post back in January, but as the dev team told me it hadn't been tested properly at that point, I didn't blog about it. However, as it is now set by default in the M code generated by the Power Query Editor, I guess it's ready for production use. I'm told it should improve the performance of M queries that import data from Analysis Services – and I would be very interested to hear from anyone who tests this about how much of an improvement they see.

I've done a little bit of testing myself and I can see there are indeed some differences in the MDX generated. For example, I created an M query that connected to the Adventure Works DW database and returned all combinations of customer and day name where the Internet Sales Amount measure is greater than 100. In the old version of the MDX generation layer (i.e. without Implementation="2.0" set) the following MDX query is generated:

SELECT
    {
        [Measures].[Internet Sales Amount]
    }ON 0,
    SUBSET(
        NONEMPTY(
            CROSSJOIN(
                [Date].[Day Name].[Day Name].ALLMEMBERS,
                [Customer].[Customer].[Customer].ALLMEMBERS
            ),
            {
                [Measures].[Internet Sales Amount]
            }
        ),
        4096
    )
    PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON 1
FROM(
    SELECT
        FILTER(
            CROSSJOIN(
                [Date].[Day Name].[Day Name].ALLMEMBERS,
                [Customer].[Customer].[Customer].ALLMEMBERS
            ),
            (
                NOT((
                    ISEMPTY(
                        [Measures].[Internet Sales Amount]
                    )
                OR
                    ISEMPTY(
                        100
                    )
                ))
            AND
                ([Measures].[Internet Sales Amount] > 100)
            )
        )ON 0
    FROM [Adventure Works]
)CELL PROPERTIES VALUE

 

However, with the Implementation="2.0" option set, the following query is generated:

SELECT
    {
        [Measures].[Internet Sales Amount]
    }ON 0,
    SUBSET(
        NONEMPTY(
            FILTER(
                CROSSJOIN(
                    [Date].[Day Name].[Day Name].ALLMEMBERS,
                    [Customer].[Customer].[Customer].ALLMEMBERS
                ),
                (
                    NOT(ISEMPTY(
                        [Measures].[Internet Sales Amount]
                    ))
                AND
                    ([Measures].[Internet Sales Amount] > 100)
                )
            ),
            {
                [Measures].[Internet Sales Amount]
            }
        ),
        4096
    )
    PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON 1
FROM [Adventure Works] CELL PROPERTIES VALUE

The difference between the two is that the first query uses a subselect to do the filtering whereas the second does not; subselects in MDX are not necessarily bad, but I generally try to avoid using them unless I need to. There may be other differences in the way the MDX is generated in the new version but I haven’t had a chance to do any detailed testing.

More Details On The RowsetSerializationLimit Server Property For Analysis Services

The recent blog post on the Analysis Services team blog about new memory options contained information about some very interesting functionality that has just been added. The first of the new features I decided to try was the RowsetSerializationLimit server property, which restricts the number of rows returned by a query and which is intended to stop rogue users running queries that return very large amounts of data. It certainly works but there are two things to point out about it that aren’t immediately obvious.

First of all, an example of it in action. In Azure Analysis Services, using a database that contains one very simple table with one column and 5000 rows, the following DAX query returns all 5000 rows from the table:

EVALUATE 'MyTable'

[screenshot: the query results]

To change the RowsetSerializationLimit server property, open SQL Server Management Studio, right-click on your server name in the Object Explorer pane and select Properties. Then go to the General page, check the Show Advanced (All) Properties box, and you’ll see the property listed under OLAP\Query\RowsetSerializationLimit:

[screenshot: the RowsetSerializationLimit property in SSMS]

The default value is -1, meaning no limit is placed on the number of rows returned by a rowset. Changing it to 100 and then running the above query results in the following error:

[screenshot: the error message]

Executing the query …
Obtained object of type: Microsoft.AnalysisServices.AdomdClient.AdomdDataReader
The maximum number of rows (100) was reached.

A query that returns fewer than one hundred rows, for example

EVALUATE TOPN(99,'MyTable')

…is ok though.

The important thing to understand is that although the limit applies to both DAX and MDX queries, it only applies to queries whose results are returned as rowsets (a tabular format) and not as cellsets (a multidimensional format). Most client tools that generate MDX, including Excel, request cellsets, so unfortunately this property will not apply to them.

For example, the following MDX query, run from a SQL Server Management Studio MDX query window (which returns its results as a cellset) on the same database, runs ok and returns 5000 rows:

SELECT
{[Measures].[Number Of Rows]} ON 0,
[MyTable].[Column1].[Column1].MEMBERS ON 1
FROM
[Model]

 

[screenshot: the query results]

However the same query run from Power BI Desktop to import data from the same database:

[screenshot: the query in Power BI Desktop]

…runs, but returns only 100 rows and then an error:

[screenshot: the error in Power BI Desktop]

Something else to point out is that this applies to all queries that return rowsets, even Discover requests and DMV queries. As a result, setting this property to a very low value can cause problems in tools like SQL Server Management Studio: for example if you set the property to 10 and you had 11 tables in a database, you would see an error when you tried to expand the Tables node of a database in the Object Explorer pane!

Even though this property only applies to rowsets I think it’s still going to be very useful, especially in scenarios where Power BI Desktop users are importing vast amounts of data from Analysis Services and causing major problems on the server.

Analysis Services Query Analyzer

Last week a new, free tool for analysing the performance of MDX queries on SSAS Multidimensional was released: Analysis Services Query Analyzer. You can get all the details and download it here:

https://ssasqueryanalyzer.github.io/

…and here’s a post on LinkedIn by one of the authors, Francesco De Chirico, explaining why he decided to build it:

https://www.linkedin.com/pulse/asqa-10-released-francesco-de-chirico/

I've played around with it a bit and I'm very impressed – it's a really sophisticated and powerful tool, and one I'm going to spend some time learning because I'm sure it will be very useful to me. It's an add-in for SQL Server Management Studio that captures Profiler and Perfmon data for an MDX query when you run it, does so on both a cold and a warm cache, and then displays the results in a variety of graphs and charts. It even has an MDX formatter built in!

[screenshots: Analysis Services Query Analyzer in SQL Server Management Studio]

You can find a full list of features here, and when you download the tool it comes with very detailed documentation. Definitely a must-have for all serious SSAS Multidimensional developers.

Drillthrough On Calculated Members In SSAS MD 2017 Using DAX Expressions, Part 2

If you read part 1 of this series you probably already have a good idea of how I’m going to use DAX queries in actions to implement drillthrough on calculated members. However, there is one last major problem to solve – and no 100% fool-proof way of solving it.

That problem is that there is a maximum length of (as far as I can tell) 32768 characters for the DAX query that is generated by the action. If you exceed that limit you’ll see an error in Excel when you drillthrough:

[screenshot: the error shown in Excel]

This is the result of the generated DAX query being truncated before Excel tries to run it. As you can imagine, if you have large dimensions and you’re generating a DAX query with thousands of key values in an IN expression, it’s very easy to hit the maximum query length. What can you do to make this less likely?

Probably the most effective method is to check whether the IN clause will contain all of the possible key values on the dimension (most likely because no attributes from the dimension are used in the query), and if so don’t bother including that dimension in the DAX query. Here’s an example of how this can be done:

IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
",'Date'[Date.Key0] IN {
" +
GENERATE(EXISTING [Date].[Date].[Date].MEMBERS , 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
")

Another option is to use NonEmpty() to remove any members from the drillthrough that have no data, like so:

IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
",'Date'[Date.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS, [Measures].[Sales Amount]) 
, [Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
")

However, if these techniques (and there may be others) still don't reduce the query to an acceptable length, you will want to fail as gracefully as possible and show a more helpful error message than the one above. You can do this with the DAX Error() function, like so:

IIF(
COUNT(EXISTING [Customer].[Customer].[Customer].MEMBERS) = 
COUNT([Customer].[Customer].[Customer].MEMBERS),
"",
",'Customer'[Customer.Key0] IN {
" +
IIF(COUNT(
{EXISTING 
NONEMPTY(
[Customer].[Customer].[Customer].MEMBERS, 
[Measures].[Sales Amount])} 
AS CustomerList)>7000,
"ERROR(
""Please reduce your selection on the Customer dimension
 or remove it from your PivotTable"")",
GENERATE(
CustomerList
, [Customer].[Customer].CURRENTMEMBER.PROPERTIES("KEY"), ",")
)
+ "}
")

In this example, if the resulting DAX IN expression for the Customer dimension will have more than 7000 keys in it, the entire DAX query returns a custom error message instead:

[screenshot: the custom error message shown in Excel]

Once again this is a far from perfect solution – I would have liked to test the total number of characters in the query, but if you do that you have to write the expression twice, once in the first parameter of IIF() and once in one of the results, and that would be horrible. My gut feeling is that you should only use this technique on dimensions with a large number of members on the key attribute.

Putting this all together, for a simple cube based on data from Adventure Works with three dimensions (Date, Product and Customer) here’s what a complete Action Expression for a regular measure might look like:

"EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
""Sales Order Number"",
'Sales Order'[Sales Order Number],
""Sales Order Line Number"",
'Sales Order'[Sales Order Line Number],
""Sales Amount"",
[Sales Amount])
" +
IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
",'Date'[Date.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS, 
[Measures].[Sales Amount]) 
, [Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
") +
IIF(
COUNT(EXISTING [Customer].[Customer].[Customer].MEMBERS) = 
COUNT([Customer].[Customer].[Customer].MEMBERS),
"",
",'Customer'[Customer.Key0] IN {
" +
IIF(
COUNT(
{EXISTING NONEMPTY([Customer].[Customer].[Customer].MEMBERS, 
[Measures].[Sales Amount])} AS CustomerList)>7000,
"ERROR(""Please reduce your selection on the Customer dimension 
or remove it from your PivotTable"")",
GENERATE(
CustomerList
, [Customer].[Customer].CURRENTMEMBER.PROPERTIES("KEY"), ",")
)
+ "}
") +
IIF(
COUNT(EXISTING [Product].[Product].[Product].MEMBERS) = 
COUNT([Product].[Product].[Product].MEMBERS),
"",
",'Product'[Product.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Product].[Product].[Product].MEMBERS, 
[Measures].[Sales Amount]) 
, [Product].[Product].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
") +
"),
[Sales Amount]>0)"

 

What about a calculated measure though? Assuming that you have a time utility/date tool/shell dimension, you can use a Condition expression on the action above to make sure it only gets executed for the member on the time utility dimension that contains the non-calculated values, in this case called [Actual Value]:

([Measures].CURRENTMEMBER IS [Measures].[Sales Amount]) AND
([Date Calculations].[Date Calculations].CURRENTMEMBER IS 
[Date Calculations].[Date Calculations].&[Actual Value])

[screenshot]

Now, let’s say there is a calculated member on the time utility dimension that contains a year-to-date calculation with the following definition:

CREATE MEMBER 
CURRENTCUBE.[Date Calculations].[Date Calculations].[Year-To-Date] 
AS
AGGREGATE(
PERIODSTODATE(
[Date].[Calendar].[Year],
[Date].[Calendar].CURRENTMEMBER),
[Date Calculations].[Date Calculations].&[Actual Value]);

You can then create a new action with a Condition expression like the following, which restricts it to the year-to-date calculation:

([Measures].CURRENTMEMBER IS [Measures].[Sales Amount]) AND
([Date Calculations].[Date Calculations].CURRENTMEMBER IS 
[Date Calculations].[Date Calculations].[Year-To-Date])

Now, the final problem to solve is to generate a DAX query that returns all of the Sales Orders from the beginning of the current year up to and including the selected date – the Sales Orders that a user would expect to see when they drilled through on the year-to-date calculated member above.

Here’s a modified MDX expression for the Date dimension that returns a DAX expression that finds all of the dates associated with the current selection on the Date dimension, finds the maximum date, then filters the drillthrough DAX query by all dates from the beginning of the current calendar year up to and including the maximum date:

IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
", VAR FilteredDates = FILTER('Date', 'Date'[Date.Key0] IN {
" +
GENERATE(
EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS,[Measures].[Sales Amount]) , 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "})
VAR MaxDate = MAXX(FilteredDates, 'Date'[Date.Key0])
VAR MaxYear = MAXX(FilteredDates, 'Date'[Year])
RETURN 
FILTER('Date', 'Date'[Date.Key0]<=MaxDate && 'Date'[Year]=MaxYear)
")

Here’s the complete MDX expression for the year-to-date drillthrough:

"EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
""Sales Order Number"",
'Sales Order'[Sales Order Number],
""Sales Order Line Number"",
'Sales Order'[Sales Order Line Number],
""Sales Amount"",
[Sales Amount])
" +
IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
", VAR FilteredDates = FILTER('Date', 'Date'[Date.Key0] IN {
" +
GENERATE(
EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS,[Measures].[Sales Amount]) , 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "})
VAR MaxDate = MAXX(FilteredDates, 'Date'[Date.Key0])
VAR MaxYear = MAXX(FilteredDates, 'Date'[Year])
RETURN 
FILTER('Date', 'Date'[Date.Key0]<=MaxDate && 'Date'[Year]=MaxYear)
") +
IIF(
COUNT(EXISTING [Customer].[Customer].[Customer].MEMBERS) = 
COUNT([Customer].[Customer].[Customer].MEMBERS),
"",
",'Customer'[Customer.Key0] IN {
" +
IIF(COUNT({
EXISTING 
NONEMPTY([Customer].[Customer].[Customer].MEMBERS, 
[Measures].[Sales Amount])} AS CustomerList)>7000,
"ERROR(""Please reduce your selection on the Customer dimension or 
remove it from your PivotTable"")",
GENERATE(
CustomerList
, [Customer].[Customer].CURRENTMEMBER.PROPERTIES("KEY"), ",")
)
+ "}
") +
IIF(
COUNT(EXISTING [Product].[Product].[Product].MEMBERS) = 
COUNT([Product].[Product].[Product].MEMBERS),
"",
",'Product'[Product.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Product].[Product].[Product].MEMBERS, 
[Measures].[Sales Amount]) 
, [Product].[Product].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
") +
"),
[Sales Amount]>0)"
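
To make it clearer what all this string concatenation produces: if a user drilled through on the year-to-date value for, say, April 2003, with no selections on the Customer or Product dimensions, the expression above would generate a DAX query along these lines (the list of date keys is abbreviated here):

EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
"Sales Order Number",
'Sales Order'[Sales Order Number],
"Sales Order Line Number",
'Sales Order'[Sales Order Line Number],
"Sales Amount",
[Sales Amount])
, VAR FilteredDates = FILTER('Date', 'Date'[Date.Key0] IN {
20030401,20030402,20030403 /* ...and the remaining non-empty date keys in April 2003... */ })
VAR MaxDate = MAXX(FilteredDates, 'Date'[Date.Key0])
VAR MaxYear = MAXX(FilteredDates, 'Date'[Year])
RETURN 
FILTER('Date', 'Date'[Date.Key0]<=MaxDate && 'Date'[Year]=MaxYear)
),
[Sales Amount]>0)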

I warned you it was going to be complicated, didn't I? You can download an SSAS MD 2017 .abf backup file containing the sample database and the two actions here.

Drillthrough On Calculated Members In SSAS MD 2017 Using DAX Expressions, Part 1

Without a doubt one of the most useful features of SSAS Tabular 2017 is the new Detail Rows Expression property. It allows you to control exactly which columns and rows appear when you do a drillthrough – something that is particularly important when you're drilling through on a calculation, and something that SSAS MD users have also wanted for a long time. For example, imagine you have an Excel PivotTable that is sliced by a single date and a calculated member that shows a year-to-date sum of sales: when a user does a drillthrough they would expect to see all the fact data that contributes to the value they have clicked on, which in this case means data for all dates from the beginning of the year up to the selected date. This is what the Detail Rows Expression property makes possible, and it is exactly what a regular drillthrough in SSAS MD doesn't do.
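
For anyone who hasn't seen the Tabular feature, a Detail Rows Expression is just a DAX table expression that is evaluated when a user drills through on a measure; here's a minimal sketch, where the 'Internet Sales' table and column names are made up purely for illustration:

SELECTCOLUMNS(
'Internet Sales',
"Order Number", 'Internet Sales'[Sales Order Number],
"Order Date", 'Internet Sales'[Order Date],
"Sales Amount", 'Internet Sales'[Sales Amount]
)

Whatever table this expression returns is what the user sees as the drillthrough resultset.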

There have been many attempts at solving this problem in SSAS MD, from Mosha’s blog post back in 2008 to these custom functions in the Analysis Services Stored Procedure Project (for a few more weeks still on Codeplex, but when Codeplex dies available here on GitHub). None of these solutions have been perfect and all have involved a certain amount of .NET code. In this series of posts I’m going to describe a slightly different approach, and while it isn’t perfect either and is very complex (you’ll need to be good at MDX and DAX to implement it) I think it has a lot to recommend it, not least because no .NET code is required. In this first post I’m going to demonstrate some of the functionality that makes my approach possible; in part 2 I’ll put it all together into a working solution.

First thing to note: you have been able to query SSAS MD using DAX as well as MDX since SQL Server 2012 SP1 CU3. Most client tools, like Excel, generate MDX queries, but Power BI, for example, generates DAX queries when you create a Live connection to SSAS MD. To learn more about DAX support in SSAS MD, this video of a session of mine from SQLBits a few years ago is a good place to start; it's fairly old but most of the content is still relevant.

This in turn means that you can create a Rowset action (not a Drillthrough action) in SSAS MD that returns the results of a DAX query. Here's an example of an action that does this:

[screenshot: the Rowset action definition]

The Action Expression property is an MDX expression that returns the text of the query to be executed and whose results will be displayed to the user as the output of the drillthrough. In this case the MDX expression consists of one string, and that string is a DAX query that returns a list of Sales Order Numbers, Line Numbers and their associated Sales Amounts:

EVALUATE 
SELECTCOLUMNS(
'Sales Order', 
"Sales Order Number",
'Sales Order'[Sales Order Number],
"Sales Order Line Number",
'Sales Order'[Sales Order Line Number],
"Sales Amount",
[Sales Amount])

Here’s the result in Excel:

[screenshots: the results in Excel]

This is just a static query though, and for an action you will need to generate a query dynamically to return an appropriate table of data depending on which cell the user has drilled through on.

However before I carry on there’s an important question that needs to be addressed. You may be wondering why I’m using a DAX query for this, when I could be using an MDX DRILLTHROUGH statement (as in the approaches linked to above) or an MDX SELECT statement. The problem with a DRILLTHROUGH statement is that it can only accept an MDX SELECT statement that returns a single cell in its first parameter; this means it’s not possible to get it to return more complex resultsets like the one required for the year-to-date example above. Normal MDX SELECT statements don’t suffer from this restriction and it would indeed be possible to dynamically generate one that meets any need. Unfortunately when the results of an MDX SELECT statement are returned from a Rowset action you have no control over the format of the column headers that are returned, and they are often not pretty at all. A DAX query, in contrast, gives you complete control over the data that is returned and the way the column headers are formatted.
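
To illustrate that first restriction, here's a sketch of a DRILLTHROUGH statement against the standard Adventure Works cube: it is valid because the inner SELECT returns just a single cell, but there's no way to write an equivalent statement for something like the year-to-date scenario, where the drillthrough needs to cover many cells at once:

DRILLTHROUGH MAXROWS 1000
SELECT
{[Measures].[Sales Amount]} ON 0
FROM [Adventure Works]
WHERE([Date].[Calendar Year].&[2003])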

The last question I’m going to address in this post is how the DAX query can be made dynamic. To do this I’m going to use the new DAX IN operator, which is only available in SSAS 2017. As always with DAX, there’s a great article describing it written by Marco Russo here:

https://www.sqlbi.com/articles/the-in-operator-in-dax/

Here’s how the DAX query above can be adapted to return the Sales Orders for just two dates using the IN operator:

EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
"Sales Order Number",
'Sales Order'[Sales Order Number],
"Sales Order Line Number",
'Sales Order'[Sales Order Line Number],
"Sales Amount",
[Sales Amount]),
'Date'[Date.Key0] IN {20030101, 20030102}),
[Sales Amount]>0)

[screenshot: the query results]

In this example, the ‘Date’[Date.Key0] column is the column that contains the key values of the Date attribute on the Date dimension in my SSAS cube. To make this dynamic, you need an MDX expression that will return a query like the one above and, in particular, return a different list of date keys depending on what the user has drilled through on. The MDX GENERATE() function can be used to do this: you can use it to iterate over the set of existing members on the Date attribute of the Date dimension and output a comma-delimited list of key values from each member:

"EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
""Sales Order Number"",
'Sales Order'[Sales Order Number],
""Sales Order Line Number"",
'Sales Order'[Sales Order Line Number],
""Sales Amount"",
[Sales Amount]), 
'Date'[Date.Key0] IN {" +
GENERATE(EXISTING [Date].[Date].[Date].MEMBERS, 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}),
[Sales Amount]>0)"

If this expression is used in an action and a user drills through on, say, the month April 2003, the following DAX query is generated and run to get all the Sales Orders for all the days in April 2003:

EVALUATE  
FILTER( 
CALCULATETABLE( 
SELECTCOLUMNS( 
'Sales Order',  
"Sales Order Number", 
'Sales Order'[Sales Order Number], 
"Sales Order Line Number", 
'Sales Order'[Sales Order Line Number], 
"Sales Amount", [Sales Amount]), 
'Date'[Date.Key0] IN 
{20030401,20030402,20030403,20030404,20030405,
20030406,20030407,20030408,20030409,20030410,20030411,
20030412,20030413,20030414,20030415,20030416,20030417,
20030418,20030419,20030420,20030421,20030422,20030423,
20030424,20030425,20030426,20030427,20030428,20030429,
20030430})
, [Sales Amount]>0)

OK, that’s more than enough for one post. In my next post I’m going to look at some of the shortcomings of this approach, how they can be (partly) worked around, and demonstrate a full solution for drillthrough on a regular measure and also on a year-to-date calculation.
