More Detail On Detail Rows Expressions In SSAS Tabular V.Next

My second-favourite feature in SSAS Tabular v.next after Power Query integration is the Detail Rows expression property for measures – it not only brings drillthrough on measures to Tabular, it means that we can define meaningful drillthrough on any measure, no matter how it is calculated. There’s a basic description of the functionality in this blog post but I thought it would be useful to walk through a simple example showing how it can be used.

Consider a simple SSAS Tabular model with two tables in it. First, a table containing sales data called Sales:

[Image: the Sales table]

Second, a date table called Date:

[Image: the Date table]

[Note: dates are in DD/MM/YYYY format]

There are two measures with the following definitions:

Total Sales:=SUM(Sales[Sales])

Total YTD Sales:=TOTALYTD([Total Sales], 'Date'[Date])

The measure Total YTD Sales gives the running total of sales from the beginning of the current year. I know it doesn’t follow Marco and Alberto’s best-practice pattern but I wanted to keep things simple on the DAX front…
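For reference, TOTALYTD() is just shorthand for a CALCULATE() over DATESYTD(), so the second measure could equally be written like this – worth knowing, because the same DATESYTD() filter reappears in the Detail Rows expression later in this post:

Total YTD Sales:=CALCULATE([Total Sales], DATESYTD('Date'[Date]))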

Browsing the model in an Excel PivotTable gives the following result:

[Image: PivotTable showing Total Sales and Total YTD Sales by date]

At this point if you double-click on cell C6 in the PivotTable you get the following, not very useful result, on a new worksheet:

[Image: the default drillthrough output on a new worksheet]

What drillthrough does in SSAS Multidimensional, and what the new Detail Rows Expression property in SSAS Tabular v.next does, is allow an end user to see the detail-level data (usually the rows in the fact table) that was aggregated to give the value the user clicked on in the original PivotTable.

For the Total Sales measure, this property can be set with a DAX expression that returns a table something like this:

SELECTCOLUMNS(
	'Sales', 
	"Date", 'Sales'[Date], 
	"Sales Value", [Total Sales]
)

[For more details on the SelectColumns() function, see here]
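Incidentally, if you want to test a Detail Rows expression without going through Excel, you should also be able to invoke it directly in a DAX query with the new DetailRows() function. Here's a sketch – assuming the function is available in the build you're using – that uses CALCULATETABLE() to simulate the filter context of a single cell:

EVALUATE
CALCULATETABLE(
	DETAILROWS([Total Sales]),
	'Date'[Date] = DATE(2017, 1, 4)
)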

[Image: the Detail Rows Expression property for the Total Sales measure]

Now when you double-click on cell C6 in the PivotTable you get the result of the table expression above, filtered by the context of the cell you’ve clicked on – in this case, the date 4/1/2017. What appears in the new worksheet is the row from the Sales table for 4/1/2017:

[Image: drillthrough output showing the Sales table row for 4/1/2017]

This is already better than SSAS Multidimensional drillthrough because as a developer you have control over the column headers displayed in this table (in Multidimensional drillthrough the column names come out in a ridiculously user-unfriendly format) and the order that they are displayed in (which is equally painful to control in Multidimensional).

Now, consider cell D6 in the PivotTable, the cell that shows the year-to-date sales amount for 4/1/2017. If a user double-clicked on this cell they would expect to see all of the rows from the Sales table from 1/1/2017 to 4/1/2017 – the rows whose sales have been aggregated to give the YTD total.

This can be achieved using the following expression in the Detail Rows Expression for the Total YTD Sales measure:

CALCULATETABLE(
	SELECTCOLUMNS(
		'Sales', 
		"Date", 'Sales'[Date], 
		"Sales Value", [Total Sales]
	), 
	DATESYTD('Date'[Date])
)

[Image: the Detail Rows Expression property for the Total YTD Sales measure]

Now, double-clicking on cell D6 in the PivotTable gives the following table:

[Image: drillthrough output showing the Sales rows from 1/1/2017 to 4/1/2017]

It may not look all that impressive, but there are few words that can describe how happy this makes me feel. This is exactly what is not possible with drillthrough in SSAS Multidimensional, and why drillthrough in Multidimensional has always been so frustrating to use. It’s great to see the feature properly implemented in Tabular.

The Show Hidden Cubes SSAS Connection String Property

If you need to write queries in SQL Server Management Studio against an SSAS cube that has its Visible property set to false, you have a problem because when you connect to your database you can’t see the cube or its metadata! Luckily you can override this and make the cube visible by setting the following connection string property when you connect:

Show Hidden Cubes=true

[Image: the Show Hidden Cubes property set in the connection dialog]

Connection string properties can be set in the Additional Connection Parameters tab of the connection dialog that appears when you open a new MDX query window.
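If you were connecting from your own code rather than from SSMS, the property just gets appended to the rest of the connection string – for example, like this (the server and database names below are placeholders):

Provider=MSOLAP;Data Source=localhost;Initial Catalog=MyHiddenCubeDatabase;Show Hidden Cubes=true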

Unfortunately this doesn’t make any hidden objects inside the cube, such as measures or dimensions, visible again – it just makes the cube itself visible. However, if you’re working on the Calculations tab of the Cube Editor in SSDT it is possible to make all hidden objects visible, as I show here.

First Thoughts On The Integration Of Power Query/M Into Analysis Services Tabular

Last Friday’s big news was the release of the first CTP for Analysis Services v.next. Among several major new pieces of functionality (Ragged hierarchies! Drillthrough that works properly, even for calculations! Table-level security!) probably the biggest is the integration of Power Query/M into Analysis Services. As you can probably guess, I’m incredibly pleased that my two favourite technologies have got together. The technical details are given in this blog post, which I suggest you read if you haven’t done so already, but what I think is missing is an explanation of why this is so important and what kind of opportunities it opens up – hence this post. Of course this is just my take on the subject and not necessarily what Microsoft is actually thinking; it’s also very early days, so my opinions may change as the functionality develops and I have more chance to think about it. If you have any ideas on this subject I would be interested to hear them so please leave a comment!

Why this had to happen: Power BI

There is an obvious reason why Microsoft decided to integrate Power Query/M into SSAS, and that is because it needs to support the conversion of Power BI models into Analysis Services Tabular models. There are two scenarios where this will be necessary.

The first is the ability to convert a Power BI model into an Azure Analysis Services Tabular model (listed as ‘planned’ here), something that will be a key selling point for Azure Analysis Services when it is released. The engine behind Power BI is essentially the same as the one used in Analysis Services, so migrating the data model should be straightforward, but since Power BI uses Power Query/M to load data, a migrated Azure Analysis Services model will also have to use Power Query/M.

The second scenario is similar to the first. We now know that on-premises Power BI will be delivered through Reporting Services v.next, and it’s reasonable to assume Reporting Services will need a database engine to store the data for published Power BI reports. That engine will have to be an Analysis Services instance of some kind (either standalone or running in-process inside Reporting Services) and again for that to work Analysis Services will have to support the same data access mechanisms as Power BI.

Better support for a larger number of data sources

I’ve just argued why Microsoft was obliged to include this functionality in SSAS v.next but in fact there are many positive reasons for doing this too. The most obvious one is to do with support for more data sources. At the moment SSAS Tabular supports a pretty good range of data sources, but the world of BI is getting more and more diverse and in order to stay relevant SSAS needs to support far more than it does today. By using Power Query/M as its data access mechanism, SSAS v.next will immediately support a much larger number of data sources and this number is going to keep on growing: any investment that Microsoft or third parties make for Power BI in this area will also benefit SSAS. Also, because Power Query/M can query and fold to more than just relational databases, I suspect that in the future this will allow for DirectQuery connections to many of these non-relational data sources too.

Different data sources for partitions in the same table

Another benefit of this change is that we’ll have a lot more flexibility with partitioning tables in an SSAS Tabular model. As the blog post says:

As long as a partition’s M query adheres to the column mappings of the table, you are free to perform any transformations and pull in data from any data source defined in the model.

In SSAS 2016 the partitions in a table all have to get data from the same data source, whereas in v.next we’ll be able to get data from different data sources in different partitions, and this opens up some interesting new possibilities. For example, I can imagine a simple budgeting application where the partitions in a table get data from different Excel workbooks stored in OneDrive for Business, and where each partition gets processed automatically when changes are saved to one of these workbooks.
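As a rough sketch of what this might look like (the file paths and sheet name here are invented, and I’m using local file paths rather than OneDrive for Business for simplicity), two partitions of the same table could use M queries like the following, each pointing at a different workbook but returning the same columns:

// Partition 1: gets data from the first workbook
let
    Source = Excel.Workbook(File.Contents("C:\Budgets\Budget2017Q1.xlsx"), true),
    Budget = Source{[Item = "Budget", Kind = "Sheet"]}[Data]
in
    Budget

// Partition 2: gets the same columns from a second workbook
let
    Source = Excel.Workbook(File.Contents("C:\Budgets\Budget2017Q2.xlsx"), true),
    Budget = Source{[Item = "Budget", Kind = "Sheet"]}[Data]
in
    Budget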

Does this replace SSIS and my data warehouse? 

The short answer is no. Power Query/M is not a full-featured ETL tool and I don’t think it ever will be; it certainly does not have the kind of functionality needed to perform enterprise-level ETL right now. My view is that Microsoft have built Power Query/M into SSAS for the reasons above and not to encourage enterprise SSAS users to do their own quick-and-dirty ETL when loading data (although there is a risk that that will happen anyway). That said, I think the dividing line between corporate and self-service BI will become increasingly blurred over the next few years as the Microsoft BI stack develops, and we’ll see Analysis Services being used in self-service scenarios as well as the more traditional corporate ones.

Centralised data source objects

One last thing to point out is that the way SSAS v.next makes a distinction between data sources and other queries is very interesting. In Power BI and Power Query it’s easy to end up with data source connection information duplicated across multiple queries unless you know what you’re doing, and this can cause no end of problems later on in a project. As far as I can see, in SSAS v.next a “data source object” is an M query that only contains the connection to external data, while all other queries have to reference a data source to be able to access external data. This means, as the blog post says:

Referring to data source objects helps to centralize data source settings for multiple queries and simplifies deployments and maintenance if data source definitions must be updated later on. When updating a data source definition, all M queries that refer to it automatically use the new settings.
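As a rough illustration of the pattern (all of the names here are hypothetical, and the exact shape of a data source object in v.next may differ), one M query holds nothing but the connection:

// Data source object: contains only the connection
// (server and database names are placeholders)
let
    Source = Sql.Database("myserver", "AdventureWorksDW")
in
    Source

…and every other query references it instead of embedding its own connection details:

// A table query that references the data source object above,
// assumed here to be called DW
let
    Source = DW,
    FactInternetSales = Source{[Schema = "dbo", Item = "FactInternetSales"]}[Data]
in
    FactInternetSales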

I wonder whether this concept is coming to Power BI and Power Query at some point? I hope so – it makes a lot of sense.

SSAS Documentation With Squarehouse Commentary

A few months ago I found out about a new tool for managing and viewing Analysis Services Multidimensional documentation called Squarehouse Commentary. I’ve now had the chance to play around with it and talk to the company that makes it, and I have to say I’m impressed: in my opinion it’s by far the best tool of its type that I’ve seen. Rather than generating a large pile of documentation that no-one will read, and which will get left forgotten on a fileshare or a SharePoint site, it allows you to sync the descriptions created in its own web application with the description properties in your SSAS cubes and dimensions. It also automatically generates actions so that end users can easily view rich web-based documentation for hierarchies and measures directly from whatever client tool they are using, in just a few clicks.

This video does a good job of showing off the main features:

Overall, definitely something to check out. It doesn’t support Tabular models yet, but I’m told that this is in the planning stages.

Nested Variables In DAX

Last week, at the SQL Server Days conference in Belgium, Kasper mentioned in his presentation that it was possible to define variables inside variables in DAX. So, for example, you could define a measure like so:

MyMeasure =
VAR Outer1 =
    VAR Inner1 = 1
    VAR Inner2 = 2
    RETURN Inner1 + Inner2
VAR Outer2 = 3
RETURN Outer1 + Outer2

This measure returns 6 as you might expect:

[Image: PivotTable showing MyMeasure returning 6]

There aren’t any performance benefits to doing this, although of course it helps with code readability and organisation (thanks to Marius for confirming this).

With my newly rekindled love of DAX I thought this was quite interesting. I’m not really sure why though, given that it’s not particularly useful; I think Matt might be right:

[Image: Matt’s tweet]

Defining Variables In DAX Queries

Variables are the best thing to happen to DAX since, well, forever – they are so cool I’m almost ready to like DAX as much as I like MDX. There are already several good articles and blog posts out there describing how to use them (see here and here), but I was looking at a Profiler trace the other day and saw something I hadn’t yet realised about them: you can declare and use variables in the DEFINE clause of a DAX query. Since my series of posts on DAX queries still gets a fair amount of traffic, I thought it would be worth writing a brief post showing how this works.

Say you have the following table (called Sales) in your model:

[Image: the Sales table]

You can declare DAX variables in the DEFINE clause of a query like so:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
EVALUATE
CALCULATETABLE ( Sales, FilteredMonths )

This query returns the following result:

[Image: query results showing only the January rows]

The benefits of using variables in this way are the same as you get when using variables in measures and calculated columns: improved readability, less repetition and potential performance improvements.

I also wondered whether I would be able to refer to these variables inside measures declared in the DEFINE clause, but unfortunately you can’t. The following query:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
    MEASURE Sales[FirstMeasure] =
        COUNTROWS ( FilteredMonths )
EVALUATE
ROW ( "First Measure", [FirstMeasure] )

…returns the error

“Failed to resolve name ‘FilteredMonths’. It is not a valid table, variable or function name”.
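One workaround – just a sketch, on the same hypothetical Sales table – is to declare the variables again inside the measure expression itself, where they behave as normal:

DEFINE
    MEASURE Sales[FirstMeasure] =
        VAR MyMonth = "January"
        VAR FilteredMonths =
            FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
        RETURN
            COUNTROWS ( FilteredMonths )
EVALUATE
ROW ( "First Measure", [FirstMeasure] )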

However if you define your calculations inside the query itself, for example using the Summarize() or AddColumns() functions, or like so:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
EVALUATE
ROW (
    "First Calc", COUNTROWS (FilteredMonths),
    "Second Calc", CALCULATE (SUM(Sales[Sales]), FilteredMonths)
)

…the query works:

[Image: query results showing First Calc and Second Calc]
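To illustrate the AddColumns() option mentioned above, here’s a sketch of the same idea where the variable provides the base table for the calculation, with CALCULATE() performing context transition for each row:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
EVALUATE
ADDCOLUMNS (
    FilteredMonths,
    "Month Sales", CALCULATE ( SUM ( Sales[Sales] ) )
)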

In a lot of cases, multiple calculations contain some of the same logic and being able to use variables to share tables and values between calculations opens up some really interesting opportunities for performance optimisations.

SSAS Multidimensional Caching-Related Performance Problems With Power BI Tables

Last week I was doing some performance tuning for a customer using Power BI on top of a large SSAS Multidimensional cube via a Live connection. Some of their reports were performing particularly badly and I uncovered a problem with the DAX generated by Power BI for tables with a large number of rows, fields from two or more large hierarchies and totals turned on.

The problem is very easy to reproduce; I’m going to do it using a simplified version of the Adventure Works cube that contains only the Date and Product dimensions. Take a look at the following table from a Power BI report:

[Image: Power BI table with Date and Product on rows, two measures and a totals row]

It has the Date attribute from the Date dimension, and the Product attribute from the Product dimension, on rows and two measures on columns. The table has a large number of rows in it (both Date and Product are fairly large hierarchies) and if you look at the DAX generated by Power BI you can see that it only requests the first 501 rows. That’s fine – the real problem is that Power BI also generates a second DAX query to get the two values displayed in the Total line at the bottom of the table. The DAX looks something like this, and is easily identifiable because it uses the Row() function:

EVALUATE
  CALCULATETABLE(
    ROW(
      "Sales_Amount", 'Internet Sales Facts'[Sales Amount],
      "Tax_Amt", 'Internet Sales Facts'[Tax Amt]
    ),
    KEEPFILTERS(
      GENERATE(
        KEEPFILTERS(VALUES('Product'[Product.Key0])),
        CALCULATETABLE(
          FILTER(
            KEEPFILTERS(VALUES('Order Date'[Date.Key0])),
            OR(
              NOT(ISBLANK('Internet Sales Facts'[Sales Amount])),
              NOT(ISBLANK('Internet Sales Facts'[Tax Amt]))
            )
          )
        )
      )
    )
  )

This query has something in it – I don’t know what – that means that it cannot make use of the Analysis Services Storage Engine cache. Every time you run it SSAS will go to disk, read the data that it needs and then aggregate it, which means you’ll get cold-cache performance all the time. On a big cube this can be a big problem. This is very similar to problems I’ve seen with MDX queries on Multidimensional and which I blogged about here; it’s the first time I’ve seen this happen with a DAX query though. I suspect a lot of people using Power BI on SSAS Multidimensional will have this problem without realising it.

This problem does not occur for all tables – as far as I can see it only happens with tables that have a large number of rows and two or more hierarchies in. The easy way to check whether you have this problem is to refresh your report, run a Profiler trace that includes the Progress Report Begin/End and Query Subcube Verbose events (and any others you find useful) and then refresh the report again by pressing the Refresh button in Power BI Desktop without changing it at all. In your trace, if you see any of the Progress Report events appear when that second refresh happens, as well as Query Subcube Verbose events with an Event Subclass of Non-cache data, then you know that the Storage Engine cache is not being used.

[Image: Profiler trace showing Progress Report and Query Subcube Verbose events]

Also look at the Duration column in the trace for these events which shows the time in milliseconds that they took to execute. This will tell you how much of an impact this problem is having on your report refresh times.
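When you’re timing these events it also helps to start from a known cold cache, and the standard ClearCache XMLA command – run from an XMLA query window in SQL Server Management Studio, with the database ID below as a placeholder – will clear the cache for you:

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
	<Object>
		<DatabaseID>Adventure Works DW</DatabaseID>
	</Object>
</ClearCache>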

The easy way to stop this happening is to turn off the totals row in the table:

[Image: turning off the totals row in the table formatting options]

Displaying the totals in a separate table also seems to avoid the problem, although of course it doesn’t look as good. Putting only one hierarchy in the table, apart from your measures, also seems to solve the problem. You could also try all the usual methods to improve SSAS performance, such as building aggregations on the cube.

The dev team is aware of this problem but it’s unlikely to be fixed in the short term.