Power BI Desktop As A Client Tool For SSAS Tabular

There has been another flurry of Power BI announcements in the last few days in preparation for RTM on July 24th; you can read about them here if you haven’t already. There’s no point in me repeating them all, but in amongst the major features announced there was one thing I thought was worth highlighting and which could easily get overlooked: by RTM, the Power BI Desktop app will be able to connect directly to SSAS Tabular – that’s to say, you will be able to use it as a client tool for SSAS Tabular in the same way you can use Excel and any number of third-party products.

The Power BI Desktop app was previously known as the Power BI Designer – the name change was a wise move, because it is in fact a full-featured desktop BI tool in itself, and not just a ‘designer’ for the cloud-based Power BI service. It is a free download and you can use it without any kind of Power BI subscription. Therefore, even if you are a traditional corporate BI shop that uses SSAS Tabular and isn’t interested in any kind of self-service BI at all, you could use it just as a client for SSAS and forget about its other capabilities.

Why would you want to do this though? More specifically, why use Power BI Desktop rather than Excel, which is of course the default client tool for SSAS? I’m a big fan of using Excel in combination with SSAS (pretty much everything Rob Collie says here about Excel and Power Pivot also applies to Excel and SSAS – for the vast majority of users, for real work, Excel will always be the tool of choice for anything data related), but its data visualisation capabilities fall well short of the competition. While you can do some impressive things in Excel, it generally requires a lot of effort on the part of the user to build a dashboard or report that looks good. On the other hand, with Power BI Desktop it’s much easier to create something visually arresting quickly, and with the new open-source data visualisation strategy it seems like we’ll be able to use lots of really cool charts and visualisations in the future. Therefore:

  • Showing off the capabilities of Power BI Desktop will make selling an SSAS Tabular-based solution much easier, because those visualisations will make a much better first impression on users, even if they do end up using Excel for most of their work.
  • Less capable users, or those without existing Excel skills, will appreciate the simplicity of Power BI Desktop compared to Excel as a client tool.
  • Some users will need those advanced data visualisation capabilities if they are building reports and dashboards for other people – especially if those people expect to see something flashy and beautiful rather than a typically unexciting, practical Excel report.
  • If your users are stuck on Excel 2007 (or an earlier version) and aren’t likely to upgrade soon, giving them the Power BI Desktop app instead will give them access to a modern BI tool. Excel 2007 is an OK client for SSAS but is missing some features, notably slicers, that Excel 2010 and 2013 have and that are also present in Power BI Desktop.
  • Similarly, if your users are expecting to do a mixture of corporate BI using SSAS Tabular as a data source and self-service BI, but face the usual problems with Excel versions, editions and bitness that prevent them from using the Power add-ins in Excel, then standardising on Power BI Desktop instead could make sense.
  • If you do have a Power BI subscription and can work with the requirements for setting up a direct connection from PowerBI.com to an on-prem SSAS Tabular instance, then publishing from Power BI Desktop to PowerBI.com will be very easy. If you need to see reports and dashboards in a browser or on a mobile device, this could be a more attractive option than going down the Excel->SharePoint/Excel Services or Excel->OneDrive->PowerBI.com route.

In short, I don’t see Power BI Desktop as a replacement for Excel as an SSAS Tabular client tool but as a useful companion to it.

The last question that needs to be asked here is: what does this mean for third-party SSAS client tool vendors like Pyramid Analytics and XLCubed? I don’t think these companies have too much to worry about, to be honest. These vendors have been competing with a less feature-rich, but effectively free, Microsoft option for a long time now. While Power BI Desktop erodes their advantage to a certain extent, they have a lot of other features besides visualisations that Microsoft will probably never provide and which justify their price. Besides that, the fact that Power BI doesn’t support direct connections to SSAS Multidimensional (yet…? ever…?) excludes at least 80% of the SSAS installations out there.

Checking Columns Are Present In Power Query

Something I was meaning to mention in my previous post (but forgot about…) is that in a lot of cases you don’t really care whether your output contains all the required columns – it’s enough just to check that your input contains them. Luckily, M has a function called Table.HasColumns() to help you do this. For example, using the csv source file from my previous post, which should have three columns called Product, Month and Sales, the following query will return true if the source file has these columns and false if it doesn’t:

let
    Source = Csv.Document(File.Contents("C:\MissingColumnDemo.csv"),[Delimiter=",",Encoding=1252]),
    PromotedHeaders = Table.PromoteHeaders(Source),
    CheckColumns = Table.HasColumns(PromotedHeaders, {"Product", "Month", "Sales"})
in
    CheckColumns
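
If you’d rather have the query fail loudly instead of just returning false, you could use this check to raise a custom error with M’s error keyword. Here’s a rough sketch of that idea (my variation, not part of the original post):

let
    Source = Csv.Document(File.Contents("C:\MissingColumnDemo.csv"),[Delimiter=",",Encoding=1252]),
    PromotedHeaders = Table.PromoteHeaders(Source),
    //Return the table only if all the required columns are present
    Output = if Table.HasColumns(PromotedHeaders, {"Product", "Month", "Sales"})
             then PromotedHeaders
             else error "The source file is missing one or more required columns"
in
    Output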

Advanced SSAS Multidimensional Security Tips & Tricks Webinar This Thursday

In association with the nice people at SQLRelay I’ll be presenting an hour-long webinar on advanced SSAS Multidimensional security tips and tricks this Thursday July 9th 2015 at 1pm UK time (that’s 8am EDT for you Americans). It’s free to attend and open to anyone, anywhere in the world. You can join the meeting by going to

http://t.co/apht1IhJlg

In the webinar I’ll be covering topics such as:

  • The difference between Allowed Sets and Denied Sets in dimension security
  • Handling security-related errors in your MDX calculations
  • The different ways of implementing dynamic security
  • Why you should avoid cell security, and how (in some cases) you can replace it with dimension security

…and lots more.

If you’re in the UK, you should definitely check out SQLRelay, an annual series of one-day SQL Server events that happens at a number of different places around the country each autumn. For more details, see http://www.sqlrelay.co.uk/2015.html

I’m presenting this webinar in my capacity as a sponsor of SQLRelay, so expect me to spend a small amount of time promoting Technitrain’s autumn course schedule. There are some cool courses on SSIS, MDX, SQL Server high availability and data science/machine learning coming up, you know…

UPDATE: you can download the slides and demos from the webinar at http://1drv.ms/1LYk1k8 and watch the recording at https://www.youtube.com/watch?v=cB9F6IVo7MA

For whoever was asking about using a measure group to store permissions for dynamic security, this blog post has all the details: http://bifuture.blogspot.co.uk/2011/09/ssas-setup-dynamic-security-in-analysis.html

Ensuring Columns Are Always Present In A Table Returned By Power Query

Disappearing or renamed columns in your data source can cause all kinds of problems when you’re importing data using Power Query: errors when you try to refresh the query, broken calculations in Power Pivot, PivotTables that reformat themselves and then need to be manually recreated. As a result, it can be a very good idea to build some logic into your Power Query queries that ensures that a table always contains the columns you’re expecting.

Consider the following csv file:

[image: the sample csv file, with Product, Month and Sales columns]

In Power Query, if you connect to it and create a query you’ll end up with something like this:

let
    Source = Csv.Document(File.Contents("C:\Demo.csv"),null,",",null,1252),
    #"First Row as Header" = Table.PromoteHeaders(Source),
    #"Changed Type" = Table.TransformColumnTypes(#"First Row as Header",{{"Sales", Int64.Type}})
in
    #"Changed Type"

Let’s assume that this query is called GetSourceData. Let’s also assume that your output from Power Query should always be a table that has the three columns Product, Month and Sales, and that Product and Month should be text columns and Sales should be numeric. The basic steps to take to ensure that this always happens, even if the columns in the csv file change, are as follows:

  1. Create a query that connects to your data source, for example like GetSourceData above
  2. Create a query that will always return a table with the columns you want, but which contains no rows
  3. Append the second table onto the end of the first table. This will result in a table that contains all of the columns from both tables.
  4. Remove any unwanted columns.

There are a number of ways to create the empty table needed in step 2. You could use the #table() function if you’re confident writing M code, and the following single-line query (no let expression needed) does the job:

#table(
 type table [Product=text, Month=text, Sales=number],
 {})

[image: the empty table returned by this query]

Alternatively, if you wanted something that an end user could configure themselves, you could start with a table in Excel like this:

[image: an Excel table, named Columns, listing the expected column names and data types]

then transpose it, use the first row of the resulting table as the header row, then set the data types on each column to get the same output:

let
    Source = Excel.CurrentWorkbook(){[Name="Columns"]}[Content],
    #"Transposed Table" = Table.Transpose(Source),
    #"First Row as Header" = Table.PromoteHeaders(#"Transposed Table"),
    #"Changed Type" = Table.TransformColumnTypes(#"First Row as Header",
	{{"Product", type text}, {"Month", type text}, {"Sales", Int64.Type}})
in
    #"Changed Type"

Assuming that this query is called ExpectedColumns, it’s then a trivial task to create a third query that appends the ExpectedColumns query onto the end of the GetSourceData query. If GetSourceData includes all the columns it should, then this append will have no effect at all; if some of the columns have changed names or disappeared, you’ll see all of the columns from both GetSourceData and ExpectedColumns in the output of the append. For example, if the Month column in GetSourceData is renamed Months then the output of the append will look like this:

[image: the output of the append, containing columns from both queries]

Finally, in this third query you need to select all the columns you want (ie all those in the ExpectedColumns query) and then right-click/Remove Other Columns to remove all the columns you don’t want. In the previous example that gives you:

[image: the final table, with only the Product, Month and Sales columns]

The point here is that even though the Month column only contains nulls, and the actual month names have been lost, the fact that the columns are all correct means that you won’t get any errors downstream and your PivotTables won’t be reformatted, and so on. Once you’ve fixed the problem in the source data and refreshed your queries, everything will go back to normal.

Here’s the code for this third query:

let
    Source = GetSourceData,
    Append = Table.Combine({Source,ExpectedColumns}),
    #"Removed Other Columns" = Table.SelectColumns(Append,{"Product", "Month", "Sales"})
in
    #"Removed Other Columns"

For bonus points, here’s another query that compares the columns in GetSourceData and ExpectedColumns and lists any columns that have been added to or are missing from GetSourceData:

let
    //Connect to Excel table containing expected column names
    ExcelSource = Excel.CurrentWorkbook(){[Name="Columns"]}[Content],
    //Get list of expected columns
    ExpectedColumns = Table.Column(ExcelSource, "ColumnName"),
    //Get a list of column names in csv
    CSVColumns = Table.ColumnNames(GetSourceData),
    //Find missing columns
    MissingColumns = List.Difference(ExpectedColumns, CSVColumns),
    //Find added columns
    AddedColumns = List.Difference(CSVColumns, ExpectedColumns),
    //Report what has changed
    OutputMissing = if List.Count(MissingColumns)=0 then
                     "No columns missing" else
                     "Missing columns: " & Text.Combine(MissingColumns, ","),
    OutputAdded = if List.Count(AddedColumns)=0 then
                     "No columns added" else
                     "Added columns: " & Text.Combine(AddedColumns, ","),
    Output = OutputMissing & "   " & OutputAdded
in
    Output

[image: the query output, reporting which columns are missing and which have been added]

You can download the sample workbook for this post here.

Automatically Generating Date Dimension Tables In Excel 2016 Power Pivot

As you probably know, whenever you are doing any kind of date or time-based calculations in DAX you should always have a separate Date table in your Power Pivot model. There are a number of ways of building these tables (see, for example, my Power Query query here) but they are all a bit of a hassle – which is why it’s so cool that, in Excel 2016, you can get one built automatically inside the Power Pivot window.
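
To give you an idea of the hassle involved in the manual approach, here’s a minimal M sketch (not the query from the link above) that generates a continuous range of dates between two hard-coded values:

let
    StartDate = #date(2015, 1, 1),
    EndDate = #date(2016, 12, 31),
    //Generate a list containing every date in the range
    DateList = List.Dates(StartDate, Duration.Days(EndDate - StartDate) + 1, #duration(1,0,0,0)),
    //Convert the list to a single-column table and set the data type
    DateTable = Table.FromList(DateList, Splitter.SplitByNothing(), {"Date"}),
    Typed = Table.TransformColumnTypes(DateTable, {{"Date", type date}})
in
    Typed

…and that’s before you add any year, month or day columns.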

Consider the following table of sales data on an Excel worksheet:

[image: the source table of sales data]

With this table loaded into the Data Model (and the Order Date column recognised as containing data of the Date data type), when you go to the Design tab in the Power Pivot window you’ll see the new Date Table button enabled:

[image: the Date Table button on the Design tab of the Power Pivot window]

Clicking on the New button will add a new date table to the Data Model, called Calendar:

[image: the automatically generated Calendar table]

This table is automatically marked as the Date Table in your model.

The table contains a continuous range of dates starting from the beginning of the year of the earliest date found in any column in any table in your Data Model, up to the end of the year containing the latest date found in any column in any table in your Data Model. Obviously, this means that your table could contain a very large date range if, for example, you have a Customer table containing a Date Of Birth column. Luckily, you also have the option of manually configuring the range of dates used by clicking the Update Range button:

[image: configuring the date range via the Update Range button]

One other thing to point out is that the resulting table is a table like any other, so you can add, delete or rename columns as you wish. You should also be able to set the table back to its default state by using the Set Default menu option, but I couldn’t make that work (possibly it hasn’t been implemented yet – this post was written using the Excel 2016 Preview).

If you do make changes like adding calculated columns, such as the Month Year calculated column shown below:

[image: a Month Year calculated column added to the Calendar table]

you can then click the Save Configuration button to save the current state of the table as your default. This means that the next time you create a new Date table in the same workbook, the table will include any customisations. However, these changes don’t seem to be applied to Date tables created in new workbooks – maybe this will also change before RTM?

All in all, this is a very handy feature that will save Power Pivot modellers a lot of time. I wonder if it uses the new Calendar() or CalendarAuto() DAX functions under the covers?
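
I can’t confirm what Excel is doing internally, but as a rough illustration of what these functions do: both return a single-column table of continuous dates, with CalendarAuto() working out the date range from the dates in your model. In a tool that supports DAX calculated tables you could build a similar table yourself with something like this (a sketch, not what Excel actually generates):

Calendar =
ADDCOLUMNS (
    CALENDARAUTO (),  -- one row per date, covering every date found in the model
    "Year", YEAR ( [Date] ),
    "Month Number", MONTH ( [Date] ),
    "Month", FORMAT ( [Date], "MMM" )
)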

Using DateDiff() To Calculate Time Intervals In DAX

One of the most useful new additions to DAX in Excel 2016 and the Power BI Designer is the DateDiff() function. It does exactly what you would expect: it calculates the amount of time between two dates, and expresses that value as seconds, minutes, hours, days, weeks, months, quarters or years.

Here’s a very simple table of dates:

[image: a simple table, MyTable, with Start Date and End Date columns]

With this table loaded into the Power BI Designer, you can add new calculated columns to the table by clicking the New Column button on the ribbon. Here are two calculated column definitions that give the number of days and the number of years between the Start Date and the End Date on each row:

DayDurationColumn = 
DATEDIFF(MyTable[Start Date], MyTable[End Date], DAY)

YearDurationColumn = 
DATEDIFF(MyTable[Start Date], MyTable[End Date], YEAR) 

The output is pretty much what you’d expect:

[image: the table with the two new calculated columns]

It is of course also possible to create measures that use the DateDiff() function, for example:

YearDurationMeasure = 
DATEDIFF(
FIRSTDATE(MyTable[Start Date]), 
LASTDATE(MyTable[End Date]), 
YEAR)

[image: the YearDurationMeasure output]

All very straightforward, then, and much easier than having to calculate these values yourself.
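
To round this off with an example of my own (not from the post above): because DateDiff() works happily inside an iterator, you can also aggregate row-level durations in a measure, for instance to find the average duration in days over all the rows in the current filter context:

AverageDaysDuration =
AVERAGEX (
    MyTable,
    DATEDIFF ( MyTable[Start Date], MyTable[End Date], DAY )
)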

The Use And Abuse Of The MDX Freeze Statement

The other day, while helping a customer with some particularly nasty MDX scoped assignments, I realised that there weren’t many good resources on the internet that explained how to use the MDX Freeze statement. It’s something I see used quite often, but usually because some MDX calculations aren’t giving the correct results and a developer has found that putting a Freeze statement in has fixed the problem – even if they don’t understand why it has fixed the problem. So, in this post I’ll explain what Freeze does, when you might want to use it, and when there are other, better alternatives.

First of all, the basics. Imagine you have a super-simple cube and that, apart from the Calculate statement, the only MDX you have on the Calculations tab in the cube editor is the following:

CREATE MEMBER CURRENTCUBE.MEASURES.M1 AS 1;

CREATE MEMBER CURRENTCUBE.MEASURES.M2 AS NULL;

SCOPE(MEASURES.M2);
    THIS = MEASURES.M1;
END SCOPE;

If you query the cube in Excel, you’ll see the following:

image

No surprises here: we have created two calculated measures, M1 and M2, and then used a scoped assignment to set M2 to show the value of M1. It’s important to understand that the scope statement has not copied the value of M1 into M2; it acts more like a pointer, so that M2 will always display the same value as M1 even if M1 subsequently changes. This means that when we add a second scope statement to the code that alters the value of M1, as follows:

CREATE MEMBER CURRENTCUBE.MEASURES.M1 AS 1;

CREATE MEMBER CURRENTCUBE.MEASURES.M2 AS NULL;

SCOPE(MEASURES.M2);
    THIS = MEASURES.M1;
END SCOPE;

SCOPE(MEASURES.M1);
    THIS = 2;
END SCOPE;

you see the following in your PivotTable:

[image: a PivotTable where M1 and M2 both return 2]

This behaviour is the source of a lot of confusion! An assignment to one measure has indirectly changed the value of another measure, and of course in a real-world cube it can be very difficult to spot situations where this has happened – and, if you do spot one, to work out which other MDX has caused it.

Each statement in the MDX Script of a cube adds an extra layer of calculations to it, called a calculation pass; this is true for all the calculations in the examples above. As new calculations are added, and new passes are created, the previous passes still exist and are still accessible. In the second example above, in the outermost calculation pass, the measure M2 returns the value 2 but at the previous calculation pass (as seen in the first example) it returned the value 1. The Freeze statement allows you to freeze the values returned by a subcube of cells at a given calculation pass, so that no future calculations will change those values.

Therefore, by taking our code and adding a Freeze statement to the first scoped assignment, we can prevent the second scoped assignment from changing the value of M2:

CREATE MEMBER CURRENTCUBE.MEASURES.M1 AS 1;

CREATE MEMBER CURRENTCUBE.MEASURES.M2 AS NULL;

SCOPE(MEASURES.M2);
    THIS = MEASURES.M1;
    FREEZE(THIS);
END SCOPE;

SCOPE(MEASURES.M1);
    THIS = 2;
END SCOPE;

Here’s the output now:

[image: a PivotTable where M1 returns 2 but M2 still returns 1]

Another very common way that scoped assignments can affect the value of a cell is through the aggregation of the results of a calculation. This blog post (one of the most popular I’ve ever written) explains how this behaviour can be used to implement calculations like currency conversions and weighted averages. In other cases, though, this aggregation of a calculation is an unwanted and unexpected side effect of a scope statement, and calculated values that you did want displayed get replaced with weird, meaningless values. The Freeze statement can be used to stop this happening, but in actual fact it’s a much better idea to understand the cause of these problems and rewrite your calculations so that Freeze isn’t necessary.

Now, imagine that in your cube you have a regular (ie not calculated) measure called Sales Amount that has its AggregateFunction property set to Sum, and that you have a fairly standard Date dimension with a Year attribute hierarchy. A PivotTable with Sales Amount on columns and Year on rows looks like this in Excel:

[image: a PivotTable with Sales Amount on columns and Year on rows]

If you add the following assignment to the cube, to change the value of the All Member on Year, the value of the Grand Total in the PivotTable (which is the All Member, even if that name isn’t shown) will be changed:

SCOPE([Date].[Year].[All], [Measures].[Sales Amount]);
    THIS = 123;
END SCOPE;

[image: the PivotTable with the Grand Total changed to 123]

If, on the other hand, you remove that previous assignment and replace it with an assignment on the year 2001:

SCOPE([Date].[Year].&[2001], [Measures].[Sales Amount]);
    THIS = 456;
END SCOPE;

You’ll see that not only has the value of Sales Amount for the year 2001 changed, but the value of the All Member has changed too: the All Member represents the aggregated total of all the years, so if a year value changes, the All Member value must change to reflect this:

[image: the PivotTable with 2001 changed to 456 and the Grand Total re-aggregated]

What happens if we try to combine the two previous scope statements?

SCOPE([Date].[Year].[All], [Measures].[Sales Amount]);
    THIS = 123;
END SCOPE;

SCOPE([Date].[Year].&[2001], [Measures].[Sales Amount]);
    THIS = 456;
END SCOPE;

In this case, the output is exactly the same as with the previous example (although the measure formatting has also been lost):

[image: the same output as the previous example, with the formatting lost]

This is because even though the first Scope statement successfully changed the value of the All Member, the aggregation of values triggered by the second Scope overwrote this value. Although you can’t see this happening in Excel, where you only see the values returned at the final calculation pass of the cube, the MDX Script Debugger can be used to see the values returned for a query at all the different passes so you can work out what’s going on.

The Freeze statement can be used to stop the second Scope from overwriting the first, like so:

SCOPE([Date].[Year].[All], [Measures].[Sales Amount]);
    THIS = 123;
    FREEZE(THIS);
END SCOPE;

SCOPE([Date].[Year].&[2001], [Measures].[Sales Amount]);
    THIS = 456;
END SCOPE;

[image: the PivotTable with 2001 showing 456 and the Grand Total showing 123]

However, in my opinion it makes a lot more sense to change the order of the Scope statements so that the assignment to 2001 doesn’t overwrite the assignment to the All Member:

SCOPE([Date].[Year].&[2001], [Measures].[Sales Amount]);
    THIS = 456;
END SCOPE;

SCOPE([Date].[Year].[All], [Measures].[Sales Amount]);
    THIS = 123;
END SCOPE;

The end result is the same:

[image: the same result as with Freeze]

Why do I prefer this approach to the use of Freeze? Two reasons:

  1. It works with natural MDX behaviour rather than trying to fight against it. In this case it’s just one line of code less, but in the real world it could result in a much greater reduction. It’s true that you have to put a lot of thought into the ordering of your calculations, but I don’t think you can get away from that. Using Freeze to make your calculations work properly without understanding why it’s needed results in much more complex code, often with duplicated calculations because Freeze still doesn’t give the desired results, and is frankly a bit of a hack.
  2. There are, or at least were, performance implications with the use of Freeze. In Analysis Services 2005 I saw a few cases where the use of Freeze contributed to poor query performance, and where reordering scope statements so that it was no longer necessary made performance better. I’m not sure whether this is still the case with SSAS 2014 but it may well be.

I see Freeze abused most often in financial cubes, when scope statements are used to define calculations on a chart of accounts hierarchy. Sometimes I have even seen the same calculation code appear in several places in the same MDX Script, just to make sure that the calculations always return the right result – all because the calculations on the chart of accounts dimension are aggregating up and overwriting each other. In this case the simple rule you have to remember is to always scope the calculations on the lowest level of the hierarchy first, then scope the calculations on the second-lowest level, and so on working your way up to the top of the hierarchy. This way you can be sure that your scope will never aggregate up and overwrite the result of another calculation.
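
To make that rule concrete, here’s a hypothetical sketch (all the dimension, hierarchy and member names are made up) where assignments are scoped from the bottom of an accounts hierarchy upwards, so that no aggregation triggered by a later assignment can overwrite an earlier one:

//Scope the lowest level of the hierarchy first...
SCOPE([Account].[Accounts].[Level 04].MEMBERS, [Measures].[Amount]);
    THIS = [Measures].[Amount] * 1.1; //some leaf-level adjustment
END SCOPE;

//...then the next level up, and so on towards the top of the hierarchy
SCOPE([Account].[Accounts].[Level 03].MEMBERS, [Measures].[Amount]);
    THIS = [Measures].[Amount] * 0.9;
END SCOPE;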

Apart from that, I also see Freeze used when a cube contains a Date Tool dimension that uses regular members instead of calculated members, in the way described here. Now there are a lot of good reasons to use regular members on a Date Tool dimension (it will work with all versions of SSAS and Excel, for instance) but I have also seen a lot of cases where the fact that you are scoping calculations on regular members, which may then get aggregated up accidentally, has caused a lot of problems – not only resulting in incorrect values appearing, but also making query performance worse. For that reason, nowadays I prefer to use calculated members on my Date Tool dimension rather than regular members.
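
For anyone wondering what the calculated member version of that pattern looks like, here’s a minimal hypothetical example (the dimension, hierarchy and member names are invented): a Date Tool dimension with one real member, Current Period, plus calculated members that shift the date context, such as:

CREATE MEMBER CURRENTCUBE.[Date Tool].[Calculation].[Previous Period] AS
    ([Date Tool].[Calculation].[Current Period],
     [Date].[Calendar].CURRENTMEMBER.PREVMEMBER);

Because Previous Period is a calculated member, nothing ever aggregates up into it, which avoids exactly the problems described above.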