New SSAS, Power BI And SQL Server Training Courses For 2016

I’ve just added a number of new 2016 training course dates to the Technitrain site. If you’re looking for Power BI, Analysis Services, SQL Server or Data Science training in London then please check them out! Also, if you’d like to sign up for the Technitrain newsletter to stay up to date with our news, you can do so here.

Here are more details on the new courses:

SQL Server Performance Tuning and Internals Boot Camp, Bradley Ball, 15-19 February 2016, London
This 5-day hands-on course is designed to provide DBAs with the tools and knowledge that are required to keep their SQL Servers running efficiently and reliably.
http://technitrain.com/coursedetail.php?c=60&trackingcode=CWB

Introduction To Power BI, Chris Webb, 22-23 February 2016, London
A two-day introduction to Microsoft’s exciting new BI tool, suitable for BI professionals, analysts, report developers or anyone interested in using it to build reports or dashboards.
http://technitrain.com/coursedetail.php?c=65&trackingcode=CWB
 
Real World Cube Design And Performance Tuning With SSAS Multidimensional, Chris Webb, 11-13 April 2016, London
A course aimed at intermediate-to-experienced Analysis Services Multidimensional developers, looking at more advanced cube design topics and query performance tuning.
http://technitrain.com/coursedetail.php?c=66&trackingcode=CWB

Mastering DAX, Marco Russo, 20-22 June 2016, London
A three-day introduction to the DAX language used by Power BI, Analysis Services Tabular models and Power Pivot.
http://technitrain.com/coursedetail.php?c=63&trackingcode=CWB

Optimising DAX, Marco Russo, 23-24 June 2016, London
An advanced two-day course for experienced DAX developers who wish to learn how to optimise DAX calculations and queries for performance.
http://technitrain.com/coursedetail.php?c=64&trackingcode=CWB

Introduction to MDX, Chris Webb, 4-6 July 2016, London
A three-day course designed for those with little or no experience of MDX; it will teach you how to write MDX queries and calculations for Analysis Services.
http://technitrain.com/coursedetail.php?c=67&trackingcode=CWB
 
Practical Data Science with Cortana Analytics, Rafal Lukawiecki, 24-27 October 2016, London
This course is aimed at analysts, analytical power users, predictive developers, BI power users and developers, budding data scientists and consultants.
http://technitrain.com/coursedetail.php?c=68&trackingcode=CWB

Using Parameter Tables To Control Data Refresh In Power BI

The use of parameter tables is a well-known technique in Power Query. It involves using a table in the current Excel workbook to hold parameter values that are read by one Power Query query, and these parameter values are then passed to another Power Query query to control how it behaves. Ken Puls has a great post describing all this here, and similarly a lot of the demos in my video on building a reporting solution in Power Query involve reading values from a table that are then used to filter data in a SQL query.
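
As a reminder, the classic Excel version of the technique is only a few lines of M. A minimal sketch, assuming the current workbook contains an Excel table called ParameterTable with a single column called Value (both names are illustrative):

let
    //read the parameter table from the current Excel workbook
    Source = Excel.CurrentWorkbook(){[Name="ParameterTable"]}[Content],
    //take the parameter value from the first row of the Value column
    ParameterValue = Source{0}[Value]
in
    ParameterValue

A query like this can then be referenced by name from any other query in the workbook.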

The problem with Power BI Desktop and PowerBI.com is that without a current Excel workbook there’s nowhere to store parameter values (at least at the moment) so you can’t implement the technique in exactly the same way. However, with a bit of creativity, it is possible to do something very similar.

Consider the following M query that gets all the rows from the DimDate table in the Adventure Works DW SQL Server database where the day name is Tuesday.

let
    Source = Sql.Database("localhost", "adventure works dw"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data],
    #"Filtered Rows" = Table.SelectRows(
     dbo_DimDate, 
     each ([EnglishDayNameOfWeek] = "Tuesday"))
in
    #"Filtered Rows"

From this you can build a report in Power BI Desktop that looks something like this:

[screenshot]

The question is, in this case, how can you make this dynamic and allow your end users to enter a different day name to filter by, and how can you ensure that this dynamic filtering works even when the report has been deployed to PowerBI.com?

There may not be a current workbook to store parameters, but in PowerBI.com you can of course display Excel workbooks stored in OneDrive for Business as reports (full details on how to do this are here). Rather than display a full Excel report, you can instead display a workbook that holds a single Excel table containing a day name:

[screenshot]

This is going to be your parameter table. First minor irritation: while it seems like you can change the value in the table inside PowerBI.com, the value doesn’t actually get saved; you have to use the Edit option to open the workbook in Excel Online before any changes you make are saved.

[screenshot]

[screenshot]

The next problem is this: how can you read the day name parameter value from a table stored in an Excel workbook in OneDrive For Business? It’s actually possible using the Excel Services OData API, and I blogged about how to call this API in Power Pivot a few years ago here. With a workbook called FilterParameter.xlsx and an Excel table called FilterDay, here’s an example M query that reads the day name parameter value:

let
    Source = OData.Feed("https://enteryourdomainhere-my.sharepoint.com/_vti_bin/ExcelRest.aspx/personal/chris_enteryourdomainhere_onmicrosoft_com/Documents/FilterParameter.xlsx/OData/FilterDay"),
    #"0" = Source{[excelRowID=0]}[Enterparametervalue]
in
    #"0"

If this M query is called DayFilter, then the value it returns can be used in the original query to make the filter dynamic (you will also need to set your data privacy levels correctly or enable Fast Combine):

let
    Source = Sql.Database("chriszbook", "adventure works dw"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data],
    #"Filtered Rows" = Table.SelectRows(
     dbo_DimDate, 
     each ([EnglishDayNameOfWeek] = DayFilter))
in
    #"Filtered Rows"

[screenshot]

At this point you’ll have something that works in Power BI Desktop, but there’s still more to do to make this work after the report has been published to PowerBI.com. For a start, you’ll need to make sure you’re using the latest version of the Power BI Personal Gateway and that you have enabled Fast Combine in it by following the instructions in this blog post. You also need to make sure that in PowerBI.com, when you set up the credentials for the OData connection to the Excel table, you use the OAuth2 Authentication Method option:

[screenshot]

And with that, you’re done. You can edit the Excel workbook to change the day name that you want to filter by, and to refresh your report in PowerBI.com you can use the Refresh Now option on the dataset:

[screenshot]

While dashboards reflect the updated data automatically, if you’re looking at a report you may also need to use the Refresh button at the top of the report to see updated data:

[screenshot]

And there you have it: a parameter table in PowerBI.com. To be honest, I think this technique has slightly too many fiddly steps for me to be happy recommending it unconditionally, but it should be useful in some scenarios. Hopefully there will be an easier way of accomplishing the same thing in Power BI in the future…

Webinar Recording: Is OLAP Dead?

Yesterday I presented a webinar for the nice people at Pyramid Analytics, the first of two. I was paid for it of course (I have to earn a living somehow, you know), but the main reason I agreed to do it was because they wanted me to discuss interesting Microsoft BI questions rather than just demo their product – which is, as I’ve said here before, the most effective way of doing tech marketing in my opinion. In any case Pyramid Analytics is one of a small number of third-party tools I’m happy to recommend to my customers even without being paid.

The discussion was on the topic of whether OLAP is dead, and you can watch the recording here: https://www.brighttalk.com/webcast/13401/178899 [free sign-up required]

You won’t be surprised to find out that I don’t think OLAP is dead. By “OLAP” I mean the idea of a centralised model containing not just all your data but also things like how your tables should be joined, how measures aggregate up, advanced calculations and KPIs and so on. Using this definition both SSAS Multidimensional and SSAS Tabular are OLAP; I would say both pass the FASMI test for sure. The opposite of OLAP is of course the recently-fashionable self-service BI approach of tools like Power Pivot where every end user can create their own model, which is certainly more flexible but also less easily maintainable, leads to duplication of effort and data, and can be a problem for less technically-able end users. Both a single centralised OLAP model and a distributed self-service approach are useful in different circumstances and the latter certainly doesn’t replace the former. One of the reasons why I’m such a fan of Microsoft’s new direction with Power BI is that it recognises this fact.

The other big question that came up was what the relative strengths and weaknesses of SSAS Multidimensional and Tabular are. Before the webinar I ran a Twitter poll asking people whether they had used Multidimensional or Tabular on the last SSAS project they had worked on; from 151 responses, 48% said Tabular and 52% said Multidimensional. This even split doesn’t surprise me to be honest, based on my own experiences: both flavours of SSAS have different strengths and weaknesses and choosing between them can be very difficult. I’m sure new features like bi-directional cross-filtering and the improved development experience in SSAS Tabular 2016 will increase Tabular’s popularity in the future. Personally, I think if Microsoft were able to offer equivalent functionality to:

  • Calculated members on non-measures dimensions
  • MDX SCOPE statements
  • Parent/child hierarchies (without the performance penalties)
  • Writeback

…in SSAS Tabular then there would be very few good reasons to choose SSAS Multidimensional for new projects.

First Look At SSAS 2016 MDX On DirectQuery

Following on from my last post covering DirectQuery in Power BI, I thought it might be interesting to take a look at the way MDX queries are supported in SSAS Tabular 2016 CTP3 DirectQuery mode.

There were a lot of limitations when using DirectQuery in SSAS Tabular 2012/2014, but for me the showstopper was the fact that it only worked if you were running DAX queries against your model. Historically the only major client tool that generated DAX queries to get data was Power View, and Power View was/is too limited for serious use, so that alone meant that none of my customers were interested in using DirectQuery. Although we now have Power BI Desktop and PowerBI.com, which also generate DAX queries, the fact remains that the vast majority of business users will still prefer to use Excel PivotTables as their primary client tool – and Excel PivotTables generate MDX queries. So, support for MDX queries in DirectQuery mode in SSAS 2016 means that Excel users will now be able to query a Tabular model in DirectQuery mode. This, plus the performance improvements made to the SQL generated in DirectQuery mode, means that it’s now a feature worth considering in scenarios where you have too much data for SSAS Tabular’s native in-memory engine to handle or where you need to see real-time results.

At the time of writing the most recent release of SQL Server 2016 is CTP3. If you want to test out the BI features in SQL Server 2016 CTP3 in an Azure VM, I highly recommend Dan English’s blog post here showing how to set one up. To test DirectQuery mode you need to use the older 1103 compatibility mode for your project and not the latest 1200 compatibility mode. This is documented in the release notes:
https://msdn.microsoft.com/en-us/library/dn876712.aspx#bkmk_2016_ctp3_0

[screenshot]

Once you’ve created your project, you can enable DirectQuery mode in the same way as in previous versions by following the instructions here. The DirectQueryMode property on Model.bim needs to be set to On, and the QueryMode property on the project should be set to DirectQuery.

For testing purposes I downloaded the 2016 version of the Adventure Works DW database and restored it to SQL Server, then created a SSAS Tabular model containing only the DimDate table to keep things simple. I created one measure in the model with the following definition:
TestMeasure:=COUNTROWS('DimDate')

First of all, I ran the following MDX query:

SELECT
{[Measures].[TestMeasure]} 
ON 0,
[DimDate].[CalendarYear].[CalendarYear].MEMBERS 
ON 1
FROM
[Model]

[screenshot]

Using a Profiler trace (yes, I know I should be using XEvents, but Profiler is so much more convenient for SSAS) I could see the SQL generated by SSAS in the Direct Query Begin and Direct Query End events. For the MDX query above, three SQL queries were generated. The first looks like it gets the list of years displayed on the Rows axis:

SELECT 
TOP (1000001) [t0].[CalendarYear] AS [c15]
FROM 
(
  (SELECT [dbo].[DimDate].* FROM [dbo].[DimDate])
)
AS [t0]
GROUP BY [t0].[CalendarYear] 

The second SQL query gets the measure value requested:

SELECT 
TOP (1000001) [t0].[CalendarYear] AS [c15],
COUNT_BIG(*)
AS [a0]
FROM 
(
  (SELECT [dbo].[DimDate].* FROM [dbo].[DimDate])
)
AS [t0]
GROUP BY [t0].[CalendarYear] 

The third is simply a repeat of the first query.

However, there’s one important thing to say here: there are going to be significant changes and improvements to the SQL generated before RTM, so don’t read too much into the queries shown here.

There are several limitations in CTP3 that may or may not remain at RTM. One that you may run into is that you can only use fully qualified MDX unique names in your queries, so

[DimDate].[CalendarYear].&[2010]

…will work but

[2010]

…will not. To be honest, I consider it a best practice to use fully qualified unique names anyway so I’m not too bothered about this. Drillthrough doesn’t work at the moment either.

MDX calculations defined in the WITH clause of a query are supported, which is really useful if you’re writing custom MDX queries for SSRS. For example, the following query works and generates the same SQL (though with a few more executions) as the previous query:

WITH
MEMBER [Measures].[TestMDXCalcMeasure] AS 
SUM(NULL:[DimDate].[CalendarYear].CURRENTMEMBER,
[Measures].[TestMeasure])

SELECT
{[Measures].[TestMeasure],
[Measures].[TestMDXCalcMeasure]} 
ON 0,
[DimDate].[CalendarYear].[CalendarYear].MEMBERS 
ON 1
FROM
[Model]

[screenshot]

All in all, this looks like a solid piece of work by the SSAS dev team. Go and test it! I would love to hear from anyone with genuinely large amounts of data (maybe APS/PDW users?) regarding their experiences with 2016 DirectQuery. Recently I’ve been working with a customer using SSAS Multidimensional in ROLAP mode on top of Exasol and I’ve been surprised at how well it works; I would imagine that 2016 DirectQuery and APS would be an even better combination.

One last thought. If we get the ability to query a cloud-based Power BI model with MDX, and MDX on DirectQuery is supported in Power BI too, why would you bother paying for an expensive SQL Server Enterprise/BI Edition licence plus hardware to use DirectQuery when you can get almost the same functionality in the cloud for a fraction of the price?

Power BI DirectQuery Mode: Not Just SSAS DirectQuery v2.0

When DirectQuery mode for Power BI was announced I assumed it was just the next version of SSAS Tabular DirectQuery mode with the same extra features that we’ll get in SSAS 2016 (such as better SQL generation, and other benefits enabled by Super-DAX). If it was just that I would have been happy, but there’s something else that was mentioned in Miguel’s video introducing the feature that I almost missed, something that is also hinted at in the documentation where it mentions the following limitation:

If the Query Editor query is overly complex an error will occur. To remedy the error you must: delete the problematic step in Query Editor, or Import the data instead of using DirectQuery

It turns out that Power BI in DirectQuery mode is actually SSAS DirectQuery version 2.0 combined with the query folding capabilities of Power Query/Power BI “Get Data” (where the logic in your queries is pushed back to the data source rather than evaluated inside Power BI) – which is quite interesting.

Let’s look at an example using the Adventure Works DW database and SQL Server. If you import just the DimDate table in DirectQuery mode and create a table that shows the count of values from the DateKey column grouped by CalendarYear, like this:

[screenshot]

The following SQL will be generated:

SELECT 
TOP (1000001) [t0].[CalendarYear] AS [c15],
COUNT_BIG([t0].[DateKey])
 AS [a0]
FROM 
(
(select [$Table].[DateKey] as [DateKey],
    [$Table].[FullDateAlternateKey] as [FullDateAlternateKey],
    [$Table].[DayNumberOfWeek] as [DayNumberOfWeek],
    [$Table].[EnglishDayNameOfWeek] as [EnglishDayNameOfWeek],
    [$Table].[SpanishDayNameOfWeek] as [SpanishDayNameOfWeek],
    [$Table].[FrenchDayNameOfWeek] as [FrenchDayNameOfWeek],
    [$Table].[DayNumberOfMonth] as [DayNumberOfMonth],
    [$Table].[DayNumberOfYear] as [DayNumberOfYear],
    [$Table].[WeekNumberOfYear] as [WeekNumberOfYear],
    [$Table].[EnglishMonthName] as [EnglishMonthName],
    [$Table].[SpanishMonthName] as [SpanishMonthName],
    [$Table].[FrenchMonthName] as [FrenchMonthName],
    [$Table].[MonthNumberOfYear] as [MonthNumberOfYear],
    [$Table].[CalendarQuarter] as [CalendarQuarter],
    [$Table].[CalendarYear] as [CalendarYear],
    [$Table].[CalendarSemester] as [CalendarSemester],
    [$Table].[FiscalQuarter] as [FiscalQuarter],
    [$Table].[FiscalYear] as [FiscalYear],
    [$Table].[FiscalSemester] as [FiscalSemester]
from [dbo].[DimDate] as [$Table])
)
 AS [t0]
GROUP BY [t0].[CalendarYear] 

Then, if you go to Edit Queries and set a filter on EnglishDayNameOfWeek so that you only get the dates that are Fridays, like so:

[screenshot]
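
Behind the scenes, the Query Editor adds a filter step to the query very much like the one in the DimDate query shown earlier in this post. A sketch of what the resulting M looks like (the server name here is illustrative):

let
    Source = Sql.Database("localhost", "adventure works dw"),
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data],
    //the filter step added by the Query Editor UI
    #"Filtered Rows" = Table.SelectRows(
     dbo_DimDate, 
     each ([EnglishDayNameOfWeek] = "Friday"))
in
    #"Filtered Rows"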

Then click Close And Apply, and you’ll see that the table now shows the count of dates in each year that are Fridays (as you would expect):

[screenshot]

…and the SQL generated also reflects that filter:

SELECT 
TOP (1000001) [t0].[CalendarYear] AS [c15],
COUNT_BIG([t0].[DateKey])
 AS [a0]
FROM 
(
(select [_].[DateKey],
    [_].[FullDateAlternateKey],
    [_].[DayNumberOfWeek],
    [_].[EnglishDayNameOfWeek],
    [_].[SpanishDayNameOfWeek],
    [_].[FrenchDayNameOfWeek],
    [_].[DayNumberOfMonth],
    [_].[DayNumberOfYear],
    [_].[WeekNumberOfYear],
    [_].[EnglishMonthName],
    [_].[SpanishMonthName],
    [_].[FrenchMonthName],
    [_].[MonthNumberOfYear],
    [_].[CalendarQuarter],
    [_].[CalendarYear],
    [_].[CalendarSemester],
    [_].[FiscalQuarter],
    [_].[FiscalYear],
    [_].[FiscalSemester]
from [dbo].[DimDate] as [_]
where [_].[EnglishDayNameOfWeek] = 'Friday')
)
 AS [t0]
GROUP BY [t0].[CalendarYear] 

What’s happening here is that the output of “Get Data” (we so need a better name for this feature – how about “the functionality formerly known as Power Query”?) becomes the inner SELECT statement with the filter on EnglishDayNameOfWeek, while the table in the report that returns the count of dates by year is responsible for generating the outer SELECT statement with the GROUP BY (this is the part of the Power BI engine that is related to SSAS DirectQuery).

Now, you can only do this if all the steps in “Get Data” can be folded back to SQL. How do you know if they can or not? Well, if query folding can’t take place then you’ll get an error: this is what is meant by the warning about your query being “overly complex” in the documentation. Unfortunately there’s no way of knowing in advance what can be folded and what can’t; with every release of Power BI Desktop and Power Query I’ve noticed that more and more things can be folded (and I’m always pleasantly surprised at the transformations that can be folded, such as pivots and basic calculated columns), but there are still plenty of limitations. For example, at the time of writing, adding an index column to your query will prevent query folding and therefore break DirectQuery. If you do this, you’ll see the following error in the Query Editor window:

This step results in a query that is not supported in DirectQuery mode

[screenshot]
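
For illustration, the offending step would be a call to Table.AddIndexColumn – something like the following sketch, where #"Added Index" is a hypothetical step appended to the earlier query and cannot (at the time of writing) be folded back to SQL:

//adding an index column has no SQL translation,
//so it breaks query folding and therefore DirectQuery
#"Added Index" = Table.AddIndexColumn(#"Filtered Rows", "Index", 0, 1)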

Even with this restriction I think the ability to apply transformations in Get Data is very useful indeed, because it means you have a lot of scope for cleaning and filtering data in DirectQuery mode and therefore building ‘live’ reporting solutions on data that isn’t modelled the way you’d like it to be.

While I’m talking about DirectQuery mode, there are a few other points I’d like to mention:

  • Remember, it’s still in Preview and so it has some limitations and bugs. For example, I’ve hit an issue where DirectQuery fails with a connection from my “Recent Sources” list, but works ok if I create a new connection.
  • Prepare to be underwhelmed by the performance of DirectQuery, people: remember this is just ROLAP by another name, ROLAP has been around for years, and ROLAP has always had performance problems. These problems are not just related to the speed of the underlying engine or the size of the data – the ability of the OLAP engine to generate the SQL to get the data it needs also plays a major role. SSAS Multidimensional ROLAP and SSAS Tabular 2012/2014 DirectQuery generate some pretty wretched SQL even in the simplest scenarios, and it looks like Power BI DirectQuery is a big improvement on them. But what about more complex queries? This is a very difficult problem to crack. My feeling is that if your data does fit into Power BI’s native engine then you should import it rather than use DirectQuery, if you want to get the best possible query performance.
  • I suspect that this performance issue is also the reason why the New Measure button is greyed out in Power BI Desktop when you’re in DirectQuery mode. This isn’t a limitation of the engine because SSAS Tabular does support more complex DAX measures in DirectQuery mode, albeit with some restrictions on the functions you can use. However, the more complex your DAX measures are, the more complex the problem of generating SQL becomes and the more likely your queries are to be slow. I don’t think this is a good reason for completely preventing users from creating their own measures though: there are lots of scenarios where you will need to create measures and performance would still be acceptable. Maybe this is an example of an ‘advanced’ feature that could be switched on by power users?

Avoiding Duplication Of Database Connection Information In Power BI

In a year’s time there will be a great opportunity for someone to present a session on “Power BI Development Best Practices” at the PASS Summit. Before then, we will all have to work out what those best practices actually are – probably the hard way. With that in mind, here’s a suggestion for one good practice that came out of a conversation at this year’s PASS Summit (thank you Prathy).

If you’re connecting to a data source like SQL Server in Power BI (or Power Query for that matter) you’re probably going to be importing multiple tables of data. For example, if I was importing data from my local SQL Server instance and the Adventure Works DW database, I might see something like this in the Navigator pane in Power BI:

[screenshot]

Clicking the Load or Edit buttons would create five different queries to get data, one from each of the selected tables:

[screenshot]

The problem here is that each query duplicates the connection information for the SQL Server database; for example the M code for the FactInternetSales query looks like this:

let
    Source = 
        Sql.Database("chriszbook", "adventure works dw"),
    dbo_FactInternetSales = 
        Source{[Schema="dbo",Item="FactInternetSales"]}[Data]
in
    dbo_FactInternetSales

That means that if you ever need to change the server or database that the queries point to – maybe because the server has migrated, or because you’re moving the reports from dev to test to production – then you have to edit each of these five queries. Which would be a pain.

Ideally Power BI would create a single connection that each of these queries could share, something like a data source in SSRS. In fact I can see that this has already been raised as a suggestion on the forum here and is under review; I’m sure some more votes would raise its profile. However, there are certainly a number of ways you can avoid this kind of duplication by making your own changes to these queries.

One possible approach would be to create new Power BI queries that return the SQL Server instance name and the database name, and then have each of your main queries reference them. To do this you would need to:

1) Create a new query using the Blank Query option:

[screenshot]

2) Call the query SQLServerInstanceName:

[screenshot]

3) Open the Advanced Editor window by clicking the Advanced Editor button on the Home tab, delete all the automatically generated code in there, and replace it with the name of the SQL Server instance that you want to connect to, in double quotes:

[screenshot]
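
For example, using the instance name from the queries shown earlier, the entire contents of the Advanced Editor for the SQLServerInstanceName query would be just:

"chriszbook"

A Power BI query doesn’t need a let…in wrapper; a single text literal like this is a valid M expression on its own.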

4) Repeat steps 1-3 to create a new query called SQLServerDatabaseName that returns the name of the database that you want to connect to. It might also be a good idea to create a new group to separate these new queries from the ones that load the data:

[screenshot]

5) Edit the queries that actually return data so that instead of hard-coding the instance name and database name, they take these values from the queries you have just created. A Power BI query can return a value of any data type (not just a table), and the queries created in steps 1-4 return values of type text – the names of the SQL Server instance and database to connect to. These queries can now be used as variables in other queries, so after editing, the FactInternetSales query shown above would look like this:

let
    Source = 
        Sql.Database(SQLServerInstanceName, SQLServerDatabaseName),
    dbo_FactInternetSales = 
        Source{[Schema="dbo",Item="FactInternetSales"]}[Data]
in
    dbo_FactInternetSales

[screenshot]

Now, if you ever need to change the connection you just need to change the values in these two queries rather than edit every single query that returns data.

There are other ways of solving this problem: for example, you could have a query that returns the output of Sql.Database() (as used in the Source step of the FactInternetSales query shown above) and have all the other data load queries reference that – see the sketch below. I like the approach I show here, though, because it makes it very easy to see the values for the SQL Server instance and database that are currently in use. If you’re being even more ambitious – maybe because you have many queries in many .pbix files that connect to the same database – you could even store connection information somewhere outside the .pbix file, maybe in another SQL Server database. But if you did that, you would then need to worry about the connection information for that database too…
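
Here’s a minimal sketch of that shared-connection alternative; the query name AdventureWorksDW is illustrative:

//a query called AdventureWorksDW that just returns the database
let
    Source = Sql.Database(SQLServerInstanceName, SQLServerDatabaseName)
in
    Source

//FactInternetSales then references that query instead of calling Sql.Database itself
let
    dbo_FactInternetSales = 
        AdventureWorksDW{[Schema="dbo",Item="FactInternetSales"]}[Data]
in
    dbo_FactInternetSales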

Analysing Audience Reaction To The PASS Summit 2015 Keynote

If my post from a few weeks back asking for help with my session at PASS piqued your interest, I’m pleased to say that you can now watch the recording of “Analysing Audience Reaction To The PASS Summit 2015 Keynote” online here:

Despite having only 24 hours to prepare all my demos after collecting the data, I think it went pretty well. The data from Bing Pulse was particularly interesting: I had around 30-35 people providing regular feedback throughout the keynote and there were some obvious peaks and troughs, as you can see from this graph:

[screenshot: Bing Pulse results graph]

Audience satisfaction was a little flat for the first half hour and then rose noticeably when the content became more technical and the demos started. Both the SQL Server R Services and Stretch Database demos were the high points as far as my panel were concerned; the lowest point came when there was mild outrage at the use of Profiler in a demo.

Thanks once again to everyone who helped me out!