Filtering Data Loaded Into A Workspace Database In Analysis Services Tabular 2017 And Azure Analysis Services

The first mistake that all new Analysis Services Tabular developers make is this one: they create a new project in SSDT, connect to their source database, select the tables they want to work with and click Import, and then realise that trying to load a fact table with several million rows of data into their Workspace Database (whether that's a separate Workspace Database instance or an Integrated Workspace) is not a good idea. Either they end up waiting several hours for the import to finish, or SSDT crashes because it has run out of memory. You of course need to filter your data down to a manageable size before you start developing in SSDT. Traditionally this has been done at the database level, for example using views, but the modern data sources in SSAS 2017 and Azure Analysis Services allow for a new approach using M.

Here’s a simple example of how to do this using the Adventure Works DW database. Imagine you are developing a Tabular model and you have just connected to the relational database, clicked on the FactInternetSales table and clicked Edit to open the Query Editor window before importing. You’ll see something like this:

image

…that’s to say there’ll be a single query visible in the Query Editor with the same name as your source table. The M code visible in the Advanced Editor will be something like this:

let
    Source = #"SQL/localhost;Adventure Works DW",
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data]
in
    dbo_FactInternetSales

At this point the query is importing all of the data from this table, but the aim here is to:

  1. Filter the data down to a much smaller number of rows for the Workspace Database
  2. Load all the data in the table after the database has been deployed to the development server

To do this, stay in the Query Editor and create a new parameter: go to the menu at the top of the Query Editor, click Query/Parameters/New Parameter, and create a parameter called FilterRows of type Decimal Number with a Current Value of 10:

image

The parameter will now show up as a new query in the Queries pane on the left of the screen:

image

Note that at the time of writing there is a bug in the Query Editor in SSDT that means that when you create a parameter, close the Query Editor and then reopen it, the parameter is no longer recognised as a parameter: it is shown as a regular query that returns a single value with some metadata attached. Hopefully this will be fixed soon, but it's not a massive problem for this approach.
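This also gives you a chance to see what a parameter really is behind the scenes: a single value with a metadata record attached. If you open the Advanced Editor for the FilterRows parameter you'll see M along the lines of the following (a rough sketch only; the exact metadata record SSDT generates may differ slightly):

// A sketch of the parameter's definition: the value, plus the metadata record
// that marks this query as a parameter (the exact record may vary by version)
10 meta [IsParameterQuery = true, Type = "Number", IsParameterQueryRequired = true]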

Anyway, with the parameter created you can now use the number that it returns to filter the rows in your table. You could, for example, decide to implement the following logic:

  • If the parameter returns 0, load all the data in the table
  • If the parameter returns a value larger than 0, interpret that as the number of rows to import from the table

Here’s the updated M code from the FactInternetSales query above to show how to do this:

let
    Source = #"SQL/localhost;Adventure Works DW",
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    FilterLogic =
        if FilterRows <= 0 then
            dbo_FactInternetSales
        else
            Table.FirstN(dbo_FactInternetSales, FilterRows)
in
    FilterLogic

The FactInternetSales query will now return just 10 rows because the FilterRows parameter returns the value of 10:

image

And yes, query folding does take place for this query.

You now have a filtered subset of rows for development purposes, so you can click the Import button and carry on with your development as usual. Only 10 rows of data will be imported into the Workspace Database:

image

What happens when you need to deploy to development though?

First, edit the FilterRows parameter so that it returns the value 0. To do this, in the Tabular Model Explorer window, right-click on the Expressions folder (parameters are classed as Expressions, ie queries whose output is not loaded into Analysis Services) and select Edit Expressions:

image

Once the bug I mentioned above has been fixed it should be easy to edit the value that the parameter returns in the Manage Parameters pane; for now you need to open the Advanced Editor window by clicking the button shown below on the toolbar, and then edit the value in the M code directly, changing the 10 at the start of the parameter's definition to 0:

image

Then close the Advanced Editor and click Import. Nothing will happen now – the data for FactInternetSales stays filtered until you manually trigger a refresh in SSDT – and you can deploy to your development server as usual. When you do this, all of the data will be loaded from the source table into your development database:

image

At this point you should go back to the Query Editor and edit the FilterRows parameter so that it returns its original value again, otherwise you may accidentally load the full dataset the next time you process the data in your Workspace Database.

It would be a pain to have to change the parameter value every time you wanted to deploy, however, and luckily you don't have to if you use BISM Normalizer, a free tool that all serious SSAS Tabular developers should have installed. One of its many features is the ability to do partial deployments, and if you create a new Tabular Model Comparison (see here for detailed instructions on how to do this) it will show the differences between the project and the version of the database on your development server. One of the differences it will pick up is the difference between the value of the parameter in the project and in the development database, and you can opt to Skip updating the parameter value when you do a deployment from BISM Normalizer:

image

Analysis Services Query Analyzer

Last week a new, free tool for analysing the performance of MDX queries on SSAS Multidimensional was released: Analysis Services Query Analyzer. You can get all the details and download it here:

https://ssasqueryanalyzer.github.io/

…and here’s a post on LinkedIn by one of the authors, Francesco De Chirico, explaining why he decided to build it:

https://www.linkedin.com/pulse/asqa-10-released-francesco-de-chirico/

I've played around with it a bit and I'm very impressed: it's a really sophisticated and powerful tool, and one I'm going to spend some time learning because I'm sure it will be very useful to me. It's an add-in for SQL Server Management Studio that captures Profiler and Perfmon data for an MDX query when you run it, on both a cold and a warm cache, and then displays the results in a variety of graphs and charts. And it even has an MDX formatter built in!

image

image

image

You can find a full list of features here, and when you download the tool it comes with very detailed documentation. Definitely a must-have for all serious SSAS Multidimensional developers.

Running M Queries In Visual Studio With The Power Query SDK

Writing M in the Advanced Editor in Excel or Power BI can be a frustrating experience unless you’re the kind of masochist who loves writing code in Notepad. There are some options for writing M code outside Excel and Power BI, for example Lars Schreiber’s M extension for Notepad++ (see here for details) or the M extension for Visual Studio Code (available from the Visual Studio Marketplace here; more details on Brett Powell’s blog here), but the trouble with them is that you have to copy the code back into Excel or Power BI to run it. What many people don’t realise, however, is that it is possible to write M code and have IntelliSense, formatting, keyword highlighting and also the ability to execute your own M queries, using the Power Query SDK in Visual Studio.

The Power Query SDK (which you can download here) supports Visual Studio 2015 and 2017 and is intended for people who are writing custom Data Connectors for Power BI. To let you test your Data Connector you can create a .pq file containing M code, and this in fact allows you to run any M query you want whether you’re building a Data Connector or not.

Here’s how. First, install the Power Query SDK and then open Visual Studio and create a new project. Find the Power Query template, select the PQ file option and give your file a name:

image

Then, in the .pq file that is created, you can enter an M query and then either press the Start button on the toolbar or hit F5 to run the query. The output of the query is shown on the Output tab in the M Query Output window:

image
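Incidentally, you don't need a data source to try this out: a completely self-contained query like the one below, which just builds and sorts a small table of made-up sample data, is enough to see the SDK in action.

let
    // Build a small table of dummy data in memory
    Source = #table(
        type table [Fruit = text, Sales = number],
        {{"Apples", 10}, {"Pears", 20}, {"Oranges", 15}}
    ),
    // Sort it so there is at least one transformation step to evaluate
    SortedRows = Table.Sort(Source, {{"Sales", Order.Descending}})
in
    SortedRows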

Right-clicking on your project in the Solution Explorer pane and selecting Properties brings up a Properties dialog with various properties that control how your queries behave:

image

image

Many of the properties are self-explanatory, at least for anyone used to writing M in Power BI or Excel. FastCombine turns off data privacy checks. Allow Native Query lets you use M queries that contain 'native' queries (for example your own SQL queries if you're using a SQL Server data source), as Cédric Charlier shows here. A few of them, such as Legacy Redirects, I have no idea about yet (I should really ask someone…). Turning on Show Engine Traces displays engine trace information in the Log tab of the M Query Output pane; turning on Show User Traces displays, also in the Log tab, the trace information generated by any Diagnostics.Trace() calls in your code. You can save the contents of the Log tab to a text file.
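Here's a minimal, purely illustrative example of the kind of query that writes a user trace with Diagnostics.Trace():

let
    Numbers = {1, 2, 3, 4, 5},
    // Diagnostics.Trace() writes the message to the trace log and returns its third
    // argument, so the query still returns the sum of the list
    Output = Diagnostics.Trace(TraceLevel.Information, "About to sum the list", List.Sum(Numbers))
in
    Output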

image
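Going back to the Allow Native Query property: here's a sketch of the kind of M query it allows, passing a hand-written SQL statement through to SQL Server with Value.NativeQuery(). The server and database names here are just examples:

let
    // Connect to a SQL Server database (example server and database names)
    Source = Sql.Database("localhost", "Adventure Works DW"),
    // Run a hand-written SQL statement; this is the kind of query that needs Allow Native Query turned on
    Result = Value.NativeQuery(Source, "SELECT TOP 10 * FROM dbo.FactInternetSales")
in
    Result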

Error messages are displayed on the Errors tab of the M Query Output pane:

image

When you have a query that connects to an external data source, the first time you try to run it you will be prompted on the Errors tab to set the credentials used to connect to that data source (as you would in Power BI Desktop), as well as the data privacy level for that data source:

image

The query won’t actually run this first time though; you’ll need to hit Start/F5 again to see the results. If you close the project and then reopen it you will need to enter credentials again; alternatively, on the Credentials tab you can save the credentials used for a data source to a .crd file which can then be reloaded when you reopen your project. You can also edit and delete credentials on the Credentials tab.

image

If I’m honest it’s all very basic but it does the job. The main thing that I miss from writing M code in Power BI is the Query Editor UI – when I write M code there I only write about 50% of it manually, the rest I generate by clicking buttons in the UI because it’s faster. Give me the Query Editor (or the ribbonless version of it that comes with SSDT, because Visual Studio doesn’t support ribbons apparently) and I’ll be happy. Even better, give me the improved code editing functionality in the Advanced Editor in Power BI Desktop and Excel that we’ve been promised!

Creating Excel “Data Dump” Reports From Power BI

We’ve all met them: the users whose idea of a ‘report’ is a large table of data dumped into Excel. However many beautiful data visualisations they see, however many times you try to convince them of the benefits of using Power BI to build reports, they just want to know where the “Export to Excel” button is so they can carry on analysing data in the same old way. Sometimes there’s a valid reason for doing this, sometimes not, but all too often internal politics means that you have to accommodate them.

If you’re using Power BI the obvious way to do this is to use the “Export Data” button in Power BI. However, as the documentation notes, there are some limits on the amount of data that can be exported:

  • The maximum number of rows that can be exported from Power BI Desktop and Power BI service to .csv is 30,000
  • The maximum number of rows that can be exported to .xlsx is 150,000

What’s more, clicking a button to export data and then copying it into an Excel report is a time-consuming, error-prone, manual process that no-one likes doing.

There is another way to get data from Power BI into Excel though, using the Analyze in Excel feature. The obvious approach is to use Analyze in Excel to create a PivotTable, but a PivotTable may suffer from the subtotals issue described here if you aren't using the click-to-run version of Excel 2016 (and as a result may be very slow when dealing with large amounts of data), and it is awkward to get data out of because you have to use Excel functions like GetPivotData(). In this post I'll show you how to get data from Power BI into an Excel table instead.

First of all you need to create a PivotTable in Excel on your desktop that is linked to a dataset published to Power BI. You can do this either by clicking the Analyze in Excel option from inside the Power BI service in the browser (see here for how to do this), or by using the Connect to Data option in the Power BI Publisher for Excel add-in.

Next, drag any measure into your PivotTable (you will need to have at least one measure defined in your dataset to do this) like so:

image

Then double-click inside a cell containing values, such as the selected cell in the screenshot above, or right-click on the cell and select Show Details:

image

When you do this a new worksheet will appear with a table in it containing rows of data from a table in the underlying dataset, but the exact data is irrelevant here. The real point is that you now have an Excel table (not a PivotTable) with a connection back to your Power BI dataset and you can edit the query that it uses to return data from Power BI – a variation on an old trick known to SSAS and Power Pivot users. You can now delete the PivotTable you created because you will no longer need it.

On the new worksheet with the table on it, right-click in a cell and select Table then Edit Query:

image

A dialog will pop up, and you’ll see an MDX Drillthrough statement in the Command Text box as shown below:

image

You can replace this statement with any MDX or DAX query you want. I recommend using DAX queries here because they are likely to be easier to write if you already know how to write DAX calculations, and unlike MDX queries they return a tabular resultset. I have a fairly old series of posts on DAX queries here that will help get you going. It's a good idea to install DAX Studio and write your queries there, then copy them into Excel when you are happy with them.

Once you have a working DAX query in the Command Text box:

image

You can click OK, the query will run and you’ll see the output of the query in the Excel table:

image

One minor annoyance is that the column names will be in DAX ‘Table Name’[Column Name] format, and although you can use the SELECTCOLUMNS() DAX function to alias your column names and thereby remove the table names, the square brackets around the column names will always be there.

There are several good things about this approach to getting data into Excel:

  • It allows you to exceed the 150,000 row limit of the native Power BI Export option mentioned above
  • The queries are very quick to run if you aren’t doing anything complex in them
  • It avoids manual exporting – you just use Excel’s native Refresh functionality to run the query whenever you want to download new data into Excel

There are some obvious downsides though:

  • There’s no easy way to pass parameters to the queries you use, and so allow the user to choose what data is retrieved from Power BI. I guess it would be possible with some VBA, though – I haven’t tried – and it may also be possible to connect Power Query/Get&Transform to the Power BI Service and then use some of the tricks I show in this video for building reporting solutions in Excel. You can of course use Excel’s own native table filtering functionality to filter the data that is downloaded.
  • If your DAX query is slow to run your users may get frustrated. To counter this you could create calculated tables in your Power BI dataset using the same DAX as your query, and because calculated tables are created when the Power BI dataset is refreshed rather than when your user refreshes their Excel worksheet this should speed things up. However it will increase the size of your Power BI dataset and make your dataset refresh take longer. In Excel your DAX query would simply be something like:
    EVALUATE 'My Calculated Table Name'
  • Writing DAX can be complex, whether it's a DAX query or a calculated table, so another option would be to use the Power BI Query Editor to create the table you need for your report, load that into your dataset and then load the table into Excel (see the sketch after this list).
  • You can't publish the Excel workbook up to Power BI and have it refresh automatically, because of course Power BI does not support connections from published Excel workbooks back to Power BI datasets. Hopefully this will change in the future: it's marked as planned on the Ideas forum.
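To make the Query Editor option above a little more concrete, here's a rough sketch of the kind of M you might use in the Power BI Query Editor to build a pre-summarised table; the Sales table and its column names are invented purely for illustration:

let
    // Hypothetical query called Sales defined elsewhere in the dataset
    Source = Sales,
    // Summarise to the grain the Excel report needs, so the DAX query in Excel stays trivial
    SummarisedSales = Table.Group(
        Source,
        {"ProductCategory"},
        {{"Total Sales", each List.Sum([SalesAmount]), type number}}
    )
in
    SummarisedSales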

Of course doing this goes against the best practices that I and most other people recommend for Power BI, so you should probably only do this if you have no other choice. Sometimes you have to bend the rules a little to make sure your project succeeds…

Upcoming Power BI and Azure SSAS training in the UK, Israel and the Netherlands

If you have some training budget to spare why not come along to one of the pre-conference seminars/training courses I have coming up in the next few months? Specifically:

  • Introduction to Azure Analysis Services – a pre-conference seminar on Thursday 22nd February at SQLBits 2018 in London. You’ll learn about what Azure Analysis Services is, why you should use it and how to build Tabular models for it. More details and registration here.
  • Power BI for analysts and developers – a three-day course covering the whole Power BI development lifecycle in Herzliya, Israel, on March 12th-14th. Full agenda and registration here.
  • Loading and transforming data in Power BI and Power Query – a pre-conference seminar on Friday 23rd March before the Dutch Power BI User Day in Utrecht. It will cover everything you need to know about the Query Editor in Power BI and Excel, going from the basics to more advanced topics such as parameters, functions and the M language. Full agenda here.

Incidentally, if you’re in London for SQLBits don’t forget to come to the London Power BI User Group meeting on Wednesday 21st February: I’m doing a panel discussion with various other Power BI folks. I’m also going to be speaking at the Dublin Data and BI Summit (run by the Power BI User Group) at the end of April, and at several other user groups later in the year; follow me on Twitter (I’m now @cwebb_bi) to find out the details when they’re available. I hope to see you at one of these events!