12th Blog Birthday

Today is the 12th anniversary of the first post on this blog, and as in previous years I’m going to use this as an opportunity to sum up my thoughts over what’s been going on in my corner of the Microsoft BI world in the last twelve months.

Power BI

I think it’s fair to say that 2016 was the year that Power BI became the big commercial success that many of us hoped it would be. After the achingly slow uptake of Power Pivot and the failure of the original Office 365 Power BI it’s great to see Microsoft BI with a hit on its hands. Many of my existing customers have started using it alongside the rest of the SQL Server BI stack, especially SSAS, because it’s much easier to build reports and share them via the browser or mobile devices than with SSRS or Excel. I’ve also started working with a new type of customer: small and medium organisations (including many not-for-profits) who have Office 365 but no existing BI solution, the kind of organisation that does not have the money or resources for a SQL Server BI solution or indeed any other kind of traditional BI solution. This, I believe, is where the real opportunity for Power BI lies and where the majority of the new growth will come from.

Apart from my own customers, there’s plenty of other evidence for the success of Power BI. The energy of the Power BI community, on forums and at user groups, is amazing – and once again, the people that I meet at user groups are completely different to the crowd you get at a normal SQL Server user group. The analysts love it too: for example, Microsoft is now in the Leaders section of the Gartner Magic Quadrant. There’s also the fact that competitors like Tableau have started attacking Power BI in their marketing, so I guess they must consider it a major threat.

Why has it been such a success? The underlying technology is great, but then again the technology was always great. The pace of change is incredible and it’s good to see Microsoft throwing its vast resources behind a product with some potential, rather than another Zune or Windows phone. There’s still some catching up to do but at this rate any existing gaps will have been closed by the end of 2017. The willingness to listen to customer feedback and act on it is refreshing. The Excel/Power Query/Power Pivot and SSAS crossover adds an angle that the competition doesn’t have. Finally, the licensing is almost perfect: it’s simple (compared to the thousands of SKUs that Microsoft usually comes up with) and cheap/free, although organisations with thousands of users who all need Pro subscriptions find the costs escalate rapidly; I’d like to see special deals for large numbers of users, and some recognition that many users who need to view Pro-level reports don’t need to create reports using those features. I know Microsoft has already heard this from a lot of people, though, and has taken it on board.

Probably the only criticism that I can make that Microsoft doesn’t seem to be actively addressing is the fact that the data visualisation functionality is relatively weak. If you know what you’re doing and you have the patience, you can create good-looking reports. For people like me who have minimal artistic talent and limited patience the experience of building reports can be frustrating. There are some features like small multiples that I can’t believe are still not implemented in the core product, and nothing to help users to follow good data visualisation practice. R visuals and custom visuals help fill the gap (I was quite impressed by this one by Business Solution Group, for example, which isn’t available in the gallery) but really Microsoft need to put some more thought into this area.

Analysis Services

There’s been a lot of good news in the world of Analysis Services this year too. SSAS Tabular 2016 dealt with a lot of the shortcomings that dogged it in 2012 and 2014: a much faster and less buggy development experience; many-to-many relationships supported using bi-directional cross filtering; and powerful new DAX functions and features like variables. SSAS Tabular v.next promises even more great new features such as the integration of the M language. These changes and the fact it’s now available in Standard Edition mean that Tabular should be the default choice over Multidimensional for almost all new SSAS projects.

Sadly, it looks like the neglect of Multidimensional will continue for the foreseeable future. I stopped being angry about this a long time ago and I understand that Microsoft need to concentrate their resources on SSAS Tabular and Power BI, but a lot of Multidimensional customers are now wondering where they stand. Either Microsoft needs to show some commitment to Multidimensional by adding new features – it wouldn’t take much to make a difference – or add features to Tabular that make it possible for more Multidimensional users to migrate over to it, for example equivalents to Scope statements or calculated members on non-Measures dimensions.

Last of all, Azure SSAS opens up a lot of exciting new possibilities for on-prem SSAS users as well as Power BI users. Kasper does a great job of summing them up here and I won’t repeat what he has to say; once again I’m seeing a lot of interest from my customers and I’m sure I’ll be helping a few to migrate to the cloud very soon. The pricing seems a bit excessive at the moment, even when you take into account the ability to pause servers, and I hope it changes before RTM. Also it’s SSAS Tabular only at this stage but support for Multidimensional is by far the top-voted request on the feedback forum, with more than five times as many votes as the next highest request, so maybe this will be Microsoft’s opportunity to show some love to the Multidimensional world?

The Show Hidden Cubes SSAS Connection String Property

If you need to write queries in SQL Server Management Studio against an SSAS cube that has its Visible property set to false, you have a problem because when you connect to your database you can’t see the cube or its metadata! Luckily you can override this and make the cube visible by setting the following connection string property when you connect:

Show Hidden Cubes=true


Connection string properties can be set in the Additional Connection Parameters tab of the connection dialog that appears when you open a new MDX query window.
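Once connected with that property set, you can query the hidden cube exactly as if it were visible. For example (the cube and measure names here are hypothetical, just to illustrate the idea):

```
SELECT
{[Measures].[Sales Amount]} ON COLUMNS
FROM [MyHiddenCube]
```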

Unfortunately this doesn’t make any objects in the cube that are not visible, like measures or dimensions, visible again – it just makes the cube itself visible. However, if you’re working on the Calculations tab of the Cube Editor in SSDT it is possible to make all hidden objects visible as I show here.

First Thoughts On The Integration Of Power Query/M Into Analysis Services Tabular

Last Friday’s big news was the release of the first CTP for Analysis Services v.next. Among several major new pieces of functionality (Ragged hierarchies! Drillthrough that works properly, even for calculations! Table-level security!) probably the biggest is the integration of Power Query/M into Analysis Services. As you can probably guess, I’m incredibly pleased that my two favourite technologies have got together. The technical details are given in this blog post, which I suggest you read if you haven’t done so already, but what I think is missing is an explanation of why this is so important and what kind of opportunities it opens up – hence this post. Of course this is just my take on the subject and not what Microsoft may actually be thinking; it’s also very early days, so as the functionality develops and I have more chance to think about this my opinions may change. If you have any ideas on this subject I would be interested to hear them so please leave a comment!

Why this had to happen: Power BI

There is an obvious reason why Microsoft decided to integrate Power Query/M into SSAS, and that is because it needs to support the conversion of Power BI models into Analysis Services Tabular models. There are two scenarios where this will be necessary.

The first is the ability to convert a Power BI model into an Azure Analysis Services Tabular model (listed as ‘planned’ here), something that will be a key selling point for Azure Analysis Services when it releases. The engine behind Power BI is essentially the same as the one used in Analysis Services so migrating the data model should be straightforward, but since Power BI uses Power Query/M to load data then a migrated Azure Analysis Services model will also have to use Power Query/M.

The second scenario is similar to the first. We now know that on-premises Power BI will be delivered through Reporting Services v.next, and it’s reasonable to assume Reporting Services will need a database engine to store the data for published Power BI reports. That engine will have to be an Analysis Services instance of some kind (either standalone or running in-process inside Reporting Services) and again for that to work Analysis Services will have to support the same data access mechanisms as Power BI.

Better support for a larger number of data sources

I’ve just argued why Microsoft was obliged to include this functionality in SSAS v.next but in fact there are many positive reasons for doing this too. The most obvious one is to do with support for more data sources. At the moment SSAS Tabular supports a pretty good range of data sources, but the world of BI is getting more and more diverse and in order to stay relevant SSAS needs to support far more than it does today. By using Power Query/M as its data access mechanism, SSAS v.next will immediately support a much larger number of data sources and this number is going to keep on growing: any investment that Microsoft or third parties make for Power BI in this area will also benefit SSAS. Also, because Power Query/M can query and fold to more than just relational databases, I suspect that in the future this will allow for DirectQuery connections to many of these non-relational data sources too.

Different data sources for partitions in the same table

Another benefit of this change is that we’ll have a lot more flexibility with partitioning tables in an SSAS Tabular model. As the blog post says:

As long as a partition’s M query adheres to the column mappings of the table, you are free to perform any transformations and pull in data from any data source defined in the model.

In SSAS 2016 the partitions in a table all have to get data from the same data source whereas in v.next we’ll be able to get data from different data sources in different partitions, and this opens up some interesting new possibilities. For example, I can imagine a simple budgeting application where the partitions in a table get data from different Excel workbooks stored in OneDrive for Business, and where each partition gets processed automatically when changes are saved to one of these workbooks.
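As a sketch of what this could look like (server, database, file and table names here are all hypothetical, and the final syntax may change before release), two partitions of the same table could use M queries like these, as long as both return the same columns:

```
// Partition 1: actuals, loaded from a SQL Server table
let
    Source = Sql.Database("myserver", "BudgetDB"),
    Actuals = Source{[Schema = "dbo", Item = "Actuals"]}[Data],
    // Both partitions must adhere to the same column mapping: Date, Amount
    Result = Table.SelectColumns(Actuals, {"Date", "Amount"})
in
    Result

// Partition 2: budget figures, loaded from an Excel workbook
let
    Source = Excel.Workbook(File.Contents("C:\Budgets\Budget2017.xlsx")),
    Budget = Source{[Item = "Budget", Kind = "Sheet"]}[Data],
    Promoted = Table.PromoteHeaders(Budget),
    Result = Table.SelectColumns(Promoted, {"Date", "Amount"})
in
    Result
```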

Does this replace SSIS and my data warehouse? 

The short answer is no. Power Query/M is not a full-featured ETL tool and I don’t think it ever will be; it certainly does not have the kind of functionality needed to perform enterprise-level ETL right now. My view is that Microsoft have built Power Query/M into SSAS for the reasons above and not to encourage enterprise SSAS users to do their own quick-and-dirty ETL when loading data (although there is a risk that that will happen anyway). That said, I think the dividing line between corporate and self-service BI will become increasingly blurred over the next few years as the Microsoft BI stack develops, and we’ll see Analysis Services being used in self-service scenarios as well as the more traditional corporate ones.

Centralised data source objects

One last thing to point out is that the way SSAS v.next makes a distinction between data sources and other queries is very interesting. In Power BI and Power Query it’s easy to end up with data source connection information duplicated across multiple queries unless you know what you’re doing, and this can cause no end of problems later on in a project. As far as I can see, in SSAS v.next a “data source object” is an M query that only contains the connection to external data, while all other queries have to reference a data source to be able to access external data. This means, as the blog post says:

Referring to data source objects helps to centralize data source settings for multiple queries and simplifies deployments and maintenance if data source definitions must be updated later on. When updating a data source definition, all M queries that refer to it automatically use the new settings.
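Based on that description, the pattern might look something like the following in M (all names are hypothetical, and the exact syntax may change before release). First, a data source object that contains nothing but the connection:

```
// Data source object: the connection definition and nothing else
let
    Source = Sql.Database("myserver", "AdventureWorksDW")
in
    Source
```

…and then any number of other queries that reference it, rather than duplicating the connection details:

```
// A table query that references the shared data source object
let
    Source = AdventureWorksDW,
    DimDate = Source{[Schema = "dbo", Item = "DimDate"]}[Data]
in
    DimDate
```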

I wonder whether this concept is coming to Power BI and Power Query at some point? I hope so – it makes a lot of sense.

Power BI Model Size Bloat And Auto Date/Time Tables

Opinion is split over Power BI’s ability to automatically create Date hierarchies in your model. Personally it drives me mad and I always turn it off, but I know a lot of people love the convenience of it. Whatever your feelings, though, it is important to be aware of the problems it can cause with the size of your model.

Imagine you have a .pbix file and you load just this one table into the data model:


Three columns, each containing three dates with long gaps in between, but only nine values overall. No other tables or queries, no measures, no visuals at all. Yet when you save it, the resulting file is a massive 4.7MB – why, when there’s hardly any data?

Actually, there is a lot of data hidden in this file. If you connect to the .pbix file with DAX Studio you can see that the Auto Date/Time functionality has built three hidden Date tables whose names are prefixed with “LocalDateTable”, one for each date column in the original table above:


These tables can be queried in DAX Studio, and the following query reveals more about them (if you try this on your model you will need to alter the name of the table used in the query to match the name of one of the tables in your model):

EVALUATE
ROW (
    "Hidden Date Table Rowcount",
        COUNTROWS ( 'LocalDateTable_17eac8aa-f559-4ade-971f-9a1ad5258fbe' ),
    "Min Date",
        MIN ( 'LocalDateTable_17eac8aa-f559-4ade-971f-9a1ad5258fbe'[Date] ),
    "Max Date",
        MAX ( 'LocalDateTable_17eac8aa-f559-4ade-971f-9a1ad5258fbe'[Date] )
)


In this case each of the three tables has 109938 rows. That’s one row for each date between the beginning of the year containing the earliest date in the source column and the end of the year containing the latest date in the source column – which is the best practice for building a Date table, but results in three very large tables in this case.
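As a sanity check on that figure: 109938 rows corresponds exactly to every date from 1 January 1900 to 31 December 2200 inclusive (my assumption about the source data, inferred from the row count), which you can verify with a one-line M expression:

```
// Number of days from 1 Jan 1900 to 31 Dec 2200, inclusive
Duration.Days(#date(2200, 12, 31) - #date(1900, 1, 1)) + 1
// 301 years × 365 days + 73 leap days = 109938
```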

To stop Power BI automatically building these tables for you, in Power BI Desktop go to the File menu, select Options, then Data Load and deselect the Auto Date/Time option:


When you do this the automatically created date tables are removed from the model. In this case, after saving, the .pbix file shrinks to 181KB! Remember that, by doing this, you won’t get automatic date hierarchies created for you when you are designing your reports and you will have to build any Date tables and hierarchies you need manually.

This is an extreme example of course, but overall you should probably turn off Auto Date/Time if your model size is an issue and:

  • You have a lot of date columns in your tables, or
  • Your date columns contain large ranges of dates. Remember that some systems use 1/1/1900 as an ‘unknown’ date value, which can make things a lot worse.

Passing Parameters To SQL Queries With Value.NativeQuery() In Power Query And Power BI

I first came across the Value.NativeQuery() M function about six months ago, but it didn’t do anything useful then so I didn’t blog about it. I checked it again recently though and now it does something very handy indeed: it allows you to pass parameters to SQL queries. Before this, if you wanted to use parameters in your SQL, you had to do some nasty string manipulation in the way Ken Puls shows here. Now, with Value.NativeQuery(), you can handle SQL query parameters properly in M.

Here’s a simple example that shows how it works, passing two parameters to a SQL query on the Adventure Works DW database:

let
    Source = Sql.Database("localhost", "Adventure Works DW"),
    Test = Value.NativeQuery(
            Source,
            "SELECT * FROM DimDate
            WHERE EnglishMonthName=@MonthName
            AND EnglishDayNameOfWeek=@DayName",
            [MonthName="March", DayName="Tuesday"])
in
    Test


Some explanation of what’s happening here:

  • The Source step returns a reference to a SQL Server database, and this is passed to the first parameter of Value.NativeQuery().
  • The second parameter of the Value.NativeQuery() function is the SQL query to be executed. It contains two parameters called @MonthName and @DayName.
  • The parameters to the SQL query are passed using a record in the third parameter of Value.NativeQuery(). Note how the names of the fields in the record match the names of the parameters in the SQL query.

It looks like, eventually, this will be the way that any type of ‘native’ query (ie a query that you write and give to Power Query, rather than a query that is generated for you) is run against any kind of data source – instead of the situation we have today where different M functions are needed to run queries against different types of data source. I guess at some point the UI will be updated to use this function. I don’t think it’s ‘finished’ yet either, because it doesn’t work on Analysis Services data sources, although it may work with other relational data sources – I haven’t tested it on anything other than SQL Server and SSAS. There’s also a fourth parameter for Value.NativeQuery() that can be used to pass data source specific options, but I have no idea what these could be and I don’t think there are any supported for SQL Server. It will be interesting to see how it develops over the next few releases.

[Update February 2021]

For relational databases that use question marks (?) instead of @ names for parameter placeholders, you need to use Value.NativeQuery in a slightly different way, passing a list of values rather than a record to the third parameter. Here’s an example:

let
    Source = Sql.Database("localhost", "Adventure Works DW"),
    Test = Value.NativeQuery(
            Source,
            "SELECT * FROM DimDate
            WHERE EnglishMonthName=?
            AND EnglishDayNameOfWeek=?",
            {"March", "Tuesday"})
in
    Test


Technitrain SQL Server & Microsoft BI Training Course Schedule For 2017

If you’re looking for expert-led, classroom based training on SQL Server and Microsoft BI in London, check out the training courses we have coming up in 2017 at Technitrain:

Introduction to Power BI with Chris Webb, March 13-15 2017
A three-day introduction to building and managing a BI solution using Power BI, suitable for both analysts and BI developers.

Analysis Services Tabular Workshop with Alberto Ferrari, March 20-21 2017
A two-day course for BI developers who want to learn to build Analysis Services Tabular models.

Mastering DAX with Alberto Ferrari, March 22-24 2017
Learn DAX from the master! Suitable for anyone who needs to know how to write calculations and queries for Analysis Services Tabular, Power Pivot or Power BI.

Mission Critical SQL Server with Allan Hirt, March 27-30 2017
This advanced 4-day workshop, designed for SQL Server professionals and IT admins alike, will teach you how to build solid high availability and disaster recovery solutions for SQL Server.

Real-World Cube Design And Performance Tuning With Analysis Services Multidimensional with Chris Webb, May 8-10 2017
A course aimed at intermediate-to-experienced Analysis Services Multidimensional developers, looking at more advanced cube design topics and query performance tuning.

From Zero To BIML with Andy Leonard, June 19-22 2017
A four-day course aimed at BI developers who want to use Business Intelligence Markup Language (BIML) to generate SQL Server Integration Services packages.

Introduction To MDX with Chris Webb, September 25-27 2017
A three-day introduction to writing queries and calculations in MDX for Analysis Services Multidimensional developers and report writers.

Database DevOps with Alex Yates, September 25-27 2017
Aimed at anyone who is responsible for SQL Server schema changes, this course will teach you how to deliver more frequently and more reliably as well as how to solve common technical (and cultural) problems.


If you’d like to keep up-to-date with all our new course announcements, please join our mailing list!

Power Query, Power BI And The “Allow Data Preview To Download In The Background” Option

Recently I was asked by a customer to do some tuning on an Excel workbook with a lot of Power Query queries in it. Although all of the data used in the queries was coming from tables in the workbook itself and the data volumes were small, there were fifty Power Query queries and clicking Refresh All resulted in a large, prolonged spike in CPU and memory usage by Excel.

Only a small number of these fifty queries were being loaded into the workbook and none were being loaded into the Excel Data Model. The queries that were being loaded into the workbook were referencing several other queries that in turn referenced several other queries, and indeed there were some reference chains that were over ten queries long. To give you an idea of the complexity here’s what the Query Dependencies view looked like:


I’m a big fan of using references to split complex logic up into separate queries, and in this case it was absolutely the right thing to do because otherwise the workbook would have been unmaintainable. That said, there was clearly something going wrong with the refresh in this case.

On further investigation I found that if I individually refreshed the small number of queries that actually loaded data into the workbook, they all refreshed very quickly and with none of the ill-effects seen with a Refresh All. So if it wasn’t the queries that were being loaded into the workbook, what was the problem? It turns out it was the queries that weren’t being loaded into the workbook.

Both Power Query and Power BI load previews of the data returned by a query for display in the Query Editor; clicking Refresh All in the workbook was obviously triggering a refresh of these previews and this was what was using all the memory and CPU. I found that to prevent this happening I had to use an option that was introduced in Power BI in January 2016 and is also now present in Power Query/Get & Transform in Excel: Allow Data Preview To Download In The Background.

You can find this option in Excel by going to the Query Options dialog:


…and then going to Current Workbook/Data Load. If you then deselect “Allow data preview to download in the background”:


…you will find that the background refresh of query previews stops.

In my case, after I had done this – and with a bit of other tuning using Table.Buffer() – the workbook refreshed very quickly indeed and there was no spike in CPU or memory after a Refresh All.
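For reference, the Table.Buffer() tuning mentioned above followed the usual pattern: when one query is referenced by several downstream queries, buffering its result can stop the same transformations being re-evaluated once per reference within a single refresh. A minimal sketch, with a hypothetical table name:

```
// Buffer a query that several other queries reference,
// so its result is evaluated once and then reused in memory
let
    Source = Excel.CurrentWorkbook(){[Name = "SalesData"]}[Content],
    Buffered = Table.Buffer(Source)
in
    Buffered
```

Note that buffering only helps within a single evaluation chain; it is not a cache shared between completely independent queries, so it is worth testing whether it actually improves refresh times in your workbook.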

Other people have run into the same problem in Excel and also in Power BI Desktop (see here and here), so it looks like this is an important property to change if you have a large number of queries in a single workbook or pbix file.
