Setting SQL Server Connection String Properties In Power BI and SSAS Tabular Modern Data Sources

It may not be immediately obvious, but you cannot set your own connection string properties when connecting to SQL Server using the built-in SQL Server connector from either Power BI or a modern data source in Azure SSAS/SSAS Tabular 2017:

image

All you can do is configure the options that are available in the UI, which in the current version of SSDT looks like this:

image

…and which are documented for the Sql.Databases() M function here.

It turns out that the restriction on using your own connection string properties in the built-in SQL Server connector is a deliberate design decision on the part of the Power Query team because, behind the scenes, they use different providers in different circumstances to optimise performance, and because allowing arbitrary connection string properties might make maintaining backwards compatibility difficult in the future.

While your average Power BI user is unlikely to even notice this, for SSAS Tabular developers it could be a big problem: complete control over the connection string is often necessary in enterprise BI scenarios. What are the alternatives, then? Well, you can use the OLE DB and ODBC connectors instead:

image

Both of these connectors do allow you to set your own connection string properties. For example, here’s the UI for a new ODBC connection in SSDT:

image 

The documentation for the Odbc.DataSource and OleDb.DataSource M functions has more detail on how these connectors can be used and how connection string properties can be set. Remember also that the OLE DB Provider for SQL Server was un-deprecated in October 2017.
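For example, here’s a minimal sketch of an M query that uses Odbc.DataSource() with a hand-built connection string – the driver version, server and database names are illustrative, and the navigation steps you need after this will vary by driver:

let
    // Arbitrary connection string properties (such as ApplicationIntent)
    // can be set freely here, unlike with the built-in SQL Server connector
    Source = Odbc.DataSource(
        "Driver={ODBC Driver 13 for SQL Server};" &
        "Server=localhost;Database=Adventure Works DW;" &
        "ApplicationIntent=ReadOnly",
        [HierarchicalNavigation = true]
    )
in
    Source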

However, apart from possible performance differences between the two (which you should test yourself – Henk van der Valk wrote a good post on this for SSAS MD, and most of what he said is relevant for Tabular) there’s one less-than-obvious difference between these two options: the OLE DB connector does not appear to support query folding right now, whereas the ODBC connector does. Of course this isn’t an issue if you’re writing your own SQL queries to import data, but if you do want to use M functions for partitioning (as I show here) you’re likely to get very poor performance with the OLE DB connector.

Using Your Own SQL Queries For Tables With Modern Data Sources In SSAS 2016+ And Azure Analysis Services

While the integration of the Power Query engine into Analysis Services Tabular 2017 and Azure Analysis Services with modern data sources will certainly bring a lot of benefits, I think it’s fair to say that the implementation has not been entirely painless. One problem is that it is no longer obvious how to specify your own SQL query to populate a table or partition in your Tabular model – and while the Query Editor is great, there are a lot of cases where this is necessary. In this post I’ll show you how to do this.

If you’re used to using the Power Query UI in Excel or Power BI Desktop, you’ll notice that when you connect to a SQL Server database using the SQL Server connector in SSDT:

image

…there is no option to enter your own SQL query when you do so:

image

This is deliberate. In Analysis Services, unlike Power BI and Excel, there is a distinction made between data sources and other M queries that return data from those data sources, one that makes a lot of sense in my opinion. While it is possible to enter your own SQL for other data source types, such as OLE DB connections, a data source object is really intended just to define a connection to a data source and not to define what data you want from that data source.

[You may also notice that there’s a “SQL statement” property on a SQL Server data source visible in the Visual Studio properties pane, but I don’t recommend you use it – it doesn’t seem to work well with the rest of the SSDT/Power Query UI]

To import a table or view in your database all you have to do is right-click on your data source and choose Import New Tables; my blog post from September last year describes how to do this, and how to use M functions for creating partitions.

To use your own SQL queries though you need to write some M code. First, import a table – any table, but preferably a small one – and get to the Query Editor UI. In this case I’ve imported the DimDate table from the Adventure Works DW database:

image

Next, select your query in the Queries pane on the left-hand side of the screen and open the Advanced Editor either by clicking on the relevant button in the toolbar (shown above) or by right-clicking on the query name in the Queries pane. You’ll see the following dialog:

image

The M code will be something like this:

let
    Source = #"SQL/localhost;Adventure Works DW",
    dbo_DimDate = Source{[Schema="dbo",Item="DimDate"]}[Data]
in
    dbo_DimDate

In this example the Source step creates a reference to the data source you have already created, and the dbo_DimDate step gets the contents of the DimDate table from this data source.

You can modify this code to use your own SQL by using the Value.NativeQuery() function (which I have blogged about here):

let
    Source = #"SQL/localhost;Adventure Works DW",
    MyQuery = Value.NativeQuery(
        Source,
        "SELECT DISTINCT FiscalYear FROM DimDate"
    )
in
    MyQuery

Here what I’ve done is replace the dbo_DimDate step in the previous query with a step called MyQuery that uses Value.NativeQuery() to run my own SQL against the data source referenced in the Source step.

image

Now all you need to do is click Import and you have the output of the query loaded into SSAS. It would be nice if there was UI support for using your own SQL queries when importing data in the future. Note that, as soon as you use this method, any other steps or queries further downstream will not be able to perform query folding, so you should make sure that you do as much of your filtering and transformation in the SQL as possible; otherwise you may encounter performance problems.
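For example, rather than filtering the output of Value.NativeQuery() in a later M step (where the filter won’t fold back to SQL Server), you could push the filter into the SQL itself:

let
    Source = #"SQL/localhost;Adventure Works DW",
    // The WHERE clause does the filtering on the server,
    // instead of a non-folding Table.SelectRows step in M
    MyQuery = Value.NativeQuery(
        Source,
        "SELECT DISTINCT FiscalYear FROM DimDate WHERE FiscalYear >= 2003"
    )
in
    MyQuery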

The documentation describes a similar – but not identical – workflow for achieving the same result here. Personally I think it’s counter-intuitive that you should click on Expressions to create a Table object! Expressions are used for functions and other M code that is shared by the M queries used by Tables.

An alternative to doing all this is to go back to the old way of doing things and use a legacy data source rather than a modern data source in SSDT. You lose the ability to use the Query Editor and M if you do this, but in a lot of cases you probably won’t care. The 17.4 release of SSDT for Visual Studio 2015, released in December 2017, has exposed a property that allows you to create legacy data sources again easily. In Visual Studio, go to the Tools menu and select Options, and in the Options dialog go to Analysis Services Tabular/Data Import and check “Enable legacy data sources”:

image

When you do this, you’ll notice two new options when you right-click on Data Sources in the Tabular Model Explorer pane: Import From Data Source (Legacy) and Existing Connections (Legacy).

image

This gives you access to the Table Import wizard that was available in previous versions of Analysis Services Tabular, which not only allows you to enter your own SQL but also creates a legacy data source that in turn makes it easy to use your own SQL when creating partitions.

Creating A Partitioned Table In SSAS Tabular 2017 And SSDT Using M Functions

One of the coolest new features in SSAS Tabular 2017 and Azure Analysis Services is the integration of Power Query and M for data loading. Over the last year or so the Analysis Services team blog has posted a lot of fairly complex examples of how to use this functionality, but now that the latest release of SSDT has proper support for shared expressions I thought it would be a good idea to show a simple example of how to use it to create a partitioned table using M functions.

For this example I’ll be using the FactInternetSales fact table from the Adventure Works DW sample database, and the aim is to create a table in an SSAS Tabular project that has one partition for each year of data in FactInternetSales. Assuming that a new SSAS Tabular project has been created at the 1400 compatibility level with an integrated workspace:

image

…the first thing to do is to right-click on the Data Sources folder in the Tabular Model Explorer pane and select Import From Data Source:

image

This brings up the Get Data dialog:

image

Select SQL Server database and then click Connect. Enter the server name and database name in the SQL Server database dialog:

image

Choose how SSAS is to authenticate when it connects to the SQL Server database and click Connect:

image

Select the FactInternetSales table from the list of tables in the Adventure Works DW database:

image

This opens the Query Editor window; in it there is one query called FactInternetSales:

image

Here’s where it gets interesting. The first thing to do is to create a function that returns a filtered subset of the rows in the FactInternetSales table using the technique I blogged about here for Power BI. On the Query Editor menu bar, click Query/Parameters/New Parameter and create two new parameters called StartDate and EndDate that return the numbers 20010101 and 20011231. Here’s what they should look like:

image

image
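Incidentally, behind the scenes each parameter is just a named M expression with some metadata attached. The StartDate parameter, for example, will contain M something like the following sketch (the exact metadata the UI generates may vary):

20010101 meta [IsParameterQuery = true, Type = "Number", IsParameterQueryRequired = true]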

These parameters are going to be used to filter the OrderDateKey column on the FactInternetSales table. Do this by clicking on the down arrow on the column header of OrderDateKey then selecting Number Filters and then Between:

image

In the Filter Rows dialog use the StartDate parameter for the start of the filter range and the EndDate parameter for the end of the filter range, then click OK:

image

Because the OrderDateKey column contains dates stored as integers in YYYYMMDD format, the result is a table that only contains sales where the order date is in the year 2001. This table should not be loaded into SSAS though, so right-click on the FactInternetSales query in the Queries pane and make sure that Create New Table is not checked:

image

Next, on the same right-click menu, select Create Function:

image

In the Create Function dialog, name the new function GetFactData and then click OK:

image
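At this point the M code behind GetFactData will look something like the following sketch – the exact step names the UI generates may differ, and the data source reference assumes a server called localhost:

(StartDate as number, EndDate as number) =>
let
    // Reference the data source object defined in the model
    Source = #"SQL/localhost;Adventure Works DW",
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    // Filter OrderDateKey using the two function parameters;
    // this comparison folds back to SQL Server
    #"Filtered Rows" = Table.SelectRows(
        dbo_FactInternetSales,
        each [OrderDateKey] >= StartDate and [OrderDateKey] <= EndDate
    )
in
    #"Filtered Rows"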

The new GetFactData function will now be visible in the Queries pane; enter 20010101 for the StartDate parameter and 20011231 for the EndDate parameter and click Invoke:

image

This creates yet another new query called Invoked Function, which should be renamed Internet Sales:

image

Right-click on this query and make sure Create New Table is selected. Next, click the Import button on the toolbar to close the Query Editor and load the Internet Sales table into SSAS.

At this point the Tabular Model Explorer will show all of the queries created above listed under the Expressions folder, and a single table called Internet Sales with a single partition:

image

Next, right-click on the Internet Sales table in the Tables folder and select Partitions:

image

This opens the Partition Manager dialog. Rename the existing partition to Internet Sales 2001:

image

Note that the M query for this partition calls the GetFactData() function to get the rows from FactInternetSales where OrderDateKey is between 20010101 and 20011231:

let
    Source = GetFactData(20010101, 20011231)
in
    Source

Click the New button to create new partitions, one for each year of data in the FactInternetSales table. Each new partition will initially contain the same M code shown above and should be edited so that the query gets data for the appropriate year:

image
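For example, the M query for the 2002 partition would become:

let
    Source = GetFactData(20020101, 20021231)
in
    Source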

Click OK, and the end result is a table with one partition per year:

image

What’s the point of using M functions to return the data for a partition, rather than the traditional method of using a SQL query embedded in each partition? One reason to do this would be to make maintenance easier: if you need to do something like add a new column to a fact table, rather than editing lots of partitions you just need to edit the function and all the partitions will reflect that change. I can think of a few others, but I’ll save them for future blog posts…

SSAS 2016 Locking Improvements

I first became aware of the server-wide lock taken out by SSAS when processing finishes – and the issues that this can cause – from this blog post by Andrew Calvett back in 2009. More information on how locking works in SSAS can be found in chapter 26 of “Microsoft SQL Server 2008 Analysis Services Unleashed”, while the most comprehensive discussion of this topic can be found in this post by Jason Howell:
https://blogs.msdn.microsoft.com/jason_howell/2012/07/03/analysis-services-stops-accepting-new-connections-processing-commit-locks-hurt/

Over the years I’ve worked with several customers who have run into locking problems as a result of users querying while processing or synchronisation operations are taking place, so I was interested to read the following paragraph in the white paper on “Automated Partition Management For Analysis Services Tabular Models” that was published a few months ago:

Note that commit operations have been optimized considerably for tabular models in SQL Server 2016. This has caused noticeable improvements in locking and blocking for some customers with near-real time processing requirements. Database write-commit locks are required to safely complete tasks such as merging pending changes, persisting files to disk, clearing some cached state, deletion of old files, etc. In previous versions of Analysis Services, a server-level write commit lock was taken while most of these tasks were performed. With SQL Server 2016, the server-level locks are far more limited; they are only taken while producing the delta of transaction updates, and are then immediately released.

This is very good news, and in fact the improvements apply to SSAS Multidimensional 2016 as well as SSAS Tabular 2016. The ever-helpful Akshai Mirchandani of the dev team has given me more details on the changes, so here’s a summary of what happens during a commit operation and what’s new in SSAS 2016:

  • First of all, a database read-commit lock is taken to analyse all the pending changes.
  • Next a database write-commit lock is taken so that the transaction can be committed safely. This is the lock that can be blocked by long-running queries, and this is where the ForceCommitTimeout property comes into play with the result that these long-running queries may get cancelled.
  • This lock is held while the pending changes are merged together.
  • At this point SSAS is ready to do the commit, and this is where it takes a server-level write-commit lock. It is also the point where the improvements in SSAS 2016 have been made.
    • In previous versions SSAS would update the master.vmp file in place, holding the server-level write-commit lock while that happened and while other potentially time-consuming tasks, such as clearing cached state and deleting old files, took place. This could in some cases result in the server-level write-commit lock being held for an extended period.
    • In SSAS 2016, by contrast, a delta of all the transaction updates is written to a .txn file, after which the server-level write-commit lock is released. The time-consuming tasks mentioned in the previous bullet still take place, but only after the lock has been released. This means the server-level write-commit lock is now held for a very short amount of time and, what’s more, that amount of time is quite consistent.
  • Finally, all remaining locks such as the database write-commit lock are released.

I haven’t had a chance to test these changes in a production system yet, but it sounds like anyone who needs to process or synchronise regularly throughout the day will benefit from upgrading to SSAS 2016.

More Detail On Detail Rows Expressions In SSAS Tabular V.Next

My second-favourite feature in SSAS Tabular v.next, after Power Query integration, is the Detail Rows expression property for measures – it not only brings drillthrough on measures to Tabular, it also means that we can define meaningful drillthrough on any measure, no matter how it is calculated. There’s a basic description of the functionality in this blog post but I thought it would be useful to walk through a simple example showing how it can be used.

Consider a simple SSAS Tabular model with two tables in it. First, a table containing sales data called Sales:

image

Second, a date table called Date:

image

[Note: dates are in DD/MM/YYYY format]

There are two measures with the following definitions:

Total Sales:=SUM(Sales[Sales])

Total YTD Sales:=TOTALYTD([Total Sales], 'Date'[Date])

The measure Total YTD Sales gives the running total of sales from the beginning of the current year. I know it doesn’t follow Marco and Alberto’s best-practice pattern but I wanted to keep things simple on the DAX front…
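For reference, TOTALYTD() is really just shorthand for a CALCULATE() over DATESYTD(), so the expanded pattern referred to above would be something like:

Total YTD Sales:=CALCULATE([Total Sales], DATESYTD('Date'[Date]))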

Browsing the model in an Excel PivotTable gives the following result:

image

At this point if you double-click on cell C6 in the PivotTable you get the following not-very-useful result on a new worksheet:

image

What drillthrough does in SSAS Multidimensional, and what the new Detail Rows Expression property in SSAS Tabular v.next does, is allow an end user to see the detail-level data (usually the rows in the fact table) that was aggregated to give the value the user clicked on in the original PivotTable.

For the Total Sales measure, this property can be set with a DAX expression that returns a table something like this:

SELECTCOLUMNS(
	'Sales', 
	"Date", 'Sales'[Date], 
	"Sales Value", [Total Sales]
)

[For more details on the SelectColumns() function, see here]

image

Now when you click on cell C6 in the PivotTable you get the result of the table expression above filtered by the context of the cell you’ve clicked on – in this case, the date 4/1/2017. What appears in the new worksheet is data from the row from the Sales table for 4/1/2017:

image

This is already better than SSAS Multidimensional drillthrough because as a developer you have control over the column headers displayed in this table (in Multidimensional drillthrough the column names come out in a ridiculously user-unfriendly format) and the order that they are displayed in (which is equally painful to control in Multidimensional).

Now, consider cell D6 in the PivotTable, the cell that shows the year-to-date sales amount for 4/1/2017. If a user double-clicked on this cell they would expect to see all of the rows from the Sales table from 1/1/2017 to 4/1/2017 – the rows whose sales have been aggregated to give the YTD total.

This can be achieved using the following expression in the Detail Rows Expression for the Total YTD Sales measure:

CALCULATETABLE(
	SELECTCOLUMNS(
		'Sales', 
		"Date", 'Sales'[Date], 
		"Sales Value", [Total Sales]
	), 
	DATESYTD('Date'[Date])
)

image

Now, double-clicking on cell D6 in the PivotTable gives the following table:

image

It may not look all that impressive, but there are few words that can describe how happy this makes me feel. This is exactly what is not possible with drillthrough in SSAS Multidimensional, and why drillthrough in Multidimensional has always been so frustrating to use. It’s great to see the feature properly implemented in Tabular.
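As an aside, it should also be possible to test a Detail Rows expression without going through Excel by running a DAX query against the model – a sketch, assuming the DETAILROWS() function is available in this CTP, using CALCULATETABLE() to reproduce the filter context of cell D6:

EVALUATE
CALCULATETABLE(
    // Evaluates the Detail Rows expression defined on the measure
    DETAILROWS([Total YTD Sales]),
    // Reproduce the filter context of the cell for 4 January 2017
    'Date'[Date] = DATE(2017, 1, 4)
)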

First Thoughts On The Integration Of Power Query/M Into Analysis Services Tabular

Last Friday’s big news was the release of the first CTP for Analysis Services v.next. Among several major new pieces of functionality (Ragged hierarchies! Drillthrough that works properly, even for calculations! Table-level security!) probably the biggest is the integration of Power Query/M into Analysis Services. As you can probably guess, I’m incredibly pleased that my two favourite technologies have got together. The technical details are given in this blog post, which I suggest you read if you haven’t done so already, but what I think is missing is an explanation of why this is so important and what kind of opportunities it opens up – hence this post. Of course this is just my take on the subject and not what Microsoft may actually be thinking; it’s also very early days, so as the functionality develops and I have more chance to think about it, my opinions may change. If you have any ideas on this subject I would be interested to hear them, so please leave a comment!

Why this had to happen: Power BI

There is an obvious reason why Microsoft decided to integrate Power Query/M into SSAS, and that is because it needs to support the conversion of Power BI models into Analysis Services Tabular models. There are two scenarios where this will be necessary.

The first is the ability to convert a Power BI model into an Azure Analysis Services Tabular model (listed as ‘planned’ here), something that will be a key selling point for Azure Analysis Services when it releases. The engine behind Power BI is essentially the same as the one used in Analysis Services so migrating the data model should be straightforward, but since Power BI uses Power Query/M to load data then a migrated Azure Analysis Services model will also have to use Power Query/M.

The second scenario is similar to the first. We now know that on-premises Power BI will be delivered through Reporting Services v.next, and it’s reasonable to assume Reporting Services will need a database engine to store the data for published Power BI reports. That engine will have to be an Analysis Services instance of some kind (either standalone or running in-process inside Reporting Services) and again for that to work Analysis Services will have to support the same data access mechanisms as Power BI.

Better support for a larger number of data sources

I’ve just argued why Microsoft was obliged to include this functionality in SSAS v.next but in fact there are many positive reasons for doing this too. The most obvious one is to do with support for more data sources. At the moment SSAS Tabular supports a pretty good range of data sources, but the world of BI is getting more and more diverse and in order to stay relevant SSAS needs to support far more than it does today. By using Power Query/M as its data access mechanism, SSAS v.next will immediately support a much larger number of data sources and this number is going to keep on growing: any investment that Microsoft or third parties make for Power BI in this area will also benefit SSAS. Also, because Power Query/M can query and fold to more than just relational databases, I suspect that in the future this will allow for DirectQuery connections to many of these non-relational data sources too.

Different data sources for partitions in the same table

Another benefit of this change is that we’ll have a lot more flexibility with partitioning tables in an SSAS Tabular model. As the blog post says:

As long as a partition’s M query adheres to the column mappings of the table, you are free to perform any transformations and pull in data from any data source defined in the model.

In SSAS 2016 the partitions in a table all have to get data from the same data source whereas in v.next we’ll be able to get data from different data sources in different partitions, and this opens up some interesting new possibilities. For example, I can imagine a simple budgeting application where the partitions in a table get data from different Excel workbooks stored in OneDrive for Business, and where each partition gets processed automatically when changes are saved to one of these workbooks.
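To make that example concrete, here’s a sketch of what one such partition’s M query might look like – the OneDrive for Business URL, workbook name and sheet name are all made up, and in practice the connection details would live in a centralised data source object rather than being embedded in the partition like this:

let
    // Read one budget workbook from OneDrive for Business
    // (URL and file name are hypothetical)
    Source = Excel.Workbook(
        Web.Contents("https://contoso-my.sharepoint.com/personal/joe/Documents/Budget2017.xlsx"),
        null,
        true
    ),
    // Navigate to a sheet called Budget and promote the header row
    BudgetSheet = Source{[Item = "Budget", Kind = "Sheet"]}[Data],
    PromotedHeaders = Table.PromoteHeaders(BudgetSheet, [PromoteAllScalars = true])
in
    PromotedHeaders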

Does this replace SSIS and my data warehouse? 

The short answer is no. Power Query/M is not a full-featured ETL tool and I don’t think it ever will be; it certainly does not have the kind of functionality needed to perform enterprise-level ETL right now. My view is that Microsoft have built Power Query/M into SSAS for the reasons above and not to encourage enterprise SSAS users to do their own quick-and-dirty ETL when loading data (although there is a risk that that will happen anyway). That said, I think the dividing line between corporate and self-service BI will become increasingly blurred over the next few years as the Microsoft BI stack develops, and we’ll see Analysis Services being used in self-service scenarios as well as the more traditional corporate ones.

Centralised data source objects

One last thing to point out is that the way SSAS v.next makes a distinction between data sources and other queries is very interesting. In Power BI and Power Query it’s easy to end up with data source connection information duplicated across multiple queries unless you know what you’re doing, and this can cause no end of problems later on in a project. As far as I can see, in SSAS v.next a “data source object” is an M query that only contains the connection to external data, while all other queries have to reference a data source to be able to access external data. This means, as the blog post says:

Referring to data source objects helps to centralize data source settings for multiple queries and simplifies deployments and maintenance if data source definitions must be updated later on. When updating a data source definition, all M queries that refer to it automatically use the new settings.

I wonder whether this concept is coming to Power BI and Power Query at some point? I hope so – it makes a lot of sense.

Nested Variables In DAX

Last week, at the SQL Server Days conference in Belgium, Kasper mentioned in his presentation that it was possible to define variables inside variables in DAX. So, for example, you could define a measure like so:

MyMeasure =
var Outer1 =
    var Inner1 = 1
    var Inner2 = 2
    return Inner1 + Inner2
var Outer2 = 3
return Outer1 + Outer2

This measure returns 6 as you might expect:

image

There aren’t any performance benefits to doing this, although of course it helps with code readability and organisation (thanks to Marius for confirming this).
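That said, nesting can help when an intermediate value is only needed by one outer variable. Here’s a contrived sketch – assuming a model with a [Total Sales] measure and a 'Date' table – where PriorYearDates is scoped to just the PreviousSales calculation rather than cluttering the top level of the measure:

Sales Growth =
var CurrentSales = [Total Sales]
var PreviousSales =
    // PriorYearDates is only visible inside this inner var/return block
    var PriorYearDates = SAMEPERIODLASTYEAR ( 'Date'[Date] )
    return CALCULATE ( [Total Sales], PriorYearDates )
return DIVIDE ( CurrentSales - PreviousSales, PreviousSales )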

With my newly rekindled love of DAX I thought this was quite interesting. I’m not really sure why though, given that it’s not particularly useful; I think Matt might be right:

image