SQLBits Power BI And Analysis Services Videos Now Free To View Online

SQLBits is one of the best Microsoft data platform conferences around, and last week’s event in Manchester was particularly good. As usual, videos of almost all of the sessions are available for everyone to view for free online (no registration required) here:

https://sqlbits.com/content/Event18

There were lots of Power BI and Analysis Services related sessions, so I thought I’d call out a few:

  • If you’re interested in the new calculation groups feature in SSAS 2019 that I blogged about last week, you should definitely watch Christian Wade’s two-part session here and here (part two has all the juicy details in), which also gives some details about other upcoming features such as XMLA endpoints. Kasper’s session here covers a lot of the same topics.
  • There’s more insight into Microsoft’s Power BI roadmap and thinking in the Q&A session with Christian, Kasper and Adam here.
  • Marco and Alberto always do great sessions, and Alberto’s session on Aggregations here and Marco’s session on many-to-many relationships here are up to their usual high standards.
  • My session on Power BI Dataflows here sums up my current thoughts about them.

Of course there’s lots more there (more than I have had a chance to watch), so let me know if there are other sessions that are good!

SSAS Tabular 2019, Calculation Groups And Migration From SSAS Multidimensional

With the release of CTP 2.3 of SQL Server 2019 today there was big news for Analysis Services Tabular developers: Calculation Groups. You can read all about them in detail in this blog post:

https://blogs.msdn.microsoft.com/analysisservices/2019/03/01/whats-new-for-sql-server-2019-analysis-services-ctp-2-3/

In my opinion this is the most important new feature in DAX since… well, forever. It allows you to create a new type of calculation – which in most cases will be a time intelligence calculation, like a year-to-date total or previous-period growth – that can be applied to multiple measures; basically the same thing that we have been doing in SSAS Multidimensional for years with the time utility/shell/date tool dimension technique. It’s certainly going to solve a lot of problems for a lot of SSAS Tabular implementations, many of which currently need hundreds or even thousands of measures, one for every combination of base measure and calculation type.
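To give a flavour of what this looks like, here’s a minimal sketch of the DAX expressions for two calculation items in a hypothetical “Time Intelligence” calculation group – the ‘Date’[Date] column name is an assumption on my part, and the blog post above has the real syntax and complete examples:

//Calculation item "YTD": applies a year-to-date to whatever measure is in context
//'Date'[Date] is a hypothetical date column in the model
CALCULATE(SELECTEDMEASURE(), DATESYTD('Date'[Date]))

//Calculation item "PY": the same measure for the equivalent period in the previous year
CALCULATE(SELECTEDMEASURE(), SAMEPERIODLASTYEAR('Date'[Date]))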

I’m not going to repeat any of the detailed technical information in the blog post here, though. Instead the point I want to make is that this is very big news for SSAS Multidimensional users too. In the past couple of years many people who have existing SSAS Multidimensional implementations have thought about migrating to SSAS Tabular so they can take advantage of its new features or move to the cloud, and indeed many of them have already migrated successfully. However, up to now, the biggest blocker for those wanting to migrate from Multidimensional to Tabular has been the fact that some complex calculations that can be expressed in MDX cannot be recreated (or recreated easily and efficiently) in DAX, because DAX has not had an equivalent of calculated members not on the Measures dimension or the MDX SCOPE statement.

Calculation groups do not remove this roadblock completely, but they do remove the roadblock for a large group of existing SSAS Multidimensional users whose only complex calculation requirement is a time utility/shell/date tool dimension. As a result these SSAS Multidimensional users will now be able to migrate to SSAS Tabular 2019, Azure Analysis Services or Power BI if they want to. Only those people who have more exotic uses for calculated members not on the Measures dimension (which are not very common at all) and those who use SCOPE statements (a larger group – many people working with financial data use SCOPE statements heavily) will find that Multidimensional is still the correct platform for them.
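If you haven’t seen the technique before, here’s a rough sketch of the kind of MDX a time utility dimension involves – a calculated member on a dimension other than Measures, plus a SCOPE statement – using a hypothetical [Date Tool] dimension and hypothetical member names, purely to illustrate the pattern:

CREATE MEMBER CURRENTCUBE.[Date Tool].[Calculation].[Year To Date] AS NULL;

SCOPE([Date Tool].[Calculation].[Year To Date]);
    THIS = AGGREGATE(
        YTD([Date].[Calendar].CURRENTMEMBER),
        [Date Tool].[Calculation].[Current Period]);
END SCOPE;

It’s exactly this kind of logic that a calculation group can now express in DAX.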

20 Years Of Analysis Services

Today marks the 20th birthday of Analysis Services: it was released (as OLAP Services) on November 16th 1998. There’s a celebratory blog post and video over on the Power BI blog here:

https://powerbi.microsoft.com/en-us/blog/analysis-services-is-20-years-old/

I’m one of the interviewees on the video, and in it I tell the story of my involvement with Analysis Services and MDX – I’ve been working with it almost full-time for a little over 20 years, right from the first betas. I’ve enjoyed every minute of it, and I’d like to take the opportunity here to thank all the people who have helped me over the years at IMS Health, Microsoft Consulting Switzerland, in the SSAS and Power BI community, and in my career as an independent consultant and trainer. If you had told me in 1998 that I would still be making a living with this product (even still writing some MDX) I’m not sure I would have believed you.

Finally, if your Bingling skills have failed you, here’s the OLAP Jokes post that is mentioned in the birthday video:

https://blog.crossjoin.co.uk/2005/08/25/olap-jokes/

It was for many years the most popular post on my blog. I should point out that I didn’t write all those jokes: my colleagues at the time, Jon Axon and Colin Hardie, deserve some of the blame too.

Azure Data Studio Should Support Analysis Services And Power BI Premium Capacities

I’m at the PASS Summit this week, and in this morning’s keynote there was a demo of the newly-released Azure Data Studio – a modern, cross-platform tool for managing and querying SQL Server, Azure SQL Database and other Azure data services (it’s carefully described as “complementary to” SQL Server Management Studio rather than a replacement for it; this blog post has a detailed discussion of this question).

This video provides a good, short overview of what it is.

I think it’s pretty cool, BUT… it doesn’t support Analysis Services. I had a moan about this and the generally poor state of Analysis Services tooling on Twitter, was invited to meet some of the developers and was told that if enough people request Analysis Services support it might happen.

What would support for Analysis Services involve? The following springs to mind:

  • I’d like to be able to connect to and manage Analysis Services Multidimensional and Tabular on-premises and Azure Analysis Services; if that’s too ambitious I could settle for supporting only Analysis Services Tabular 2016+ and Azure Analysis Services.
  • Since we will soon be able to connect to a Power BI Premium capacity as if it was an Analysis Services instance via XMLA endpoints, I would want to be able to connect to Power BI Premium capacity too.
  • I’d want to be able to run DAX and M queries, and ideally MDX queries too.
  • I would also want to be able to work with ASSL and TMSL for scripting and editing objects.
  • Azure Data Studio has a Profiler extension that works on xEvents; it would be great if that worked with Analysis Services xEvents too.
  • DAX and M Jupyter notebooks would be really useful!
  • It would make sense for some of the functionality of existing tools like DAX Studio and BISM Normalizer to be turned into extensions.

If you want to see Analysis Services support in Azure Data Studio, go to the following issue on the Azure Data Studio GitHub repository:

https://github.com/Microsoft/azuredatastudio/issues/1026

…and click the thumbs-up icon on the first post.


Let’s make our voices heard!

Finding All Selected Items In An Excel Slicer Connected To SSAS, Power BI Or the Excel Data Model Using Dynamic Arrays

The big news in the world of Excel right now is the introduction of dynamic arrays. They’re only available in the Office 365 click-to-run version of Excel and, at the time of writing, only available to people on the Office Insiders programme, but eventually they’ll be available to anyone running Excel for Office 365 on their desktop. There are already lots of blog posts about them, including this overview by Jon Acampora, and you probably also want to download Bill Jelen’s detailed mini-book on them here, which is free for the rest of 2018. Now, I’m not an Excel expert by any stretch of the imagination, but I’m as excited as anyone else about them because they will be incredibly useful for anyone building reports using Excel cube functions against Analysis Services, the Excel Data Model/Power Pivot and Power BI. Bill Jelen’s book has a short section on this subject, but the possibilities are limitless…

Here’s one example of how they can be used. A while ago I blogged about how to use a regular array formula and the TextJoin() Excel function to get all the selected items from a slicer. Dynamic arrays make this problem much easier to solve. Take the following table loaded into the Excel Data Model:

[Image: the source table]

Now, say you have a PivotTable built from this and a slicer (called Slicer_Fruit) connected to it:

[Image: the PivotTable and the Slicer_Fruit slicer]

It’s possible to use the CubeSet() function to get the set of selected items in a slicer using the following formula:

=CUBESET("ThisWorkbookDataModel",Slicer_Fruit,"Slicer Set")

Assuming this formula is in cell H1, you can then get the number of items in this set using CubeSetCount():

=CUBESETCOUNT($H$1)

Assuming this is in cell H2, all you need to do to get a comma-delimited list of all the selected items in the slicer via this set is:

=
 TEXTJOIN(
  ", ", 
  TRUE, 
  CUBERANKEDMEMBER(
   "ThisWorkbookDataModel",
   $H$1, 
   SEQUENCE($H$2,1)
  )
 )

[Image: the formula returning a comma-delimited list of the selected items]

Here it is in action:

[Animation: the list updating as the slicer selection changes]

It works as follows:

  • The new Sequence() function is used to create a dynamic array of numbers from one to the number returned by the CubeSetCount() function.
  • The CubeRankedMember() function gets the name of each item in the set, using the numbers returned by Sequence().
  • TextJoin() then concatenates all of the names returned by CubeRankedMember() into a single comma-delimited list.

You can download the sample workbook here (remember it will only work if you have a version of Excel installed that includes dynamic arrays!).
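Incidentally, you don’t actually need the two helper cells: the same result can be returned by a single formula that nests the CubeSet() and CubeSetCount() calls directly – a sketch, equivalent to the two-cell version above:

=
 TEXTJOIN(
  ", ",
  TRUE,
  CUBERANKEDMEMBER(
   "ThisWorkbookDataModel",
   CUBESET("ThisWorkbookDataModel",Slicer_Fruit,"Slicer Set"),
   SEQUENCE(
    CUBESETCOUNT(
     CUBESET("ThisWorkbookDataModel",Slicer_Fruit,"Slicer Set")
    )
   )
  )
 )

The downside is that the set is defined twice in the formula, so in a larger workbook the two-cell approach may be preferable.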

Performance Problems With MDX Calculated Measures That Return Constants In Analysis Services Multidimensional

Recently I was working on an MDX calculation that calculated a value at a low granularity and aggregated the result – basically what I described in my blog post here. Here’s a simplified version of the calculation written on a cube built from Adventure Works data running on my local SSAS 2017 MD instance:

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*0.08;
    END SCOPE;
END SCOPE;

All it does is take the value of the Sales Amount measure at the lowest granularities of the Customer, Date and Product dimensions, multiply it by 0.08 to find a tax value, and because [Tax Amount] is a real, non-calculated measure, the result of the calculation aggregates up through the cube. [I know that I don’t have to aggregate the result of this specific calculation, but remember that this is a simplified example – in the real case I did have to write the calculation using SCOPE statements – and anyway the best way of handling a basic multiplication like this would be with a measure expression.]

The performance was sub-second for my test query and I was happy, but then I realised that the same tax rate was being used in other calculations and might change in the future, so I thought I would store the value 0.08 in a calculated measure:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
0.08;

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

Surely a simple change? But no, as soon as I did this my query ran for several minutes and memory usage went through the roof until the query was automatically cancelled:

[Image: memory usage climbing until the query is cancelled]

Clearly the SSAS MD Formula Engine could optimise the version with the hard-coded constant value but could not optimise the version with the calculated measure. There was nothing in Profiler to indicate the calculation was being evaluated in cell-by-cell mode though.

So I tried another variation:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
IIF([Measures].[Sales Amount]=0, NULL, 0.08);

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

This time the memory usage was completely flat, but the query was still so slow that it had to be cancelled. Next, I thought I’d try setting the NON_EMPTY_BEHAVIOR property:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
0.08
, NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount];

SCOPE([Measures].[Tax Amount]);
    SCOPE(
	[Customer].[Customer].[Customer].MEMBERS, 
	[Date].[Date].[Date].MEMBERS, 
	[Product].[Product].[Product].MEMBERS);
        THIS = [Measures].[Sales Amount]*[Measures].[Tax Rate];
    END SCOPE;
END SCOPE;

And guess what? The query went back to being sub-second. Strange. Now the NON_EMPTY_BEHAVIOR property is very dangerous and this is a clear example of how it should not be set – the expression 0.08 is never going to return an empty value, regardless of the value of the [Sales Amount] measure. As a result I would be very wary of using this trick in production in case it ended up returning inconsistent results. It’s also worth noting that the following calculation, which is a correct use of NON_EMPTY_BEHAVIOR, is as slow as the other examples above:

CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS  
IIF([Measures].[Sales Amount]=0, NULL, 0.08)
, NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount];

Finally, I created a dummy table in my DSV with one row and one column to hold the 0.08 value, and then created a real, non-calculated measure from that column. When I used this measure in my calculation, the performance of my test query was also sub-second.

So, to sum up, it looks like wrapping constant values in calculated measures is a bad idea, at least in cases like this; either using the constants directly in your MDX calculations, or creating a table and a real, non-calculated measure specifically to hold them, is better for performance.

A Quick Look At Some Power BI And SSAS-Related Products And Books

I don’t like writing reviews of books or products here on my blog for a couple of reasons, the main one being that I don’t usually have the time to read/test/understand something properly so I can write a thorough review. That said, I do get sent a lot of free books and evaluation licences for products that deserve a wider audience, so I thought I would write a post rounding up some of them along with a few thoughts of my own.

Custom Visuals

One of the most interesting questions related to Power BI is whether third-party software companies will be able to build businesses selling extensions to it. The most obvious way that Power BI can be extended is through custom visuals and there are several companies that have paid-for (as opposed to free) custom visuals. Zebra BI is one such company and I’ve been really impressed by what they have produced for visualising financial data:

[Image: Zebra BI visuals for financial data]

I also saw recently that OKViz (part of the Marco and Alberto/SQLBI family) now have a paid-for version of their excellent Smart Filter visual with some premium features – see here for more details; similarly, new features in Klaus Birringer’s Ultimate Waterfall and Ultimate Decomposition Tree visuals are only available in the paid version.

I know many Power BI users who use custom visuals have suffered from various bugs and limitations in functionality over the past few years, and I think buying commercial custom visuals rather than relying on free equivalents is one way of dealing with reliability and support problems. It’s certainly in Microsoft’s interests to have a thriving partner community in this space given that flashy visuals are a major selling point of the product. But will Power BI users want to pay for visuals when so much is available out of the box for free, especially when the cost of the visuals seems relatively high when compared to the overall cost of Power BI? I guess we’ll see.

Custom Connectors

A lot of what I’ve just said about custom visuals also applies to custom connectors, although custom connectors are a lot less mature (at the time of writing, support for custom connectors in the on-premises gateway is still in preview). However, I was pleased to see this announcement from CData Software that they now have over 100 custom connectors available for Power BI. It looks like what they have done is wrap their existing ODBC providers, and as a result some of their connectors are for sources that are already available in Power BI, but even so there are a lot of new data sources here.

Incidentally, I got very, very excited when I realised that the CData connectors for Excel and Excel Online supported DirectQuery mode as well as import mode. Why, I hear you ask? Well, just think about a planning/budgeting solution where users can enter data into an Excel spreadsheet and, when the numbers change in Excel, the numbers change in Power BI too; think also about how this could work with Composite Models. I tried this with CData’s Excel on-premises connector and unfortunately it returned errors when the source Excel worksheet was open; I did get it to work with the Excel Online connector but it was painfully slow, even with a small amount of data. If I can get it to work better (and I may be missing some optimisations within the connector) I’ll blog about it.

Books

One of the few Power BI-related books that have been published recently is Phil Seamark’s “Beginning DAX with Power BI”. He was kind enough to send me a review copy; it’s a good introduction to the subject and I particularly like the way he introduces DAX variables early on. Definitely worth a look if you’re just starting to learn DAX.

I was also sent a copy of a slightly older book, David Parker’s “Mastering Data Visualization with Visio 2016”. David knows pretty much all there is to know about using Visio for BI (his blog is great) and while this book doesn’t cover the most exciting new development in this area – the Visio custom visual for Power BI – if you want to learn all the advanced features of Visio that you could take advantage of in Power BI then this is the book to get.

Other Products

I’ve been a big fan of SentryOne’s SSAS monitoring tool, BI Sentry, for years now but up until recently it only supported SSAS Multidimensional. It now supports SSAS Tabular too (details here), and it looks like SentryOne have done a great job of adapting it to the specific needs of the Tabular engine. I always advise my SSAS customers to invest in some kind of monitoring solution because it makes the job of detecting and solving issues like poor query performance so much easier, and to be honest BI Sentry is better than anything you would be able to build yourself.

Moving onto Power BI, if you need to generate documentation for your Power BI datasets and reports check out Power BI Documenter; the August release looks like it has some cool new features. Alternatively the latest release of Power BI Helper also allows you to generate documentation as well as lots of other useful stuff.

Something that hasn’t been properly released yet, but will be incredibly useful when it is, is MAQ Software’s Application Lifecycle Management Toolkit for Power BI. Closely related to BISM Normalizer, it will allow you to compare two Power BI datasets, merge changes, deploy only parts of a dataset (for example individual measures), and deploy to multiple datasets – all of which are things Power BI developers have been crying out for.

Last of all, the guys at DevScope have a new(ish) product out, Power BI Robots, which automatically takes screenshots of Power BI reports and dashboards and can deliver them to various destinations such as email addresses and SharePoint. I haven’t looked at it yet but it seems like it could have a lot of interesting uses.
