Monitoring SSAS Multidimensional Non Empty Filtering Using Profiler, Part 2

In part 1 of this series I introduced all of the different types of non empty filtering that can occur in Analysis Services and MDX. In this post I’ll show you how to monitor each of these types of non empty filtering in Profiler, taking the queries from part 1 as examples.

Profiler events

The three Profiler events we’re going to be most concerned with here are:

  • Calculate Non Empty Begin, which is raised when the engine starts a non empty filter
  • Calculate Non Empty Current, which will be raised one or more times when the non empty filter is in progress and which will give us more detail about what’s happening
  • Calculate Non Empty End, which is raised when the engine finishes evaluating a non empty filter

I’ve also included the Query Begin/End events in my traces just to show when query execution starts and finishes. All the queries in this post will be run on a warm cache; although the values used for non empty filtering can be cached, SSAS can’t cache the output of a non empty filter (unless you do something like this) so for our purposes whether the cache is warm or cold is irrelevant.

image

NON EMPTY and NONEMPTY()

In the simplest cases SSAS treats the NON EMPTY statement and the NONEMPTY() function the same, so let’s look again at the following query from my previous post:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
NON EMPTY
[Product].[Category].[Category].MEMBERS
ON 1
FROM
[Adventure Works]

image

Here’s what our Profiler trace shows us when this query is run:

image

There are several things to notice here:

  • The Duration column tells us how long, in ms, both the query took to run (on the Query End line) and the non empty filter took to evaluate (on the Calculate Non Empty End line).
  • The EventSubclass column, for the Calculate Non Empty Current events, shows the different stages of evaluation. The possible values here are:
    • 1 – cell values are evaluated for the filtering
    • 2 – calculated members are evaluated (this is no longer used from SSAS 2008 on)
    • 3 – the tuples are rendered in the resultset
  • The IntegerData column shows what type of non empty filtering is taking place. The possible values here are:
    • 1 – all non empty filtering with NON EMPTY and NONEMPTY(), using a fast algorithm (as shown in the screenshot above)
    • 2 – EXISTING operations using a fast algorithm. This is a bit misleading though: due to an internal bug this event will not fire for most uses of the EXISTING statement, but will fire for some operations related to regular autoexists.
    • 3 – autoexists operations using a fast algorithm
    • 11 – all non empty filtering with NON EMPTY and NONEMPTY(), using a slower algorithm for when complex calculations need to be evaluated
    • 12 – as (2) but with the slower algorithm (theoretically possible but should never occur in practice)
    • 13 – as (3) but with the slower algorithm (again, theoretically possible but should never occur in practice)
  • The ProgressTotal column for Calculate Non Empty Current events with EventSubclass 1 (as with the Serialize Results Current event) should return the number of tuples evaluated by the non empty filter. If the number of tuples is greater than 1000 you’ll see multiple events, one for each 1000 tuples, with the last event showing the overall total. Unfortunately there’s a bug here that means it returns one less than the actual number of tuples evaluated. What’s more, in most cases the non empty filter operations in the SSAS engine are difficult to match to whatever’s going on in the query, so it’s only really useful as a relative measure of how expensive the operation is.

Putting this all together, for this query we can see that there was one non empty filter operation, it was using the fast NON EMPTY algorithm, the ProgressTotal column shows 3 tuples were returned (ie the three Categories on rows) and it took 1ms.

Autoexists

Now let’s look at the autoexists query from the previous post in this series:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
[Product].[Category].[Category].MEMBERS
*
[Product].[Color].[Color].MEMBERS
ON 1
FROM
[Adventure Works]

image

Here’s what Profiler shows:

image

Even with what looks like a single autoexists operation there are three sets of Non Empty events here, two of which have IntegerData 3 (for autoexists) and one with IntegerData 2 (for EXISTING which, as noted above, is to be expected here with autoexists). The ProgressTotal column returns 26 for a query that returns 24 rows, which I guess is near enough correct to be useful.

EXISTS() And EXISTING

As you might expect, the EXISTS() function produces a result very similar to a straightforward autoexists. Taking the EXISTS() query from my previous post:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
EXISTS(
[Product].[Category].[Category].MEMBERS
, {[Product].[Color].&[White]})
ON 1
FROM
[Adventure Works]

image

You can see there’s just one Non Empty Begin/End event pair generated, with IntegerData 3, and ProgressTotal returns 0, which (remembering the off-by-one bug mentioned earlier) means that one tuple is returned:

image

Whereas the use of EXISTING inside a calculated measure, like so:

WITH
MEMBER MEASURES.COLOURCOUNT AS
COUNT(EXISTING [Product].[Color].[Color].MEMBERS)
SELECT 
{MEASURES.COLOURCOUNT} 
ON 0,
[Product].[Category].[Category].MEMBERS
ON 1
FROM
[Adventure Works]

image

causes multiple Non Empty Begin/End pairs, very probably one for each member on the hierarchy, judging from the number of events and the values in the ProgressTotal column (the ProgressTotal values for the second, third, fourth and fifth non empty filters tally with the four values returned by the calculated measure; I don’t know what the first non empty filter is doing):

image

The WHERE Clause And Subselects

Weirdly enough, the presence of a WHERE clause or a subselect in a query also triggers Non Empty Begin/End events [Subselects are something that a lot of SSAS developers get confused by, especially when it comes to the differences between a subselect and the WHERE clause – I recommend watching this short video if you’re unsure of what they do].

The following query with a WHERE clause generates a single Non Empty Begin/End event pair:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
[Product].[Color].[Color].MEMBERS
ON 1
FROM
[Adventure Works]
WHERE([Customer].[Total Children].&[5])

image

image

Probably the only value in knowing that this happens is that you can ignore it when you see it.

A query with a subselect instead of a WHERE clause, while it returns exactly the same results in this case, produces different activity in Profiler:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
[Product].[Color].[Color].MEMBERS
ON 1
FROM
(SELECT {[Customer].[Total Children].&[5]} ON 0
FROM
[Adventure Works])

image

image

Again, if you have a subselect in your query (they are very common in MDX generated by SSRS and Excel) you should ignore this activity. In both cases you’ll note that the IntegerData column shows 3 for autoexists and ProgressTotal shows 0.

Summary

What’s clear from these examples is that trying to relate what’s going on in the query to what you see in Profiler is quite tricky even for seemingly simple queries; for most real-world queries it would be almost impossible to do so with total confidence. That said, when I’m tuning queries I usually comment out large parts of the code to try to isolate problems, thus creating much simpler queries, and I hope the value of this post will lie in you being able to spot similar patterns in Profiler to the ones I show here when you do the same thing. In part 3 of this series I’ll show you some practical examples of how all this information can help you tune your own queries.

Thoughts On The Power BI Announcements At The MS Data Insights Summit

I’m not at the Microsoft Data Insights Summit, unfortunately, but I watched the live stream of the keynote today and was duly overwhelmed by the new features announced. I’m not going to summarise what was said because there’s already a post on the official Power BI blog that does that here:

https://powerbi.microsoft.com/en-us/blog//over-5-million-subscribers-are-embracing-power-bi-for-modern-business-intelligence/

…but I wanted to pick out two topics for more in-depth discussion.

Query Power BI Datasets from Excel

For me, by far, the biggest announcement made today was that by the end of this month we’ll be able to connect an Excel PivotTable up to a Power BI dataset and query it as if it was an Analysis Services cube or a local Power Pivot Data Model. In my opinion this is a true killer feature. For the last few years my customers have asked me when MS was going to release SSAS in the cloud and I’ve always replied that Power BI is SSAS in the cloud – it’s just tightly coupled with a front-end right now. That makes things simple in some ways but it also causes problems:

  • You have to build your data model and reports all in the same file, which is a pain because when you want to update one you need to deploy both
  • Other report authors who want to use your model for their reports need to get hold of your .pbix file and edit it
  • Everyone has to use Power BI to build reports on data stored in Power BI

This move solves these problems. It makes manageability much easier: you can build a Data Model, publish it, then you and other developers can build reports on it separately. What’s more, it means that all those users who prefer to build their reports in Excel – either because it’s the tool they are most comfortable with or because it’s the best option for the reports they want to build (Excel has a lot of great functionality that Power BI reports don’t have right now and will probably never have) – can do so without having to copy the data into Excel.

Even better, as Amir hinted, this isn’t going to work with just Excel. If this works the way I assume it works you’ll be able to use all the other MDX-capable front-end tools out there, like SSRS or Pyramid, to query your Power BI data, meaning that you have even more choices for consuming data held in Power BI.

SandDance: A symbol of how MS is finally exploiting all of its assets

SandDance is undoubtedly a cool data visualisation tool and will be great for creating ‘wow’ demos. It’s also not, as the hype is suggesting, something new – I saw it on the Microsoft Research site at least two years ago. What is really important about SandDance is that it shows off another area that Microsoft has got right with Power BI: it has created a platform that other developers, both inside and outside Microsoft, can build on top of. In this case Microsoft Research had some eye-catching data visualisation technology but not something that made sense as a standalone tool (yes, I know it is actually available as a standalone tool but let’s face it, it wouldn’t make sense as a commercial product). Integrate this data visualisation technology with Power BI, though, and suddenly you’re part of a much richer product that is commercially viable. SandDance gets the chance to be used for real-world purposes by a lot more users; Power BI gets another great bit of functionality; it’s a win-win.

The ‘build a platform’ strategy is classic Microsoft. It’s the story of Windows. It has some disadvantages, in that the different pieces might not always fit together as smoothly as they should (compare and contrast with Tableau, which could be seen as the Apple of the data visualisation world), but it gets useful, specialised features out there very quickly. Microsoft has a wealth of great technology scattered across its different divisions that, historically, it has struggled to bring together coherently, but it looks like Power BI is managing to buck the trend. The integration with Azure Stream Analytics and Azure SQL DW that exists today, the integration with Power Apps that was demoed in the keynote, the use of machine learning in Quick Insights and the integration with Azure ML that I also saw mentioned on Twitter today, are other examples of this happening.

Microsoft also has a strong partner community in the BI space that did good business building solutions on the SQL Server BI stack; Power BI v1 did not offer the same opportunities and was neglected by partners as a result, but Power BI today offers partners a lot more opportunities and so they are back out there building and selling on Microsoft’s behalf again. It’s happening in the Custom Visuals Gallery, it’s happening with the Power BI API and apps like Power BI Tiles, and it’s going to happen in a lot of other ways in the future too.

Monitoring SSAS Multidimensional Non Empty Filtering Using Profiler, Part 1

Filtering out empty values is something that SSAS does a lot of during query execution, and it is also a common cause of performance problems. In this series of posts (similar to my series earlier this year on results serialisation) I’ll look at the different types of non empty filtering that can occur in an MDX query, how they can be monitored using Profiler and what you can do to improve their performance.

Some of this information has come from an old white paper, but I’ve found that some of what that paper says is now out of date and I’m extremely grateful to Akshai Mirchandani of Microsoft for answering my questions on this subject. Even with the long-suffering Akshai’s help a lot of the information here is based on my own research and therefore potentially incomplete/inaccurate, potentially different for different versions of SSAS (I’m using SSAS 2014 for this series) and could potentially change again in the future, so take due care!

The first question to ask is: what counts as non empty filtering? There are actually several different operations that the engine treats as a non empty filter, a few of which I was surprised by; here are the ones I know about.

1) The NON EMPTY statement

Most MDX queries generated by client tools include a NON EMPTY statement on the rows and columns axis. For example, take a look at the results returned by this MDX query on the Adventure Works cube:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
[Product].[Category].[Category].MEMBERS 
ON 1
FROM
[Adventure Works]

image

As you can see, the Category Components has no data and returns a null. One way to remove the row for Components would be to add a NON EMPTY statement to the rows axis:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
NON EMPTY
[Product].[Category].[Category].MEMBERS 
ON 1
FROM
[Adventure Works]

image

2) The NONEMPTY() function

Often confused with the NON EMPTY statement, but not the same thing: the NON EMPTY statement can only be used on an axis in a SELECT statement, whereas the NONEMPTY() function can be used in any MDX expression. Continuing our example, here’s how to use it to remove the Component category:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
NONEMPTY(
 [Product].[Category].[Category].MEMBERS,
{[Measures].[Internet Order Quantity]})
ON 1
FROM
[Adventure Works]

The NONEMPTY() function is much more flexible than the NON EMPTY statement but essentially does the same thing – it isn’t any faster in what it does, but it does allow you to make certain assumptions about your data that can improve query performance (more of that later). One thing to remember is to always set the second parameter, because if you don’t you may get unexpected results.
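
For example, here’s a sketch (just to illustrate the risk) of what happens if you leave out the second parameter: the filtering is done using whatever is in the current context – typically the cube’s default measure – and not necessarily Internet Order Quantity:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
//no second parameter: NONEMPTY() filters using the current
//context, which may not be Internet Order Quantity at all
NONEMPTY(
 [Product].[Category].[Category].MEMBERS)
ON 1
FROM
[Adventure Works]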

There is also a NONEMPTYCROSSJOIN() function but it is deprecated and you should not be using it – everything that it does can be done more reliably with other functions.
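
If you run into it in old code, the rewrite is usually mechanical. For example (a sketch using the sets from this post), a NONEMPTYCROSSJOIN() over Categories and Colors can be replaced by NONEMPTY() over a crossjoin, with the measure used for filtering made explicit:

//equivalent to NONEMPTYCROSSJOIN over the same two sets,
//but with the measure used for the filtering stated explicitly
NONEMPTY(
[Product].[Category].[Category].MEMBERS
*
[Product].[Color].[Color].MEMBERS,
{[Measures].[Internet Order Quantity]})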

3) Autoexists

Autoexists is not a feature of the MDX language but rather something that SSAS does automatically to remove tuples from a set that it knows must always be null. It’s described in great detail here, but it’s quite easy to illustrate. We already know from the queries above which categories have data; similarly the following query shows there is data for all colours except Grey and Silver/Black:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
[Product].[Color].[Color].MEMBERS
ON 1
FROM
[Adventure Works]

image

However if you crossjoin every category and every colour on the rows axis, you don’t see every combination of category and colour returned:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
[Product].[Category].[Category].MEMBERS
*
[Product].[Color].[Color].MEMBERS
ON 1
FROM
[Adventure Works]

image

There is no row for the category Bikes and the colour White, for example. This is because the SSAS engine knows from the data in the dimension that no product exists that is both the colour White and in the category Bikes, so it doesn’t return that combination from the crossjoin – that particular tuple could never contain any data so there’s no point returning it. Notice that there are combinations, such as Components/Black, that exist in the dimension and are present on the rows axis but still return null because there is no value for Internet Order Quantity.

It’s important to remember that autoexists only takes place when you are working with sets of members from different hierarchies on the same dimension, never with sets of members from different dimensions.
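
To see this for yourself, compare the previous query with a sketch like the following, which crossjoins hierarchies from two different dimensions (Product and Customer); no autoexists takes place, so every combination is returned, including the empty ones:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
//Category and Total Children come from different dimensions,
//so autoexists does not remove any combinations here
[Product].[Category].[Category].MEMBERS
*
[Customer].[Total Children].[Total Children].MEMBERS
ON 1
FROM
[Adventure Works]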

4) The EXISTS() function and the EXISTING keyword

The EXISTS() function and the EXISTING keyword allow you to take advantage of autoexists for filtering inside your own expressions without having to actually do a crossjoin (there’s another variant of EXISTS() with a third parameter that behaves more like NONEMPTY() but it’s very rarely used so I’m going to ignore it).
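
For reference, a sketch of that rarely-used variant: the third parameter is the name of a measure group, and the filtering then behaves more like NONEMPTY() against that measure group (the measure group name below is an assumption):

//returns Categories that have rows for White products
//in the named measure group
EXISTS(
[Product].[Category].[Category].MEMBERS
, {[Product].[Color].&[White]}
, "Internet Sales")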

For example, here’s how you can use the EXISTS() function to return all the categories that have a product that is White:

SELECT 
{[Measures].[Internet Order Quantity]} 
ON 0,
EXISTS(
[Product].[Category].[Category].MEMBERS
, {[Product].[Color].&[White]})
ON 1
FROM
[Adventure Works]

image

The EXISTING keyword is used within calculations to apply autoexists filtering to a set based on the other hierarchies from the same dimension. The following query contains a calculated measure that counts the number of members on the Color level of the Color hierarchy, and unsurprisingly returns the same value each time it’s called:

WITH
MEMBER MEASURES.COLOURCOUNT AS
COUNT([Product].[Color].[Color].MEMBERS)
SELECT 
{MEASURES.COLOURCOUNT} 
ON 0,
[Product].[Category].[Category].MEMBERS
ON 1
FROM
[Adventure Works]

image

However, if you add the EXISTING keyword just before the set in the calculated measure definition, like so:

WITH
MEMBER MEASURES.COLOURCOUNT AS
COUNT(EXISTING [Product].[Color].[Color].MEMBERS)
SELECT 
{MEASURES.COLOURCOUNT} 
ON 0,
[Product].[Category].[Category].MEMBERS
ON 1
FROM
[Adventure Works]

Then you’ll see that the calculation now returns the number of members on the Color level of the Color hierarchy after autoexists filtering has been applied; so for example the first line shows there are five distinct colours associated with the Category Bikes:

image

Summary

OK, after that somewhat lengthy introduction, in part 2 I’ll show you how to use Profiler to monitor what’s going on inside SSAS when you do all of these different types of non empty filtering.

Generating Fixed-Width Text Files In Excel With Power Query

While it’s fairly common to need to load fixed-width files using Power Query or Power BI (and there’s a nice walkthrough of how to do this here), occasionally you might want to use Power Query and Excel to create a fixed-width output for another system, or maybe to create some test data. You might not want to do it often, but I can imagine that when/if Power Query is integrated into SSIS this will be a slightly less obscure requirement; at the very least, this post should show you how to use a couple of M functions that are under-documented.

Before we begin, a quick tip: when working with any fixed-width file always check the Monospaced box on the View tab in the Query Editor window, to make it easier to judge the widths. You’ll notice I’ve done this in all the screenshots in this post.

image

For source data, let’s take the following expression which returns a table:

#table(
type table[Month=text,Product=text,Sales=text,Comments=text],
{
{"Jan","Apples","1","Not good"},
{"Feb","Pears","2","Better"},
{"March","Oranges","50000","Great!"}
})


image

Of course this could be any table of data that you choose; it’s just that using an expression like this makes the examples easier to follow. The only thing to notice here is that all the columns have a data type of text, even the Sales column: if your table doesn’t, I suggest you convert all the data types to text before you start – it will save you having to do it later.
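
If you do need to do that conversion, one quick way (a sketch, assuming your table is in a step called Source) is to use Table.TransformColumns() with a default transformation, which applies to every column not otherwise listed:

//convert every column in the table to text in a single step
Table.TransformColumns(Source, {}, Text.From)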

The next thing you’ll need to do is to create a custom column that returns a list containing all the values from each of the columns from the current row. The expression to use here is:

Record.FieldValues(_)

For more information about what the underscore means here, see my previous post; basically it refers to the values in the current row.
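
To make this concrete, here’s a sketch of what Record.FieldValues() returns for a record like the one representing the first row of our table:

//returns the list {"Jan", "Apples", "1", "Not good"}
Record.FieldValues(
 [Month="Jan", Product="Apples", Sales="1", Comments="Not good"])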

Finally, you need to concatenate all the values in this list (which should all be text already, of course) into a single piece of fixed-width text. To do this we’ll need to use a Combiner function – a function that returns a function that, in turn, combines multiple pieces of text together in a specific way. I’ve already blogged about the Combiner.CombineTextByDelimiter() function here, but in this case we’ll be using Combiner.CombineTextByRanges(), which is a little more complex. You can’t use Combiner.CombineTextByRanges() to combine text directly; instead you call it to get a function that combines text in the way you want, and then use that function on your data. To do this you need an expression like:

Combiner.CombineTextByRanges({{0,3},{9,10},{19,10},{29,255}})

How does this work? For each line, remember, we now have a value of type list that contains all of the values from the four columns in our table. The expression above returns a function that takes a list and constructs a single line of text from it. The list passed to the first parameter consists of a series of lists of two integers, and each value pair gives the position to place each value on the line and the number of characters to allow. So, in this example, the first value in the input list is put at position 0 on the line and is given a space of 3 characters; the second value in the input list is put at position 9 and given a space of 10 characters; the third value in the input list is put at position 19 and given a space of 10 characters; and the fourth value in the input list is put at position 29 and given a space of 255 characters.
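
You can see this in isolation with a sketch like the following, which builds a combiner function for just the first two ranges and calls it immediately on a two-item list:

//places "Jan" at position 0 (3 characters) and "Apples" at
//position 9 (10 characters) on a single line of text
Combiner.CombineTextByRanges({{0,3},{9,10}})({"Jan","Apples"})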

This function can now be called in a second custom column to get the combined text for each line. Here’s the full M code for the query:

let
    Source = #table(
              type table
              [Month=text,Product=text,Sales=text,Comments=text],
              {
              {"Jan","Apples","1","Not good"},
              {"Feb","Pears","2","Better"},
              {"March","Oranges","50000","Great!"}
              }),
    ListOfValues = Table.AddColumn(
                    Source, 
                    "List Of Values", 
                    each Record.FieldValues(_)),
    MyCombinerFunction = Combiner.CombineTextByRanges({
                          {0,3},
                          {9,10},
                          {19,10},
                          {29,255}
                          }),
    OutputColumn = Table.AddColumn(
                    ListOfValues, 
                    "Output Column", 
                    each MyCombinerFunction([List Of Values]))
in
    OutputColumn

And here’s the output:

image

One thing to notice: on the third line, the month name March has been truncated to Mar because we only specified a space of three characters for the month name.

image

There’s an optional second parameter to Combiner.CombineTextByRanges() that also needs some explanation. By default, the function that Combiner.CombineTextByRanges() returns will place your text on an otherwise blank line. However you can get it to place your text on a line that contains other characters apart from blank space. For example, if you alter the MyCombinerFunction step in the query shown above to be:

    MyCombinerFunction = Combiner.CombineTextByRanges({
                          {0,3},
                          {9,10},
                          {19,10},
                          {29,255}
                          }
                          ,Text.Repeat("#",285)
                         ),

You get:

image

The Text.Repeat() function, as used in the expression above, returns a piece of text with the # character repeated 285 times, and that piece of text is used as the ‘background’ on which the values are superimposed.

Of course, now you’ve got your desired output, all you need to do to create a fixed-width file is to remove all other columns apart from Output Column in your query, load your query to a table in the Excel worksheet, turn off the header row for the table by unchecking the Header Row option on the ribbon:

image

…and then save the workbook in a .txt file format.

You can download the sample workbook for this query here.

Dynamic Column Selection In Custom Columns In Power BI/Power Query/M

When you’re writing an M expression for a custom column in Power BI or Power Query it’s easy to reference the values in other columns. A slightly more difficult question, however, is how can you reference column values dynamically? For example, given the following input table:

image

How can you use the values in the “Column To Select” column to dynamically select values from either Column 1, Column 2 or Column 3? For example, on the first line of the table the “Column To Select” column contains the value 2, so the calculated column should contain the value from “Column 2”; on the second line of the table the “Column To Select” column contains the value 1, so the calculated column should contain the value from “Column 1” and so on:

image

There are a number of different ways to achieve this. You could, for instance, write a nested if or do some other kind of fancy conditional logic in M, but this could result in a lot of code. You could also unpivot all the columns except “Column To Select”, do some filtering, then pivot the data again but that would also be quite complicated. Probably the easiest way of doing this is with the Record.Field() function, which allows you to get the value of any given field in a record by passing the name of that field as a text value.
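
As a trivial sketch of what Record.Field() does on its own:

//returns 2, the value of the field named B
Record.Field([A=1, B=2], "B")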

Here’s an example query that generates the table shown in the first screenshot above in its first step, sets some data types on the columns in the second step, then creates the custom column shown in the second screenshot in the final step:

let
    Source = #table(
                     {"Column 1", "Column 2", 
                      "Column 3", "Column To Select"}, 
                     {{10,20,30,2},
                      {30,40,50,1},
                      {50,60,70,3}}),
    #"Changed Type" = Table.TransformColumnTypes(
                     Source,
                     {{"Column 1", Int64.Type}, 
                      {"Column 2", Int64.Type}, 
                      {"Column 3", Int64.Type}, 
                      {"Column To Select", Int64.Type}}),
    #"Added Custom" = Table.AddColumn(
                     #"Changed Type", 
                     "Selected Column Value", 
                     each 
                     Record.Field(
                       _, 
                       "Column " & Text.From([Column To Select])))
in
    #"Added Custom"

Here’s the expression for the custom column isolated, in the form that you would use in the Add Custom Column dialog:

Record.Field(_, "Column " & Text.From([Column To Select]))

image

Understanding how to use the Record.Field() function here leads us to an interesting side discussion about custom columns, functions, records and each expressions. The full M expression for the third step in the query is:

Table.AddColumn(
//the table returned by the previous step
#"Changed Type", 
//the name of the new custom column
"Selected Column Value", 
//a function to call for each row of the table
//returning the value to display in the custom column
each Record.Field(_, "Column " & Text.From([Column To Select])))

The first two parameters of the Table.AddColumn() function are straightforward but if you read the documentation you’ll see that the third parameter requires a value of type function. This function is called for every row in the table and returns the values in the custom column. It doesn’t look like it but there is a new function being defined here. In M an each expression can be used to define a function that has no name and that takes one parameter whose name is _ (ie the underscore character). The Table.AddColumn() function passes this new function a value of type record representing all of the values in the current row of the table, and that means Record.Field() can access this record by using _. What’s more, when referring to fields in this record you don’t even have to say _[Column To Select], you can drop the underscore and just say [Column To Select], as in the code example above. All this means that the expression

each Record.Field(_, "Column " & Text.From([Column To Select]))

is basically the same as

(_)=> Record.Field(_, "Column " & Text.From(_[Column To Select]))

…which, if you know a bit of M, makes a lot more sense but for the average user is probably more intimidating.
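
If you prefer, you can go one step further and give the parameter a descriptive name instead of the underscore – just a sketch, and note that without the underscore you have to qualify the column reference explicitly:

//the same function again, with a named parameter
(currentRow) => 
 Record.Field(
  currentRow, 
  "Column " & Text.From(currentRow[Column To Select]))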

A Quick Look At Azure Data Catalog

I’ve been meaning to look at Azure Data Catalog for a long time but somehow never got round to it until now. I’m not sure why – perhaps it’s because there’s so much other cool stuff happening in the world of Power BI and Azure, and Azure Data Catalog seems a bit, um, boring in comparison. Maybe it’s a bit too much like documentation and all that other stuff we instinctively try to avoid. Whatever the reason, now I have looked at it I’m impressed and I can see its uses, not just for Power BI but also for traditional corporate BI scenarios. It certainly deserves a lot more attention from the Microsoft BI community than it seems to be getting at the moment.

First of all, what is Azure Data Catalog anyway? There’s lots of good documentation, videos and a tutorial here:
https://azure.microsoft.com/en-us/documentation/services/data-catalog/

…but the short answer is that it’s a cloud based service for cataloguing all the data sources you want to use for BI. It doesn’t store any of the data itself (except for a small amount for preview purposes, and you can turn that off), just the connection details; it also stores metadata (column names, data types etc) as well as descriptions, documentation and contact details for people who own or know about the data. End users can search the catalogue for data and even, for some data source types like SSAS, click a button to open that data source directly in Excel or Power BI Desktop.

image

Is it any good? Well, I managed to get going with it very easily. As well as an Azure subscription it does require you to have Azure Active Directory set up though, which is a pain – some of my customers (mostly the ones that have bought into Office 365 and Power BI) have it already, but I know for others it would be a showstopper. After setting everything up I was able to register SQL Server, SSAS and other types of data source very quickly using both the desktop application and the browser interface. It’s certainly simple enough for a reasonably technical end user to use and the implementation is very solid. I do have a few minor quibbles (I found the search results interface slightly confusing at first but soon came to grips with it) and it’s clear that it’s still very much a work in progress (right now the browser interface supports registering more data types than the desktop app, for example) but nothing very significant.

There is an obvious need for Azure Data Catalog if you are going to use Power BI and have embraced the whole concept of self-service BI: it makes it much easier for your users to find and consume the various different data sources that you have available. However, as I said earlier, I think on-premises users of SSAS and SSRS could also benefit from it too. It’s becoming increasingly common for me to see organisations with several SSAS cubes in production across various servers, each of them with a confusing list of measures and dimensions. Equally I’ve seen plenty of SSRS implementations with hundreds of reports, with nobody knowing what many of these reports do or even if they are used at all. Azure Data Catalog could provide a convenient central place to document and discover these cubes and reports. I guess it all depends on whether you can be bothered to do this documentation and annotation though, rather than whether you have a suitable tool for this purpose; I don’t think many organisations can be bothered, unfortunately.

One last thing: Azure Data Catalog is meant to include replacement functionality for the old Power BI for Office 365 Data Catalog but as yet it doesn’t allow you to publish or share Power Query queries. This is something that I and other Power Query/Power BI fans would love to see, especially if (unlike the old Data Catalog) it let you publish updates to your code without needing to manually update each Excel workbook/Power BI report that used it afterwards.