Excel 2016 BI Branding Changes

Office 2016 is on the verge of being released and, although Power BI is the cool new thing, Excel 2016 has added several new BI-related features too. What is also interesting – and less well publicised – is that several of the BI features in Excel 2016 have been rebranded. Specifically:

  • Power Query (which is no longer an add-in, but native Excel functionality) is not called Power Query any more, but “Get & Transform”. It has also been squeezed onto the Data tab, next to the older data import functionality:
    [images: the new Get & Transform group on the Data tab]

  • Power Map is not called Power Map any more, but “3D Maps”
    [image: the 3D Maps button]
  • Power View is still Power View, but as John White points out here, it is no longer visible on the ribbon by default – effectively hiding it from new users – although it’s easy to add the Power View button back onto the ribbon. Power View in Excel 2016 is unchanged from Power View in Excel 2013. Read into this what you will.
  • Although Power Pivot still has its own tab on the ribbon (and has finally got a space in the middle of its name), there’s also a “Manage Data Model” button on the Data tab in the ribbon that is visible even when the Power Pivot add-in has not been enabled:
    [image: the Manage Data Model button on the Data tab]
    Clicking this button opens the Power Pivot window. There’s a subtle distinction between Power Pivot the add-in and the Excel Data Model: the Data Model is the database engine behind Power Pivot, and it is present in all Windows desktop editions of Excel whether or not the add-in is enabled. This distinction has existed since Excel 2013 but is generally unknown or misunderstood. The fact this button is titled “Manage Data Model” rather than “Power Pivot” is telling.
  • All the add-ins now have the collective name “Data Analysis add-ins” and can be enabled with a single click:
    [image: enabling the Data Analysis add-ins with a single click]

So, clearly, Excel has moved away from branding all its BI functionality as Power-something. My guess, informed by conversations with various people in the know, is that this has happened for a few reasons:

  • The ‘Power’ prefix was intimidating for regular Excel users, who thought it represented something difficult and therefore not for them; it also made it look like this was functionality alien to Excel rather than a natural extension of Excel.
  • Having separate Power add-ins led to a disjointed experience, rather than giving the impression that all of these tools could and should be used together. It also made comparisons with competing tools – by analysts and in corporate bake-offs – difficult: were the Power add-ins separate tools, or should they be considered a single tool along with Excel?
  • Previously there was a lot of confusion about whether these add-ins were part of ‘Power BI’ or not. Up to now, depending on who you talked to, they either were or weren’t officially part of Power BI. Now there is a clear distinction between Excel and Power BI, despite the close technical relationships that remain.

The new names certainly address these problems and on balance I think this change was the right thing to do, even if I was quite annoyed when I first found out about them. There are significant downsides too: for example, changing the names means that several years of books, blog posts, articles and conference presentations about Power Query and Power Map now won’t be found by new users when they search the internet for help. Similarly, it won’t be obvious to new users that a lot of content is relevant for both Power BI Desktop and Excel. Now that the Power Query name has been de-emphasised, why should anyone looking at my old blog posts on that subject know that what I’ve written is still relevant for Excel 2016’s “Get & Transform” and Power BI Desktop? What would I call a second edition of my Power Query book, if I wrote one, given that Power Query exists only as the relatively nondescript “Get & Transform” in Excel 2016 and “Get Data” in Power BI Desktop?

How To Optimise The Performance Of MDX Queries That Return Thousands Of Rows

One problem I encounter on a regular basis is how to optimise the performance of MDX queries that return thousands, hundreds of thousands, or even millions of rows. The advice I give is always the same:

Reduce the number of rows that your query returns!

Yes, there are some things you can change in your queries and cube design to improve performance, but these are the same things I’d suggest for any query (build aggregations, rewrite MDX, partition etc etc). In my opinion, if you have a query that returns a ridiculously large number of rows you are doing something fundamentally wrong.

There are three reasons why SSAS developers write this kind of query:

  1. They are doing a data-dump from SSAS to another system. Mostly the developer doesn’t realise this, though, since the other system is Excel-based and the end user has disguised their requirement as a report. In most cases, educating users about how to use Excel with SSAS results in an approach that doesn’t require dumping thousands of rows of data to an Excel worksheet. I will admit that I have seen a few cases where developers need to dump data out of SSAS for other purposes, and have no option but to use SSAS because they have to add complex calculations that can only feasibly be implemented in MDX. These cases are very rare though, and most of the time SQL queries against the underlying relational database work a lot better.
  2. The end users have specified a report that returns lots of data, because that’s just what they want, dammit! Often this is to recreate a report built in a previous system that, at some point in the 1970s, was printed out into a gigantic book every month. My argument here is that a report should return no more data than can be seen on a screen without scrolling. If you need to scroll in a report, you should probably be giving the end user more parameters to filter it so they can find the data they want to see more easily. Of course it’s one thing to know what you should be doing; it’s another thing entirely to tell the CFO that their requirements are stupid. If you can’t convince your end users that you know better than them, you have my sympathy. Usually I find that having to choose between the poor performance of what they want and the better performance of a different approach helps them come to their senses.
  3. Finally, the way that SSRS handles drilling down in reports often leads report developers to bring back vast amounts of data. The advice to increase the number of parameters for filtering is equally relevant here, but you can also use MDX techniques like this one to implement drill down in a much more efficient way. There’s a sketch of a filtered query after this list.
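
To make the filtering approach concrete, here’s a minimal sketch of an MDX query that returns a small, targeted resultset instead of a full data dump. It assumes the Adventure Works sample cube, and the month key used in the WHERE clause is my assumption:

//Return only the top 10 products for one month, instead of
//every product for every date
SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS,
TOPCOUNT(
    [Product].[Product].[Product].MEMBERS,
    10,
    [Measures].[Internet Sales Amount]
) ON ROWS
FROM [Adventure Works]
WHERE([Date].[Calendar].[Month].&[2008]&[6])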

At the end of the day, SSAS just isn’t optimised for returning large resultsets – it was designed for PivotTable-style queries, which return relatively small resultsets. You can get good performance for large resultsets if you know what you’re doing, you have the time, and you’re lucky, but you’ll usually be better off rethinking your requirements or choosing a different tool.

MDX Scoped Assignments Outside The Granularity Of A Measure Group

If you’re an SSAS Multidimensional developer, you’ll know that not every dimension has to have a relationship with every measure group in your cube. You may also know that by setting the Granularity Attribute property of a regular relationship, you can join a dimension to a measure group using an attribute that isn’t the dimension’s key attribute. What happens when you make a scoped assignment to a non-calculated measure outside the granularity of a measure group?

The simple answer is that, unlike what happens when you assign to a non-calculated measure inside the granularity of a measure group, your assigned value does not aggregate up. For example, consider a dimension called Demo Dim with one user hierarchy, where there is just one member on each level:

[image: the Demo Dim user hierarchy, with one member on each level]

If you add this dimension to a cube but don’t define any relationships to any measure groups (and don’t change the IgnoreUnrelatedDimensions property of the measure group) you’ll see the value of the All Member on the hierarchy repeated across all of the other members of the hierarchy:

[image: the value of the All Member repeated across all the other members of the hierarchy]

If you use a scoped assignment to change the value of the member D for a regular, non-calculated measure M1, like so:

SCOPE([Measures].[M1], [Demo Dim].[L4].&[D]);
    THIS = 999;
END SCOPE;

You’ll see that D changes value, but the value isn’t aggregated up:

[image: member D showing the assigned value of 999, with the members above it unchanged]

The same thing happens if you make an assignment below the granularity attribute of a dimension. In my opinion this all makes sense when you think about it, and it means that, in this scenario at least, non-calculated measures and calculated measures behave in the same way.
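
If you want to check the behaviour yourself, a query like the following returns every member on the hierarchy, so you can see the assigned value on D and the unchanged values above it. It’s only a sketch: the hierarchy name and cube name are my assumptions, since only the dimension, level and measure names appear above:

SELECT
{[Measures].[M1]} ON COLUMNS,
//All the members on every level of the user hierarchy, including the All Member
[Demo Dim].[Hierarchy].MEMBERS ON ROWS
FROM [Demo Cube]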

One last word of warning: whenever I’ve done this, I’ve found that query performance hasn’t always been as good as I would have liked.

Loading Twitter Archive Data In Power Query

If you’re a Twitter user (I’m @Technitrain if you aren’t following me already) you may be aware that you can download your entire Twitter history – all of the data from all of your tweets – as a series of .js files. All the details on how to do this are here:

https://support.twitter.com/articles/20170160-downloading-your-twitter-archive

Since Power Query can work with JSON data I thought it would be easy to use it to analyse my Twitter archive… but while it’s possible, it’s not entirely straightforward. The problem seems to be that Power Query doesn’t like the first line in each .js file that Twitter gives you. Removing that first line isn’t too difficult but it requires some M code, so here’s a function (I’ve called it GetTwitterArchiveFile) that handles that problem. You give it the binary data from the file and it returns a table containing all the data from that file:

(TwitterFile as binary)=>
let
    //Read data from file and interpret as Lines
    Source = Lines.FromBinary(TwitterFile),
    //Remove the first line
    RemoveFirstRow = List.Skip(Source,1),
    //Convert back to Binary
    ConvertBackToBinary = Lines.ToBinary(RemoveFirstRow),
    //Now convert to JSON
    ConvertToJSON = Json.Document(ConvertBackToBinary),
    //Flatten to a table
    ConvertToTable = Table.FromList(ConvertToJSON, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    //Expand the first set of columns
    ExpandColumns = Table.ExpandRecordColumn(ConvertToTable, "Column1", {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"}, {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"})
in
    ExpandColumns
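
If you want to test the function on a single file before combining everything, you can pass it the output of File.Contents. Here’s a quick sketch – the file name is invented for illustration:

let
    //Read the raw binary of one .js file from the unzipped archive
    Source = GetTwitterArchiveFile(
              File.Contents(
               "C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets\2015_07.js"))
in
    Source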

Here’s an example of how to use the above function: it’s another function (called GetFullTwitterArchive) which, when you pass it the path of your tweets folder (this will be wherever you unzipped the download that you get from Twitter), returns the combined contents of all of the .js files in that folder by calling GetTwitterArchiveFile() for each one:

(TweetsFolderPath as text) =>
let
    //Connect to Tweets folder
    Source = Folder.Files(TweetsFolderPath),
    //Remove everything but Content column
    #"Removed Other Columns" = Table.SelectColumns(Source,{"Content"}),
    //Use Custom Column to call GetTwitterArchiveFile for each .js file in the folder
    #"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each GetTwitterArchiveFile([Content])),
    //Remove the Content columns
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Content"}),
    //Expand all columns in the Custom column
    #"Expanded Custom" = Table.ExpandTableColumn(#"Removed Columns", "Custom", {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"}, {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"})
in
    #"Expanded Custom"

Invoking this function in a query, for example like this:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets")
in
    Source

Gives you the following output:

[image: the output of GetFullTwitterArchive]

As you can see, there are plenty of other columns that can themselves be expanded, but this is a good starting point for any analysis.
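
For example, the entities column contains records describing things like hashtags and user mentions. Here’s a sketch of how you might expand it – the field names are my assumption, based on the JSON that Twitter returns:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    //Expand the entities record column to get at hashtags and user mentions
    //(field names assumed from Twitter's JSON format)
    ExpandEntities = Table.ExpandRecordColumn(Source, "entities",
                      {"hashtags", "user_mentions"}, {"hashtags", "user_mentions"})
in
    ExpandEntities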

There’s nothing really ground-breaking in what I’ve done here – it’s a fairly standard example of how you can use Power Query functions to combine data from multiple files, very similar to this example of combining data from multiple Excel files.

There’s absolutely loads of interesting data that you can play with here, but to start with here’s a query that finds the top 10 people I have replied to on Twitter:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"in_reply_to_screen_name"}),
    #"Grouped Rows" = Table.Group(#"Removed Other Columns", {"in_reply_to_screen_name"}, {{"Count", each Table.RowCount(_), type number}}),
    #"Filtered Rows" = Table.SelectRows(#"Grouped Rows", each ([in_reply_to_screen_name] <> null)),
    #"Sorted Rows" = Table.Sort(#"Filtered Rows",{{"Count", Order.Descending}}),
    #"Kept First Rows" = Table.FirstN(#"Sorted Rows",10)
in
    #"Kept First Rows"

Here’s the output as a table and chart:

[images: the top 10 as a table and as a chart]

Looks like Jamie Thomson is the winner by a wide margin!

Here’s an example of a NodeXL graph I built from this data, showing the relationships between users who I have mentioned together in a single tweet:

[image: NodeXL graph of users mentioned together in a single tweet]

You can download the sample workbook for this post, containing all the functions (but not my data) here.

Exporting All M Code From Power Query In Excel 2013

Here’s a tip I found on the Power Query TechNet forum that I thought was worth repeating. If you ever need a quick way of exporting all the M code for all of the queries in an Excel 2013 workbook, just open the Power Query window and click the Send a Frown button shown here:

[image: the Send a Frown button in the Power Query window]

Then, when the Send Feedback dialog appears, make sure Include Formulas is ticked then click OK:

[image: the Send Feedback dialog with Include Formulas ticked]

When you do that, you’ll get an email created for you that contains a whole lot of debugging information, plus all of the M code for your queries:

[image: the generated email, including the M code for all the queries]

Obviously, don’t send this email to Microsoft!

It’s quite easy to see the individual queries in this email. You then need to go to your new workbook, create a new query by selecting the Blank Query option under Other Sources, and then open the Advanced Editor window and paste the code for each query in. However, when you do that you will need to modify the code a bit. There are three pieces of code you will need to delete (see the sketch after this list):

  • At the beginning of the M code, where it says
    section Section1;
  • At the beginning of each query, a piece of code that contains the original name of the query:
    shared MyOriginalQueryName =
  • At the very end, a semi-colon
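
Here’s a sketch of what the exported code looks like for a single query (the query body itself is invented for illustration), followed by what you actually paste into the Advanced Editor:

//As exported – delete the section declaration, the shared declaration
//and the trailing semi-colon:
section Section1;

shared MyOriginalQueryName =
let
    Source = #table({"A"}, {{1}})
in
    Source;

//What you paste into the Advanced Editor:
let
    Source = #table({"A"}, {{1}})
in
    Source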

Of course in Excel 2016 we’ll have much better options for copying, pasting and moving queries in VBA, but until then we’ll have to live with hacks like this.

Running Your Own MDX And DAX Queries In Power BI Desktop

Every time there’s a new release of Power Query or Power BI Desktop, I always check to see if there are any interesting new M functions that have been added (I use #shared to do this, as detailed here). For the RTM version of Power BI Desktop I spotted two new functions:

[image: the two new M functions listed by #shared]
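
If you want to hunt for new functions yourself, this is the standard #shared trick – a minimal sketch:

let
    //#shared returns a record containing everything in scope,
    //including all the built-in library functions
    Source = #shared,
    //Convert the record to a table so it can be sorted and filtered
    ToTable = Record.ToTable(Source)
in
    ToTable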

As well as ODBC connections, we can now use OLEDB and ADO.NET data sources – although they aren’t shown in the UI yet. And you know what this means… with an OLEDB connection we can now run our own MDX and DAX queries against SSAS data sources! I assume this will be coming to Power Query in Excel soon too.

Here’s an example query showing how to use OleDb.Query() to run an MDX query against the Adventure Works DW cube in SSAS Multidimensional:

let
    Source = OleDb.Query(
              "Provider=MSOLAP.5;Data Source=localhost;
               Initial Catalog=Adventure Works DW 2008", 
              "select {measures.[internet sales amount]} on 0, 
               [date].[calendar].[calendar year].members on 1 
               from [adventure works]"
             )
in
    Source

As you can see, it’s pretty straightforward: you just need to supply a connection string and a query. You will need to tell Power BI Desktop which credentials to use when running the query the first time you connect to SSAS, and that’s probably going to be Windows:

[image: the credentials prompt]

You will also see a prompt the first time you run the query, asking for permission to run a Native Database Query:

[image: the Native Database Query approval prompt]

This prompt will appear each time a different MDX query is run; you can turn off this prompt in the Options dialog on the Security tab by unchecking the Require user approval for new native database queries box:

[image: the Options dialog, Security tab]

Here’s the output of the MDX query from the example code:

[image: the results of the MDX query]
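
Running your own DAX query works in exactly the same way. Here’s a sketch, assuming a local Tabular instance with a database called Adventure Works Tabular – the instance name, database name and table and column names are all my assumptions:

let
    Source = OleDb.Query(
              "Provider=MSOLAP.5;Data Source=localhost\tabular;
               Initial Catalog=Adventure Works Tabular",
              "evaluate values('Date'[Calendar Year])"
             )
in
    Source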

Thoughts On the Microsoft/Pyramid Power BI Deal

Ever since Power BI first appeared, the number one request from customers has been the ability to publish reports and dashboards to an on-premises server rather than the cloud. There were two standard responses from Microsoft to this request:

  • la-la-la-I’m-not-listening-la-la-la-cloud-la-la-la-future-la-la-la
  • Maybe we’ll have something in SharePoint v.next

…neither of which were particularly satisfying. A lot of businesses just don’t feel comfortable with the cloud yet, and more importantly a lot of businesses can’t put their most valuable data in the cloud for a variety of legal and regulatory reasons.

Today’s announcement of a deal between Microsoft and Pyramid Analytics is big news because it means Microsoft have got a credible answer to the Power-BI-on-premises question at last. For details, read the blog posts here and here, if you haven’t already, plus the Pyramid press release and this page on the Pyramid site. It’s not a perfect solution – I had been hoping that Microsoft would unveil an on-prem Power BI server that they had been working on in secret – but it’s a lot better than what we had before. It also ties up nicely with existing on-premises SQL BI investments that customers may have, and does so without a cumbersome SharePoint dependency.

What has been announced, exactly? From the Pyramid press release:

Pyramid Analytics … today announced a strategic collaboration with Microsoft consisting of development collaboration and technology integration.

The output of this new collaboration is a range of new features in the Power BI Desktop at expected General Availability on July 24. Among those features will be an option to publish a Power BI Desktop file to Pyramid Analytics Server. This feature will enable an ease of integration between the Power BI Desktop and the Pyramid Analytics Server.

From the Pyramid blog post:

This was the result of a strategic collaboration agreement that included:

  • Pyramid Analytics help in the Power BI Desktop development
  • Technology integration to publish Power BI content to Pyramid Analytics Server
  • Go-to-market coordination so mutual customers and partners get the best of our technologies

Here are some screenshots that Pyramid gave me showing what the ‘Publish to Pyramid’ feature will look like in Power BI Desktop:

[screenshots: the Publish to Pyramid feature in Power BI Desktop]

Obviously this is great news for Pyramid and I’m sure it will be a big boost for their business. Theirs is certainly one of the leading third-party client tools for SSAS and, I guess, the one with the biggest presence worldwide. Many of my customers are also customers of theirs, and I’ve always been impressed with their products. It’s interesting that this is a partnership rather than an acquisition… maybe, given the large number of ex-ProClarity guys at Pyramid, they had no desire to see history repeat itself?

For Microsoft, I think it’s the best possible solution in the circumstances. Ever since the Power BI reboot last year, Microsoft’s BI strategy has become a lot more pragmatic – something that I, and pretty much the whole of the Microsoft BI community, have welcomed. Rather than pursue a grandiose strategy that fails on the details, the new focus is now on [shock!] building a product that people not only want to buy, but can buy easily. Some compromises have had to be made based on the position that Microsoft has found itself in, though. I guess with all the resources that are being thrown at Power BI V2, plus the development effort that is going into on-premises BI features in SQL Server 2016, it proved easier to partner with a third party vendor that already has a mature product and spare development resources, than build something from scratch.

There are some downsides to all this, though. First of all, I feel sorry for the third-party client tool vendors that aren’t Pyramid, who must be feeling hard done by right now. That’s business, I suppose. Second, Pyramid’s on-premises solution is yet another dashboarding/reporting component that must be understood and fitted in to the Microsoft BI story along with Power BI Desktop, Excel, Reporting Services, Datazen, PerformancePoint (RIP), Excel Services and so on, making the Microsoft BI pro’s life even harder still. Similarly, the Pyramid brand is likely to confuse customers who really just want to buy a single, Microsoft-branded solution (and don’t get me started on the whole Power BI/Cortana Analytics Suite branding overlap thing either).

Overall, I’m very happy as a result of this news. What with this, and the RTM of Power BI v2 tomorrow, Microsoft is back as a serious contender in BI both on-premises and in the cloud, and is catching up with the competition incredibly quickly.