Sharing Data Between Organisations With Azure Data Share

If you’ve ever built a BI solution it’s likely you will have had to integrate third-party data, and if that’s the case you will know how painful it often is to get your hands on that data: badly designed portals you have to log into every week to download files, CSV files emailed to you, APIs with complex authentication – it’s usually an unreliable, slow and manual process. This is why I was interested to read about a new Azure service that previewed this week called Azure Data Share, which aims to provide a simple and secure way to share data between organisations.

You can read the announcement blog post here:

https://azure.microsoft.com/en-us/blog/announcing-preview-of-azure-data-share/

…read the documentation here:

https://docs.microsoft.com/en-us/azure/data-share/

…and watch an introductory video here:

https://channel9.msdn.com/Shows/Azure-Friday/Share-data-simply-and-securely-using-Azure-Data-Share

What could this be used for in Power BI? Well, just for fun I uploaded the contents of a Common Data Model folder to a storage account in one Azure account and used Azure Data Share to copy that folder into a storage account in another Azure account. In the destination the data was stored in Azure Data Lake Storage Gen2, so I was able to attach the CDM folder as a dataflow in Power BI. It took a lot of trial and error on my part to get the permissions on the various storage accounts working properly (I’m not an expert on this…) but it worked. This proves that third-party data can be exposed directly inside Power BI as a dataflow using Azure Data Share, which I think is pretty darned cool. If you’re a company that sells data and you want your customers to be able to consume that data easily in Power BI, I think this might be a good way to do it.

Why Power BI Slicers Sometimes Show Values That Do Not Exist

The problem of blank values being shown in a Power BI slicer is fairly well known, but did you know that in some circumstances slicers can show other values that you would not expect to see? While there’s a good reason for this, it can nevertheless be mildly confusing for developers and end users alike when it happens.

Let’s take a look at one way this can happen. Here’s an M query that returns a table containing up to five random numbers:

let
    // The numbers 1 to 5
    Source = 
        {1..5},
    // For each number, generate a random number between it and it+100,
    // rounded to zero decimal places
    RandomNumbers = 
        List.Transform(
            Source, 
            each 
                Number.Round(
                    Number.RandomBetween(
                        _, 
                        _+100
                        )
                , 0)
                ),
    // Remove duplicates, so the list may end up with fewer than five items
    RemoveDuplicates = 
        List.Distinct(RandomNumbers),
    // Convert the list to a single-column table
    ToTable = 
        Table.FromList(
            RemoveDuplicates, 
            Splitter.SplitByNothing(), 
            null, 
            null, 
            ExtraValues.Error
            ),
    // Set the data type of Column1 to whole number
    ChangedType = 
        Table.TransformColumnTypes(
            ToTable,
            {{"Column1", Int64.Type}}
            )
in
    ChangedType

Here’s an example of the output:

[Image: query output showing five random numbers in Column1]

The key point to notice is that every time this query refreshes, a potentially different set of numbers will be returned.

Now consider a report page built from this table with three visuals on it:

[Image: report page with a slicer, a table and a card]

There is a slicer on the left with five items in it, a table with five rows showing the actual contents of the underlying table (I’ve disabled visual interactions so the slicer doesn’t filter it), and a card showing the output of the following measure:

Selected Number = 
SELECTEDVALUE(MyNumbers[Column1], "Nothing Selected")

In the screenshot above you can see I have selected the value 78 in the slicer and the measure – as you would expect – displays that value.

Now what happens when you refresh the dataset and the table contains a different set of numbers? Here’s what:

[Animated screenshot: after the dataset refreshes, 78 remains visible and selected in the slicer even though it is no longer in the table]

Summarising what is shown here:

  • After the refresh has taken place the number 78 no longer appears in the underlying table, which still contains five rows
  • However, because 78 was selected before the refresh took place, it still appears in – and is still selected in – the slicer as an extra value, meaning there are now six items visible in the slicer
  • Even though 78 appears and is selected, the measure returns “Nothing Selected” because the DAX SELECTEDVALUE() function has detected that no single value is selected (see the sketch after this list)
  • As soon as another value in the slicer is selected (in this case 83), the value 78 disappears from the slicer
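
To see why the measure behaves this way, it helps to know that SELECTEDVALUE(column, alternate) is documented as being equivalent to an IF/HASONEVALUE pattern. Here’s a minimal sketch of the Selected Number measure rewritten in that longhand form (the measure name is mine, not from the original report):

Selected Number Expanded = 
IF(
    HASONEVALUE(MyNumbers[Column1]),
    VALUES(MyNumbers[Column1]),
    "Nothing Selected"
)

After the refresh the slicer still filters on Column1 = 78, but no rows match that filter, so HASONEVALUE() returns FALSE and the alternate result is displayed.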

Something similar may occur when using the Sync Slicers functionality to synchronise two slicers with different source columns that contain different values (as described here). For example, say you have a second table created as a DAX calculated table with the following expression:

MySecondTable = ROW("Column2",-1)

[Image: MySecondTable, containing the single value -1]

If you create a slicer from this table, use Sync Slicers to synchronise it with the existing slicer on the report shown above:

[Image: the Sync Slicers pane]

…and then select the value -1 in the new slicer, you’ll see it results in the value -1 appearing as selected in the original slicer even though that value does not appear in the underlying column:

[Image: -1 shown as selected in the original slicer]

Once again, if you select something else in the slicer, the -1 value will disappear.

In my opinion this behaviour makes sense and shouldn’t be changed. In any case, in the real world I always try to avoid situations where the values in a slicer might disappear: if you have a dimensional model and are building slicers from dimension tables rather than fact tables, you should find that values are rarely, if ever, removed from dimension tables. That said, I can’t help wondering if there are situations where this behaviour could be useful, for example in scenarios where you want to show an initial selection that displays no data but that can’t subsequently be reselected by an end user. As always, I’m interested in your thoughts and comments.

Creating Basic Greater Than Or Less Than Rules With Power BI Conditional Formatting

Although the conditional formatting by rules feature in Power BI was released a long time ago, one very common cause of confusion is how to implement basic “greater than” or “less than” rules. For example, say you have a table with the following data in it:

[Image: sample table of Sales values]

…and you want to highlight the rows where Sales are greater than or equal to 150. So you click on the table, go to the Conditional Formatting options for the Sales field, turn on Background Color formatting and click Advanced Controls:

[Image: conditional formatting options for the Sales field]

…then choose to format by rules. You see this dialog:

[Image: the conditional formatting dialog]

In particular, the part of this dialog where you set up the rule:

[Image: the rule setup controls]

…seems to suggest that you need to enter both an “is greater than or equal to” condition and an “is less than” condition for the rule to be valid. This is not true, and you don’t need to enter some arbitrarily large number for the “is less than” condition to make it work. The following screenshot shows how you can set up a rule to highlight all rows where Sales are greater than or equal to 150:

[Image: a rule with 150 entered as the first condition and the second condition left empty]

The two things to notice are:

  • I’ve entered 150 in the first condition, as you would expect
  • I have deleted the 0 from the second “is less than” condition, leaving the textbox empty (meaning that the text “Maximum” is visible but greyed out)

Here’s the result:

[Image: the table with rows where Sales are 150 or more highlighted]

Job done. What about a slightly more complex but nonetheless common scenario where values greater than 0 are shown as green, values equal to 0 as yellow and values less than 0 as red (with maybe some special handling for blanks too)? Here’s a sample table:

[Image: sample table with positive, zero, negative and blank Sales values]

(The Dummy column is only there to make sure the Oranges row, which contains a blank value, is visible in the table)

Here’s a set of rules that does what we need:

[Image: the set of rules for green, yellow and red formatting]

And here’s the output:

[Image: the formatted table]
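
As an aside, if you ever need more flexibility than the rules dialog offers, the same traffic-light logic can be implemented with the “Field value” conditional formatting option instead of rules: a measure returns a colour name (or hex code) and the background colour is bound to that measure. Here’s a minimal sketch of such a measure, assuming a hypothetical Sales table with a Sales column:

Sales Colour = 
SWITCH(
    TRUE(),
    ISBLANK(SUM(Sales[Sales])), "Grey",
    SUM(Sales[Sales]) > 0, "Green",
    SUM(Sales[Sales]) = 0, "Yellow",
    "Red"
)

To use it, choose “Field value” instead of “Rules” under “Format by” in the same dialog and select the measure.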

Q&A Buttons And Power BI Service Live Connections

A quick Q&A-related tip. If you create a report in Power BI Desktop and use a Power BI service live connection:

[Image: choosing a Power BI service live connection in Power BI Desktop]

…and then you put a Q&A button on the report, clicking on the button gives the following error message:

[Image: the Q&A error message]

Q&A isn’t supported for reports that use DirectQuery data sources, live connections to older Analysis Services data sources, or non-English language models

However, this message only means that Q&A buttons do not work in Power BI Desktop with Power BI service live connections. If you publish the report to the Power BI Service you will find that the Q&A button does work correctly.

[Image: the Q&A button working in the Power BI Service]

[Thanks to Seth Bauer and Aaron Meyers for the information]

DAX Median() Function Does Not Work On Tables With More Than 2 Billion Rows

An interesting – if obscure – fact I learned recently is that a small number of DAX functions such as Median() do not work on tables with more than 2 billion rows in Analysis Services Tabular, Azure AS and Power BI.

It’s quite easy to reproduce in Power BI. The following M expression returns a table with two billion and four rows – the four-value list {1,2,3,4} repeated 500,000,001 times:

let
    // Repeat the list {1,2,3,4} 500,000,001 times,
    // giving a list with 2,000,000,004 items
    Source = 
    List.Repeat(
        {1,2,3,4},
        500000001
        ),
    // Convert the list to a single-column table
    #"Converted to Table" = 
    Table.FromList(
        Source, 
        Splitter.SplitByNothing(), 
        null, 
        null, 
        ExtraValues.Error
        ),
    // Set the data type of Column1 to whole number
    #"Changed Type" = 
    Table.TransformColumnTypes(
        #"Converted to Table",
        {{"Column1", Int64.Type}}
        )
in
    #"Changed Type"

It takes some time to load this table – around twenty minutes – but because there are only four distinct values in the table, the resulting .pbix file is only 31KB thanks to the way Power BI compresses data.

If you load this table into your dataset, call it VeryBigTable and create the following measure:

Median Test = MEDIAN(VeryBigTable[Column1])

…and use the measure in a visual, you’ll see the following error:

[Image: the error message shown in the visual]

The current query cannot be evaluated for the ‘VeryBigTable (42)’ table, because the table contains more than two billion rows.

What’s more, the error will always occur even if you apply a filter to the table that returns fewer than two billion rows, as the sketch below illustrates. The same problem occurs with a few other functions, such as Percentile(), but it’s worth pointing out that the vast majority of DAX functions work as normal on tables with more than two billion rows – for example, in the pbix file used here the Sum() and CountRows() functions not only work fine but return instantly.
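
For example, the following measure (a hypothetical illustration, not something from the original tests) filters VeryBigTable down to the roughly 500 million rows where Column1 is 1 before calling Median(), and in line with the behaviour described above it still raises the same error:

Median Filtered Test = 
CALCULATE(
    MEDIAN(VeryBigTable[Column1]),
    VeryBigTable[Column1] = 1
)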

Luckily, in the case of the Median() function, there is an easy workaround because you can calculate a median in other ways, such as the one described on the DAX Patterns site here. The code is a lot more verbose but it works on a 2 billion+ row table; a sketch of a similar approach is shown below.
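
To give an idea of what such a workaround looks like, here’s a minimal sketch (written for this example rather than copied from DAX Patterns, so treat it as illustrative and untested at this scale) that finds the middle value or values with TOPN instead of calling Median():

Median Workaround = 
VAR RowsInTable = COUNTROWS(VeryBigTable)
VAR LowerPosition = INT((RowsInTable + 1) / 2)
VAR UpperPosition = INT((RowsInTable + 2) / 2)
// The LowerPosition-th smallest value: take the smallest LowerPosition rows, then their maximum
VAR LowerValue =
    MAXX(
        TOPN(LowerPosition, VeryBigTable, VeryBigTable[Column1], ASC),
        VeryBigTable[Column1]
    )
// The UpperPosition-th smallest value, found the same way
VAR UpperValue =
    MAXX(
        TOPN(UpperPosition, VeryBigTable, VeryBigTable[Column1], ASC),
        VeryBigTable[Column1]
    )
// With an odd number of rows the two positions coincide; with an even number
// this averages the two middle values
RETURN (LowerValue + UpperValue) / 2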

[Image: the workaround measure returning a result on the two-billion-row table]

Detailed Article On The Power Query Formula Firewall And Data Privacy Settings

While writing my Microsoft Business Applications Summit session on Power Query-related reasons for Power BI dataset refresh failures (which you can watch online here for free, along with all the other sessions), I remembered something important that I have been meaning to blog about for a while: an excellent article by Ehren von Lehe of the Power Query dev team that contains the first fully detailed explanation of how the Power Query Formula Firewall and data privacy settings actually work. Called “Behind the scenes of the Data Privacy Firewall”, it can be read on the Power Query MSDN forum here:

https://social.technet.microsoft.com/Forums/en-US/ca434e2d-88fe-4962-b46a-a1db51e8bd89/feedback-wanted-behind-the-scenes-of-the-data-privacy-firewall?forum=powerquery

It’s a must-read for any Power Query fan, and it answers a number of the questions I raised in my series of blog posts on the data privacy settings here.

Paginated Report Bear And The Future Of Analysis Services Multidimensional

“Who, or what, is Paginated Report Bear?” I hear you ask. Well, he’s the breakout social media star of 2019, a furry YouTube sensation whose incisive interviews of members of the Power BI development team have become renowned for their deep technical content and insights into the Power BI roadmap. If you’re not watching his videos on YouTube, you’re missing out. Guy In A Cube is so 2018.

[Photo: that’s me on the left and Paginated Report Bear on the right]

Anyway, one question I am asked all the time is what the future holds for Analysis Services Multidimensional. While there is no firm news on what’s happening here, two of Paginated Report Bear’s recent interviews have discussed this topic and are particularly revealing. If you’re an SSAS MD and MDX fan I strongly recommend you watch the interviews with Josh Caplan:

[Embedded video: interview with Josh Caplan]

…and Amir Netz:

[Embedded video: interview with Amir Netz]

I also had the honour of being interviewed by him at the Microsoft Business Applications Summit this week and we discussed this topic too, although I would like to stress that, unlike Amir and Josh, I have absolutely no influence on the decisions made in this area; in any case, my opinions on this topic might surprise some of you.
