Creating Sparklines And Small Multiples In Power BI Using The Google Image Charts API

The excellent work that David Eldersveld has been doing recently on using SVG images in Power BI has generated a lot of interest; now that the August 2018 release of Power BI Desktop allows you to set data categories on measures – and specifically the Image URL data category – David has been able to apply his work to the creation of sparklines and small multiples in Power BI. He’s already published an example in the Quick Measures gallery here; you might also want to check out this blog post and quick measure by Tom Martens too.

All this got me thinking: what if there was a simple web service that could generate charts, one that didn’t need any authentication, could work with simple GET requests and was free to use? Well guess what, there is one created by Google: the Google Image Charts API. It turns out that it makes it super easy to create sparklines and small multiples in Power BI with very little DAX needed.

Here are two examples of what’s possible with some simple DAX in a measure:

[Image: SparklineGoogle]

[Image: ChangeChartTypeGoogle]

Both examples use data from a table called Sales with the following contents:

[Image: the Sales table]

The first example, the Sales Sparkline measure, has the following definition:

Sales Sparkline =
// Build up the URL for the Google Image Charts API from its
// component query string parameters
var BaseURL =
    "https://chart.googleapis.com/chart?"
// cht=ls is the chart type: a sparkline-style line chart
var ChartType =
    "cht=ls"
// chds sets the minimum and maximum values for the y axis
var ChartScale =
    "&chds=0,10"
// chs sets the chart size in pixels (width x height)
var ChartSize =
    "&chs=150x150"
// chco sets the series colour (here, black)
var SeriesColour =
    "&chco=000000"
// chd=t: passes the data as a comma-delimited list of sales
// values in month order, assembled with CONCATENATEX
var ChartData =
    "&chd=t:" &
    CONCATENATEX(Sales, Sales[Sales], ",", Sales[Month])
return
    IF(
        HASONEVALUE(Sales[Product]),
        BaseURL & ChartType & ChartScale &
        ChartSize & SeriesColour & ChartData
    )
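
To make this more concrete: for a product whose sales in four consecutive months were 2, 4, 3 and 7 (made-up values, just for illustration), the measure would return a URL along these lines, which Power BI then renders as an image:

https://chart.googleapis.com/chart?cht=ls&chds=0,10&chs=150x150&chco=000000&chd=t:2,4,3,7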

All I’m doing is building up the URL to call the API from the parameters listed here. The second measure is basically the same as the first, but the chart type is driven by a selection in a slicer. It’s not bullet-proof code but it shows how easy it is to use the API.
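
For reference, here's a rough sketch of what that second measure might look like. The disconnected ChartTypes table and its Code column are hypothetical names of my own – the idea is that Code holds Image Charts chart type codes such as "ls" (line) or "bvs" (stacked bar) and is used as the slicer field:

Sales Chart =
// Read the chart type code from the slicer selection,
// defaulting to a sparkline-style line chart
var ChartType =
    "cht=" & SELECTEDVALUE(ChartTypes[Code], "ls")
var BaseURL =
    "https://chart.googleapis.com/chart?"
var ChartScale =
    "&chds=0,10"
var ChartSize =
    "&chs=150x150"
var SeriesColour =
    "&chco=000000"
var ChartData =
    "&chd=t:" &
    CONCATENATEX(Sales, Sales[Sales], ",", Sales[Month])
return
    IF(
        HASONEVALUE(Sales[Product]),
        BaseURL & ChartType & ChartScale &
        ChartSize & SeriesColour & ChartData
    )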

You can see the report live here, and download the sample .pbix file here.

I’m not going to pretend that I’m a data visualisation expert and I know these examples are very basic; however, I think there are a lot of possibilities here for someone with better data visualisation skills than me.

Now for the bad news: the Google Image Charts API has been deprecated since 2012! Here’s the statement on the Google website:

While the dynamic and interactive Google Charts are actively maintained, we officially deprecated the static Google Image Charts way back in 2012. This gives us the right to turn it off without notice, although we have no plans to do so.

Also, there’s a fair usage policy too:

There’s no limit to the number of calls per day you can make to the Google Chart API. However, we reserve the right to block any use that we regard as abusive.

As a result I would not be confident about using it in production. If thousands of Power BI users started using the API it might prompt Google to block traffic from Power BI or turn off the API completely!

I do think Microsoft really needs to address the lack of native support for sparklines and small multiples in Power BI – the fact we still don’t have them is frankly embarrassing (even Power View had them, for crying out loud). One solution could be a DAX function that returned a chart image – basically taking David’s work and wrapping it up in a more user-friendly form.

Another approach would be for Microsoft to build on the Report Page Tooltips functionality. Since we can already design mini reports and make them appear as tooltips, why not make it possible for the same mini reports to appear in a cell in a table or matrix? You could then have measures that returned the name of a report page as a text value, and a new data category for them that tells Power BI to display the contents of the report page in the cell instead of the text. That way you could create small multiples from any visual or set of visuals.

UPDATE: thanks also to Tom Martens for reminding me to point out that this technique involves sending data to a third party, something which might cause legal problems for you.

Colour Names Supported In Power BI Conditional Formatting

When I read about the new conditional formatting by field values feature in the August 2018 release of Power BI Desktop, my first thought was to write a blog post on how DAX can be used to generate the hex values for colours – but then Daniil Maslyuk wrote an excellent post on that (and more) yesterday. I then got curious about what colour names are supported when you use a text name rather than a hex code, and Amanda Cofsky of the dev team told me that the official list is the one here:

https://www.w3schools.com/cssref/css_colors.asp

…although there are other names that might work, assuming they don’t contain punctuation. I loaded that list into Power BI and built the following report:

[Image: report listing the supported colour names]

I’ve certainly learnt a few new colour names from this!
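
As a quick illustration of how these names can be used, here's a minimal sketch of a measure that returns a colour name for use with the new conditional formatting by field values option – it assumes the Sales table from the first post above and a purely arbitrary threshold:

Sales Colour =
// Returns a CSS colour name; point the "Field value"
// conditional formatting option at this measure
var CurrentSales = SUM(Sales[Sales])
return
    IF(CurrentSales > 5, "DarkGreen", "Crimson")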

Analysis Services Import Performance Improvements In The August 2018 Release Of Power BI

While I was playing around with the new release (August 2018) of Power BI Desktop I noticed an undocumented change: similar to the OData improvements I blogged about here, there is a new option in the AnalysisServices.Database() and AnalysisServices.Databases() M functions that turns on a newer version of the MDX generation layer used by the Power Query engine. As with the OData improvements, it's an option called Implementation="2.0", used like this:

AnalysisServices.Databases(
	"localhost", 
	[
		TypedMeasureColumns=true, 
		Implementation="2.0"
	]
)

…and also, as with the OData improvements, you will need to manually edit any existing M queries to take advantage of this.

In fact, I first heard about this option in a comment on this blog post back in January, but as I was told by the dev team that it hadn’t been tested properly I didn’t blog about it. However, as it is now set by default in the M code generated by the Power Query Editor, I guess it’s ready for production use. I’m told it should improve the performance of M queries that import data from Analysis Services – and I would be very interested to hear from anyone who tests this about how much of an improvement they see.

I’ve done a little bit of testing myself and I can see there are indeed some differences in the MDX generated. For example, I created an M query that connected to the Adventure Works DW database and returned all combinations of customer and day name where the Internet Sales Amount measure is greater than 100.
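
A rough sketch of what that M query might look like is below – the cube navigation steps and the Cube.Transform details are written from memory to illustrate the shape of the query, and the code the Power Query Editor actually generates may differ:

let
    Source = AnalysisServices.Databases(
        "localhost",
        [TypedMeasureColumns = true, Implementation = "2.0"]
    ),
    // Navigate down to the Adventure Works cube
    Database = Source{[Name = "Adventure Works DW"]}[Data],
    Model = Database{[Id = "Adventure Works"]}[Data],
    Cube = Model{[Id = "Adventure Works"]}[Data],
    // Return day name, customer and the Internet Sales Amount measure
    AddedColumns = Cube.Transform(
        Cube,
        {
            {Cube.AddAndExpandDimensionColumn, "[Date]",
                {"[Date].[Day Name].[Day Name]"}, {"Day Name"}},
            {Cube.AddAndExpandDimensionColumn, "[Customer]",
                {"[Customer].[Customer].[Customer]"}, {"Customer"}},
            {Cube.AddMeasureColumn, "Internet Sales Amount",
                "[Measures].[Internet Sales Amount]"}
        }
    ),
    // The filter that gets folded into the MDX shown below
    FilteredRows = Table.SelectRows(
        AddedColumns,
        each [Internet Sales Amount] > 100
    )
in
    FilteredRows

In the old version of the MDX generation layer (ie without Implementation="2.0" set) the following MDX query is generated: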

SELECT
    {
        [Measures].[Internet Sales Amount]
    }ON 0,
    SUBSET(
        NONEMPTY(
            CROSSJOIN(
                [Date].[Day Name].[Day Name].ALLMEMBERS,
                [Customer].[Customer].[Customer].ALLMEMBERS
            ),
            {
                [Measures].[Internet Sales Amount]
            }
        ),
        4096
    )
    PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON 1
FROM(
    SELECT
        FILTER(
            CROSSJOIN(
                [Date].[Day Name].[Day Name].ALLMEMBERS,
                [Customer].[Customer].[Customer].ALLMEMBERS
            ),
            (
                NOT((
                    ISEMPTY(
                        [Measures].[Internet Sales Amount]
                    )
                OR
                    ISEMPTY(
                        100
                    )
                ))
            AND
                ([Measures].[Internet Sales Amount] > 100)
            )
        )ON 0
    FROM [Adventure Works]
)CELL PROPERTIES VALUE

 

However, with the Implementation="2.0" option set, the following query is generated:

SELECT
    {
        [Measures].[Internet Sales Amount]
    }ON 0,
    SUBSET(
        NONEMPTY(
            FILTER(
                CROSSJOIN(
                    [Date].[Day Name].[Day Name].ALLMEMBERS,
                    [Customer].[Customer].[Customer].ALLMEMBERS
                ),
                (
                    NOT(ISEMPTY(
                        [Measures].[Internet Sales Amount]
                    ))
                AND
                    ([Measures].[Internet Sales Amount] > 100)
                )
            ),
            {
                [Measures].[Internet Sales Amount]
            }
        ),
        4096
    )
    PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON 1
FROM [Adventure Works] CELL PROPERTIES VALUE

The difference between the two is that the first query uses a subselect to do the filtering whereas the second does not; subselects in MDX are not necessarily bad, but I generally try to avoid using them unless I need to. There may be other differences in the way the MDX is generated in the new version but I haven’t had a chance to do any detailed testing.

Comparing The Performance Of CSV And Excel Data Sources In Power Query

My posts from two weeks ago (see here and here) on using Process Monitor to troubleshoot the performance of Power Query queries made me wonder about another question: how does the performance of reading data from CSV files compare to that of reading data from Excel files? I think most experienced Power Query users in either Power BI or Excel know that Excel data sources perform pretty badly, but I had never done any proper tests. I’m not going to pretend that this post is a definitive answer to this question (and once again, I would be interested to hear your experiences) but hopefully it will be thought-provoking.

To start off, I took the 153.6MB CSV file used in my last few posts and built a simple query that applied a filter on one text column, then removed all but three columns. The query ran in 9 seconds and using the technique from my last post I was able to draw the following graph from a Process Monitor log file showing how Power Query reads data from the file:

[Image: graph of data read from the CSV file during refresh]
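
For reference, the query might have looked something like this – a sketch with a hypothetical file path, column names and filter value, since the original code isn't shown (the Excel version of the query would simply swap the Source step for one that uses Excel.Workbook):

let
    // Read the contents of the 153.6MB csv file
    Source = Csv.Document(
        File.Contents("C:\Data\LargeFile.csv"),
        [Delimiter = ",", Encoding = 65001]
    ),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Apply a filter on one text column...
    FilteredRows = Table.SelectRows(
        PromotedHeaders,
        each [TextColumn] = "SomeValue"
    ),
    // ...then remove all but three columns
    RemovedOtherColumns = Table.SelectColumns(
        FilteredRows,
        {"TextColumn", "Column2", "Column3"}
    )
in
    RemovedOtherColumns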

Nothing very surprising in that graph. Then I opened the same CSV file in Excel and saved the data as an xlsx file with one worksheet and no tables or named ranges; the resulting file was only 80.6MB. Finally I created a duplicate of the first query and pointed it to the Excel file instead. The resulting query ran in 59 seconds – around six times slower! Here’s a comparison of the performance of this query with that of the first query:

[Image: comparison of data read from the Excel file and the CSV file]

The black line in the graph above shows the amount of data read from the Excel file (strictly speaking, it shows the offset values indicating where in the file data is read from, which amounts to the same thing as a running total when Power Query is reading all the data); the green line shows the amount of data read from the CSV file (the same data shown in the first graph above). A few things to mention:

  • Running Process Monitor while this second query was refreshing had a noticeable impact on its performance – in fact the query was almost 20 seconds slower.
  • The initial values of around 80 million bytes seem to be data read from the end of the Excel file – maybe this is Power Query reading some file metadata? In any case, it seems to take 5 seconds before it starts to read the data needed by the query.
  • There’s a plateau between the 10 and 20 second marks where not much is happening; this didn’t happen consistently and may have been connected to the fact that Process Monitor was running.

In any case you don’t need to study this too hard to understand that the performance of reading data from an xlsx format Excel file is terrible compared to the performance of reading data from a CSV. So, if you have a choice between these two formats, for example when downloading data, it seems fair to say that you should always choose CSV over xlsx.

Announcing V2 Of The “Planning A Power BI Enterprise Deployment” White Paper

It’s my pleasure to announce that version 2 of the white paper that I co-wrote with Melissa Coates, “Planning A Power BI Enterprise Deployment”, is now live and can be downloaded from here.

The title should give you a pretty good idea of the subject matter: the aim is to help anyone who intends to roll out Power BI within their organisation understand the different components of Power BI, the ways it can be deployed, how the licensing works, how it can connect to different types of data source, how it should be monitored and administered, how security works and so on. In fact, at 188 pages, it’s more of a book than a white paper, and it is substantially expanded, updated and improved compared to the previous version. One of the aims that Melissa and I had when we started on this new version was to clarify certain topics that we know everyone struggles with, such as what the differences between the various Premium tiers are, what embedding is and what licenses are necessary to do it, and how sharing and collaboration should be managed. There’s also a certain amount of new information about how Power BI will be changing in the next few months (you should also read the newly-published October ’18 release notes for Power BI and data integration if you haven’t done so already). I certainly learned a lot myself while writing it.

I would like to say a particular thank you to my co-author Melissa who was, once again, a pleasure to work with. She is extremely knowledgeable (especially about things like the new Azure data-related services that I know very little about), a great writer, very good at designing tables and charts (which I’m awful at), very reliable and above all very patient. I am also incredibly grateful to Adam Wilson and Lukasz Pawlowski from Microsoft who spent many long hours on calls with us explaining how things work in Power BI and what’s coming in the pipeline, and also to Meagan Longoria and the large number of other Microsoft employees who reviewed it.

I hope you enjoy reading it!

Using Process Monitor To Find Out How Much Data Power Query Reads From A File

This post is really just a quick follow-on from my post earlier this week on using Process Monitor to troubleshoot Power Query performance issues with file-based data sources, which I suggest you read before carrying on. I realised, after playing around with Process Monitor some more, that the ReadFile operation actually tells you how much data is being read from a file when a Power Query query is running. For example, here’s a sample of some of the ReadFile operations captured while running the unoptimised version of the query I talked about in my last post:

[Image: sample ReadFile operations captured in Process Monitor]

Since Process Monitor can export captured events to a CSV file, it’s pretty easy to load the events into Power BI, filter them down to only the ReadFile operations, parse the Detail column to extract the Offset values (which I’m sure you can work out how to do if you’re reading a post like this), and then draw a graph showing how much data gets read from a file when a query is run. Here’s what the graph looks like for the unoptimised version of the query from my previous blog post, with relative time on the x axis and the amount of data read, in bytes, on the y axis:

[Image: graph of cumulative data read from the file]
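
If you want to try this yourself, extracting the Offset value from the Detail column can be done with a few steps along these lines – a sketch, since the exact text in the Detail column (something like "Offset: 1,048,576, Length: 32,768") may vary:

let
    // Load the CSV file exported from Process Monitor
    Source = Csv.Document(
        File.Contents("C:\Temp\ProcMonLog.csv"),
        [Delimiter = ","]
    ),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Keep only the ReadFile operations
    ReadFileOnly = Table.SelectRows(PromotedHeaders, each [Operation] = "ReadFile"),
    // Take the text between "Offset: " and ", Length", strip the
    // thousands separators and convert the result to a number
    AddedOffset = Table.AddColumn(
        ReadFileOnly,
        "Offset",
        each Number.FromText(
            Text.Remove(
                Text.BetweenDelimiters([Detail], "Offset: ", ", Length"),
                ","
            )
        ),
        type number
    )
in
    AddedOffset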

In that post I noted that there were six reads of the file – and while that’s clear from the graph above, it’s also possible to see that the first read does not read the whole contents of the file while the next five do (the file is 149MB). So maybe I was right that there is one complete read of the file for each row in the output query? What is that first, partial read for, I wonder?

Using Process Monitor To Troubleshoot Power Query Performance Issues With File Data Sources

Troubleshooting Power Query performance issues in Power BI and Excel can be difficult because it’s a bit of a black box: there’s nothing in the UI to tell you what’s going on inside the Power Query engine and the diagnostic logs are very difficult to interpret. With relational data sources like SQL Server you can use tools like SQL Server Profiler to see the queries that are being run by Power Query, and I blogged recently about using Fiddler to troubleshoot OData performance issues; but what about file-based data sources, which often present the biggest performance challenges?

Process Monitor, a free tool from Microsoft, allows you to monitor file system activity in real time, and even having spent only a limited amount of time using it I can already tell that it can provide a lot of information to help identify performance issues with file-based data sources. Take, for example, the scenario I described in my recent post on improving the performance of merge operations. In that post (which I suggest you read before you carry on) I mentioned that it looked as though the Power Query engine was reading data from one of the source files multiple times, and Process Monitor confirms that this is indeed the case.

Disclaimer: I am not going to pretend to be an expert on Process Monitor, the Windows file system or the internals of Power Query. The rest of this post contains the results of my experiments and the conclusions I have drawn from them, which may or may not be correct!

There are a lot of resources online describing Process Monitor (I also took a look at this book, which is very helpful, as is the help file) and it’s also fairly intuitive, so I’m not going to attempt to explain the basics of how to use it, but with a bit of trial-and-error I set up a simple filter consisting of:

  • Operation is CreateFile
  • Operation is CloseFile
  • Path is <the full path of the main csv file used as the data source>

[Image: the Process Monitor filter]

…to watch what happened when the unoptimised query refreshed:

[Image: Process Monitor output while the unoptimised query refreshed]

Some things to note about the output shown above:

  • I had the “Enable data preview to download in the background” option turned off, just to make sure I only saw activity relating to the refresh.
  • For the Power Query engine, it seems as though the Process Name is Microsoft.Mashup.Container.NetFX40.exe.
  • There are obvious pairs of CreateFile/CloseFile operations.
  • Some of these pairs, where the detail for the CreateFile operation is “Desired Access: Read Attributes”, seem to be for reading file metadata and are very fast.
  • The pairs where the CreateFile operation has a detail of “Desired Access: Generic Read” seem to be where the contents of the file are read (the first of these pairs is highlighted in the screenshot). The Relative Time column shows the time elapsed since Process Monitor started capturing events, and the difference between the Relative Time values for the CreateFile and CloseFile operations in these pairs is around 5-10 seconds.
  • It looks like the Power Query engine reads some or all of the data from the csv file six times in total. This tallies roughly with the values shown in the Power BI UI for the amount of data read from disk, although it does not tally with my guess that it was reading the data from the file once for each of the five rows in the output query.
  • The difference in the Relative Time values of the first and last events is around 45 seconds; the query itself takes around 55 seconds to run.

With the optimisation described in the blog post (ie doing a “Remove Duplicates” on one of the columns) in place, Process Monitor shows the following instead:

[Image: Process Monitor output for the optimised query]

As you can see, it looks like Power Query now only reads from the file twice and the difference in time between the first and last event is now only around 8 seconds.
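
For context, the optimisation from that post boils down to one extra step on the query used as the lookup table in the merge – something like the following sketch, with hypothetical file and column names:

let
    Source = Csv.Document(
        File.Contents("C:\Data\SecondFile.csv"),
        [Delimiter = ","]
    ),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Remove duplicates on the join column before this query is
    // used in a merge; with this step in place Process Monitor
    // showed far fewer reads of the file
    RemovedDuplicates = Table.Distinct(PromotedHeaders, {"JoinColumn"})
in
    RemovedDuplicates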

This is clearly useful: we can now see when Power Query opens a file, how often it opens a file, and how long it holds the file open for each time. It also raises a number of other questions which I can’t answer properly yet:

  • Why does Power Query need to open a file multiple times? Actually this isn’t a surprise to anyone who has spent a lot of time with Power Query and has a vague understanding of how it works, but it seems fair to say that the unoptimised query in this example was slow because Power Query had to open and read from the file multiple times.
  • What is Power Query doing exactly when it has the file open? Is it reading some or all of the data in the file? What is it doing for the rest of the time?
  • Are some file formats inherently slower than others as far as Power Query is concerned? It seems so: for example, Excel files seem to be a lot slower than csv files.

So yet more research is required… but still very interesting, don’t you think? If I’ve made any mistakes here, or if you use Process Monitor to see what happens when you run your queries and you find something else useful, please leave a comment below.
