Analysing Dataset Refresh In Power BI Premium Using SQL Server Profiler

A few weeks ago I showed how XMLA Endpoints allow you to connect SQL Server Profiler to Power BI Premium. As well as looking at query execution times, this also means you can see in more detail what happens when a dataset is refreshed: for example, you can find out exactly how long a refresh took, which tables inside the dataset contribute most to refresh times, and which calculated columns or calculated tables take the longest to create. Remember that refresh performance in Power BI Desktop may be different from refresh performance in the Power BI Service, where you're running on different hardware, may be going through an on-premises gateway to reach your data sources, and where other operations may be consuming resources while the refresh is taking place.

To do this you need to create a trace in the way I describe in that previous post and include (at least) the following two pairs of trace events:

  • Command Begin/End
  • Progress Report Begin/End

Once the trace is running and you start a refresh, you’ll see a lot of events generated:

Trace

What does this all mean though? Generally speaking, any books or blog posts that talk about processing in Analysis Services Tabular will also apply to refresh in Power BI since the two engines are very similar, although there are many options available in Analysis Services that are not (yet) available for Power BI. For example, my colleague Richard Tkachuk has just written a blog post on Analysis Services processing here that is a good introduction to the subject; chapters 11, 12 and 15 of Marco and Alberto's book "Tabular Modeling in Microsoft SQL Server Analysis Services" provide a lot more detail. If you're interested in the performance of your Power Query/M queries this post of mine from last year is worth reading too.

It looks a lot more complex than it actually is though. The columns in the trace you should focus on are the ones shown in the screenshot above: EventClass, EventSubclass, TextData and Duration. In EventClass you’ll see pairs of Command Begin/End and Progress Report Begin/End events; EventSubclass and TextData give you more information about each event; and Duration will give you the amount of time taken for each event (it only appears for the “end” events in the pairs) in milliseconds. A refresh is represented by a single Command, so when you’re looking for the events in your trace that relate to a refresh you’ll need to look for a pair of Command Begin/End events with the word “Refresh” visible in amongst a lot of XML in the TextData column and with a lot of Progress Report Begin/End events in between:

Refresh

The Command End event of this pair will give you the time taken for the refresh in the Duration column:

CommandEnd

Finding the amount of time needed to refresh individual tables is not so straightforward because (as Richard Tkachuk describes in his blog post mentioned above) refreshing a table consists of a number of different jobs. The easiest thing to do is to look for a Progress Report End event with EventSubclass 59 where the TextData column starts with “Finished processing partition…” followed by the name of your table:

Table

The Duration column for this event will give you the amount of time taken for the majority of work needed to refresh a table.

There is, of course, a lot more that you can see in these traces and a lot more detail you can get by adding other events, but I’ll stop here for now. If you find anything interesting while you’re looking at traces of your own please let me know by leaving a comment!

Power BI, HTTP/2, Internet Explorer 11 And Older Versions Of Windows

A very quick performance tip: do you have users consuming Power BI reports with Internet Explorer 11 and an older Windows OS like Windows 8.1 or Windows 7? If so, their reports are likely to be slower because, over the last few months, some important performance optimisations that rely on the HTTP/2 protocol have been made in Power BI; and while modern browsers like Edge and Chrome have supported HTTP/2 for a long time now, Internet Explorer 11 only supports HTTP/2 on Windows 10. If you are using Internet Explorer 11 you can check whether HTTP/2 is enabled (it should be by default) by going to Internet Options, clicking on the Advanced tab and seeing if the "Use HTTP2" option is checked:

Internet Options

Making One Power BI Measure Appear In Multiple Folders

Back in 2018, when I wrote a detailed post on how to create nested display folders for measures in Power BI, I mentioned that unlike in Analysis Services it was not possible to make a Power BI measure appear in two or more folders simultaneously. The other day on Twitter Deepak Agrawal pointed out that at some point since I wrote that post the situation has changed, so here’s a quick post showing how it now works.

Say you have a simple Power BI dataset with a table, a column and a measure called My Measure:

Initial State

If you go to the Model pane in Power BI Desktop and click on the measure so that the Properties pane appears, and then in the Display folder property enter a list of folder names separated by semi-colons, such as:

First Folder;Second Folder

After State

…you’ll see that all the folders in your list are visible in the Fields pane (if they weren’t there already) and your measure appears inside each of the folders. There’s still only one measure – it has not been duplicated – it’s just that it appears in more than one folder. This can be very useful if you have lots of measures that need to be organised into folders in several different ways.

Bonus observation: I hadn’t noticed this before, but you can also drag and drop measures into different folders in the Fields pane. I suspect that’s been possible for ages but I never tried it before.

Visualising Power BI Premium And Azure Analysis Services Query Parallelism

In my last post I showed how to connect SQL Server Profiler up to a Power BI Premium dataset but I didn’t give you any examples of why this might be useful. In this post I’ll show you how you can use a Profiler trace to visualise all the queries run by a Power BI report, see when they start to run, see which ones run in parallel with each other and see what the overall time taken to run all the queries is.

Why is this important? When you're tuning the performance of a Power BI report the first thing to do is to look at the performance of the individual DAX queries run and make them run as fast as possible. However, when a Power BI report is rendered, any one query is likely to be running at the same time as several other queries for the same report, and this will have an impact on its performance. How much of an impact there is will depend on how many queries need to be run and the number of back-end v-cores available on your Premium capacity, or the number of QPUs available on your Azure Analysis Services instance if you're using a Live connection to AAS. The more v-cores/QPUs you have available, the more of this work can be done in parallel; you can see a table listing the number of v-cores for each Premium SKU here, and the number of QPUs for each Azure Analysis Services SKU here. As a result, if you have reports with a large number of visuals that generate slow DAX queries, scaling up your Power BI Premium capacity or AAS instance may improve overall report performance. Reducing the number of visuals on your report (for example by displaying the same information in fewer visuals) will also reduce the number of queries that need to be run and therefore improve overall performance.

As I showed last week, SQL Server Profiler can be used to create a trace that logs all the queries run against a Power BI Premium dataset in the same way as it can be used with Azure Analysis Services. Assuming that you have a trace running that uses only the Query End event, this will give you a list of all the queries that are being run along with their start time, end time, duration and a lot of other interesting information. A table containing all this data can still be difficult to interpret, though, so I built a Power BI template for a report that visualises all these queries and helps you understand the amount of parallelism that is taking place. You can download the template file here.

To use it, first you need a trace file. Make sure that no-one else is running reports on the Premium capacity you want to test (creating a Power BI Embedded capacity for testing purposes is a good idea) and then, when the trace is running, refresh your report using the technique I described in the “Use the network tab” section of this blog post. This will also allow you to correlate what you see in the trace with the information you see in the DevTools tab in the browser.

Then save the trace file you have created to XML by going to File/Save As/Trace XML File:

SaveToXML

Next, open the Power BI template file and when prompted, enter the full path of the trace XML file you just created:

TemplateOpening

A new Power BI report will then be created. If you want to point the report to a different trace XML file all you need to do is change the value of the TraceXMLFile Power Query parameter.

On the first page you’ll see the name of the trace XML file you connected to plus a bar chart showing each Query End event (with each query identified by a number) on the y axis and the duration of each query on the x axis:

Waterfall

It’s not quite a simple bar chart though. What I’ve done is:

  • Found the start time of the first query run
  • Calculated the start time of every other query in the file relative to this first start time (although, unfortunately, Profiler only gives you start times rounded to the nearest second which means you can’t know exactly when a query starts)
  • Created a stacked bar chart where the first value in the stack is this relative start time and the second value is the duration of the query in seconds (there's an M sketch of this calculation after this list)
  • Made the colour of the relative start time transparent, so you only see the blue sections of the bar for the query durations. This gives you a waterfall-like effect and allows you to see which queries are run in parallel. This also makes it easy to see the total amount of time taken to run your queries, from the start of the first query to the end of the last query, which is just as useful to know as the duration of any single query.
  • There’s also a drillthrough page so you can right-click on a bar and see a table with the DAX query for the query you clicked on, as well as its start time and duration.

It’s a very basic report, I know, and I would be interested to know if you have any ideas about other ways of visualising this data. What’s more, a visual like this raises more questions than I know how to answer… yet. For example, one thing I want to investigate is the effect that query interleaving has on this graph and both perceived and actual report performance. So stay tuned for more blog posts on this subject!


Connecting SQL Server Profiler To Power BI Premium

Back in December, when I wrote a series of posts on testing the performance of Power BI reports in the browser, I mentioned that it was important to test in the browser because some aspects of a report's performance may be different there compared to in Power BI Desktop. Following on from this, if you're testing the performance of a report in the browser you are also going to want to take a closer look at the DAX queries generated by your report, even if it is just to check that what you see there is the same as what you see in Performance Analyzer in Power BI Desktop. If your report uses a Live Connection to Analysis Services this is easy to do using SQL Server Profiler, Azure Analysis Services' diagnostic logging feature or XEvents. If you're using a dataset stored in Power BI we have a range of options for monitoring what's going on, including Usage Metrics and the Premium Capacity Metrics app, and of course there's also DAX Studio; but for an old-school guy like me, connecting to a Power BI Premium workspace using SQL Server Profiler is a great way to get detailed information about what's going on when queries run. In this blog post I'll show you how to connect Profiler to Power BI Premium.

First you’ll need to install the latest version of SQL Server Management Studio, which includes SQL Server Profiler – you can get it here. If you have an older version installed you’ll probably need to upgrade. And before anyone leaves a comment about Profiler being deprecated, let me point you to the note on this page:

Note

The feature that allows you to connect Profiler to a Power BI Premium workspace is XMLA Endpoints: it's in preview right now, but basically it allows you to connect any tool that works with Azure Analysis Services to Power BI Premium. SQL Server Profiler expects to connect to an instance of Analysis Services; XMLA Endpoints mean that you can connect it to a dataset in a Power BI Premium workspace instead, but for this to happen you need to know the URL for Profiler to connect to. You can find this by going to your workspace and clicking on Settings:

Workspace Settings

and then going to the Premium tab and copying the Workspace Connection string:

Workspace Connection
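
The Workspace Connection is a URL that looks something like this (the workspace name at the end is just a placeholder):

```
powerbi://api.powerbi.com/v1.0/myorg/My Workspace Name
```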

You can then open up Profiler, go to the File menu and select New Trace and a connection dialog will appear:

ProfilerConnection1

In this dialog:

  • Set the Server type to Analysis Services
  • In Server name paste the Workspace Connection that you copied from Power BI earlier
  • In Authentication select Azure Active Directory – Universal with MFA and enter your username

Next, click the Options button, go to the Connection Properties tab and in the Connect to database dropdown select <Browse server…>:

ProfilerConnection2

Click Yes on the dialog that appears and then choose the name of the dataset in your workspace that you want to connect to in the Connect to database dropdown. If you don’t do this you’ll get errors later on.

Next you’ll see the Properties dialog:

ProfilerProperties1

The default template is Blank, which means no events are selected to monitor; to select events go to the Events Selection tab:

ProfilerProperties2

…select the events you want, and click Run to start tracing.

Trace

Which events should you choose? That’s a big topic and not one that I have time to go into here, but the Query End event is perhaps the one I look at most – it’s fired every time a query finishes executing and gives you a lot of important information such as the start time, end time and duration of the query in milliseconds. Books such as “The Definitive Guide To DAX” have a lot of information on using Profiler with Analysis Services and a lot of that information is relevant to Power BI Premium too. In future blog posts I dare say I’ll show you some interesting things you can do using Profiler and Power BI too…

Adding Your Own Messages To Power Query Query Diagnostics

A quick point: while the Power Query Query Diagnostics functionality is relatively new, it’s based on Power BI/Power Query trace logging that has been around for a while. I’ve just realised that this means you can use the Diagnostics.Trace M function that I blogged about back in 2016 to add your own messages to the output of Query Diagnostics. Using the example query from that blog post, here’s what you’ll see in the detailed Query Diagnostics output query when that query is run (I’ve removed all but the important columns to make it easier to read):

TraceOutput

The amount of detail you get with Query Diagnostics can be overwhelming; inserting your own messages should make it a lot easier to work out what is happening, when and how often – especially in scenarios where the query name and step name aren’t shown.
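
If you haven't used Diagnostics.Trace before, here's a minimal sketch of how you can wrap a step in it (the message and values here are made up for illustration; my 2016 post linked above has the full details):

```
let
    Source = {1 .. 10},
    // Wrap the expression you want to trace in Diagnostics.Trace; the message
    // then appears in the detailed Query Diagnostics output when it fires
    Output = Diagnostics.Trace(
        TraceLevel.Information,
        "My custom message: about to sum the list",
        () => List.Sum(Source),
        true // delayed = true, so the function is only evaluated when needed
    )
in
    Output
```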

Make Your Power BI Report Run Faster By Showing The Same Data In Fewer Visuals

Do you have a Power BI report that has a row of card visuals on it, something like this?

Cards


It’s a very common thing to do, but if your report is too slow to render you may find that a design like this is part of the problem. Now look at the following report which contains a single matrix that has been configured to look as un-matrix-like as possible but shows the same data:

Matrix

This is likely to perform faster, but why?

Before we go any further, I don’t want you to go and change your reports if you’re not going to get any benefit from doing so. Use Performance Analyzer (as shown here) to determine which visuals on your report are the cause of slow performance – there’s no point redesigning visuals that are fast anyway.

As a general rule, the more visuals you put on a report page the slower it's going to get. It's logical if you think about it: the more visuals there are, the more queries have to be run against your dataset and the more work Power BI has to do to render the report. I know there is a tendency to try to pack as much information onto a page as possible, and this often happens when someone else has designed the report you're trying to build, but you should always try to resist this. Splitting a single large page into multiple smaller pages, using slicers or filters to reduce the amount of data shown at any one time and avoiding gigantic Excel-like tables are all good ideas.

Once you've done that you need to see whether you can reduce the number of visuals needed to display the same amount of information: in the example above, the four separate cards show the same data as the single matrix. In part this is because the cards generate four separate DAX queries against the dataset to get the data they need, whereas the matrix only generates one DAX query. There is a certain overhead to running a DAX query, so reducing the number of DAX queries needed to get the same amount of data is a good thing. More importantly, in this example Power BI can get the four values required much more efficiently in the single DAX query generated by the matrix than it can in the four separate DAX queries needed by the cards. This is because of something called "DAX Fusion", which my colleague Phil Seamark blogged about in great detail here, and this can make a significant difference to performance (Marco Russo's post here on why Analyze In Excel reports may be slower than Power BI reports also contains some useful information on DAX Fusion).
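
To make that difference concrete, here's a rough illustration (the measure names are hypothetical, and the real queries Power BI generates are more complex than this):

```
-- Each card visual fires off its own single-value query, something like:
EVALUATE ROW ( "Sales Amount", [Sales Amount] )

-- ...whereas a matrix can fetch all four values in one query:
EVALUATE
SUMMARIZECOLUMNS (
    "Sales Amount", [Sales Amount],
    "Total Cost", [Total Cost],
    "Margin", [Margin],
    "Order Count", [Order Count]
)
```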

This is only a simple example; if you want to see a really sophisticated demonstration of replacing several visuals with a matrix I suggest you watch this video by another colleague of mine, Miguel Myers. And of course it doesn’t just apply to cards: any time you can replace multiple visuals with a single visual (remember to be careful using custom visuals though) you’re likely to gain some performance.

An M Function To Help You Explore Power Query Diagnostics Data

Last week's post showed an M function that took Power Query diagnostics data and formatted it in a way that made it suitable for visualisation in a Power BI Decomposition Tree visual. This is great for understanding what's going on at a high level, but by doing this you also lose a lot of the detailed information from the diagnostics logs that could be useful for performance tuning. This week I have an M function for you that takes a different approach to solving the same problem, one that might be more appealing for the hardcore Power Query fans out there.

Here’s the code for the function:

As before you need to pass a query created by the Power Query diagnostics functionality to the function:

InvokeFunction

[Note: I recommend calling the function on the “detailed” diagnostics query and not the summarised view, because I’ve found what might be a bug with the summarised view]

The function will then return a table with one row for each query that is evaluated while the diagnostics were running:

QueryList

[Note: You may find that some query evaluations have nulls instead of the name of the query; that’s an issue with the diagnostics data that will be addressed soon]

You could get the same thing simply by filtering the raw diagnostics data down to the rows where the Path column contains 0, but I’ve added two extra things that make the data much easier to understand.

First of all, the original data contains columns called "Exclusive Duration" and "Exclusive Duration (%)", which tell you the amount of time each operation took in seconds and what percentage this was of the entire query evaluation. However, the data itself is in a parent/child structure, so while a parent operation may itself have a very small duration it may have many slow descendant operations. Therefore my function adds two new columns called "Exclusive Duration (Including Child Operations)" and "Exclusive Duration (%)(Including Child Operations)". The first of these new columns returns the total duration of each operation and all its child operations, while the second returns the percentage that the duration of each operation and all its child operations make up of the whole query duration. At the root level, this means the first of these new columns shows the total duration of each query in seconds and the second should always show 100%.

Durations
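
The idea behind these two new columns can be sketched in M like this (a simplified illustration, not the function's actual code; it assumes the Path column holds "/"-separated text values like "0/1/2"):

```
// Sketch: add an inclusive duration by summing each operation's
// Exclusive Duration together with that of all its descendants
(Diagnostics as table) as table =>
let
    AddInclusive = Table.AddColumn(
        Diagnostics,
        "Exclusive Duration (Including Child Operations)",
        (parent) => List.Sum(
            Table.SelectRows(
                Diagnostics,
                // a descendant's Path starts with its ancestor's Path
                each [Path] = parent[Path]
                    or Text.StartsWith([Path], parent[Path] & "/")
            )[#"Exclusive Duration"]
        ),
        type number
    )
in
    AddInclusive
```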

Secondly, the function adds a new column called Child Rows that contains a nested table containing the child operations associated with each operation:

ChildRows

You can either click on the Table link in this column to navigate to a new table containing all the child operations, or use the Expand/Aggregate button in the top right-hand corner to keep the current operations and join the nested rows onto them (I strongly recommend you do the former). Experienced Power Query developers will be familiar with the concept of nested tables, and I think this provides an easy way of navigating the parent/child operation structure: it's similar to what you see with a Decomposition Tree, but it keeps all the detail columns that the diagnostics data provides and means you don't have to leave the Power Query Editor.

I would love to hear your feedback about this and know if there are any bugs or other information that could be added. You can download a sample pbix file for this function here.


Visualising Power Query Diagnostics Data In A Power BI Decomposition Tree

Recently I’ve been working with the Power Query team to come up with some ways to help developers understand the data returned by the new Power Query diagnostics functionality. In this, the first of two posts, I’m going to share a Power Query function that reformats diagnostics data in a way that makes it easy to visualise using the Power BI Decomposition Tree visual.

Here’s the code for the function:

Here’s how to use it:

  1. Copy the code above into a new blank query in the Power Query Editor window, and give the query a meaningful name like DiagnosticsForDecompTree
  2. Pass in the name of a query created by using the Query Diagnostics functionality (you'll be able to choose any query in your pbix file using the dropdown box) and invoke the function
  3. Load the output of the new query that gets created by invoking the function into your Power BI dataset
  4. Add a new Decomposition Tree visual to a report page
  5. Drag the Exclusive Duration column into the Analyze well of the visual
  6. Drag all the columns whose names start with "Level" (i.e. "Level.1", "Level.2" and so on) into the Explain By well of the visual
  7. Add a filter or slicer on the Query column to filter the data shown in the visual down to a single query evaluation.

The output will look something like this:

DecompTree

What does this function do? The original output of a diagnostics query has one row for each event; each event may have one or more child events, and the Path column shows the details of this parent/child structure. The function takes this structure and expands it out into multiple columns, so for each event you see all of its ancestor events in separate columns on the same row. Since each event's duration only gives you the time taken for it to run and doesn't include the time taken by its child events, when you break down an event in the Decomposition Tree you'll see some child events with the name (Blank) – these give you the duration of just the parent event and make sure everything adds up properly. This function should work for both the detailed diagnostics output as well as the summary output.
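
The core of this flattening can be sketched in M as follows (again, an illustration rather than the function's actual code; it assumes every row has a non-null, "/"-separated text value in the Path column):

```
// Sketch: split the Path column into one "Level.n" column per level of
// the parent/child hierarchy, ready for the Explain By well
(Diagnostics as table) as table =>
let
    // how deep does the deepest path go?
    MaxDepth = List.Max(
        List.Transform(Diagnostics[Path], each List.Count(Text.Split(_, "/")))
    ),
    // generate the column names Level.1, Level.2, ..., Level.MaxDepth
    LevelNames = List.Transform({1 .. MaxDepth}, each "Level." & Text.From(_)),
    SplitPath = Table.SplitColumn(
        Diagnostics, "Path", Splitter.SplitTextByDelimiter("/"), LevelNames
    )
in
    SplitPath
```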

You can download a sample pbix file showing an example of how to use it here.

This is only a first draft of the function, so I’d really appreciate it if you could try this with your own diagnostics data and check if it works properly and also if it produces useful output. Please leave your feedback in a comment either here or on the Github Gist. There are almost certainly going to be bugs!

Of course the real question is how you interpret what you see in the Decomposition Tree and how you can use it to improve the performance of your queries. That, my friends, is a journey we're on together, but I think that being able to visualise diagnostics data easily is an important first step towards understanding what's going on inside the Power Query engine when a query is evaluated.

[Thanks to Jorge Gomez Basanta, Colin Popell and Matt Masson for their help with this code]


Sending Power BI Alerts To Users Via Teams Using Power Automate And Adaptive Cards

A few months ago my esteemed colleague Patrick LeBlanc published a great video showing how you could use Power Automate (formerly known as Microsoft Flow) to send an email when a Power BI alert is triggered. This post is going to cover almost the same ground, except that I’m going to show you how to send an alert to a user via Teams and instead of dynamically generating text for the message I’m going to use an Adaptive Card to format the alert data.

Although Adaptive Cards have been around for a while now, I’m slightly embarrassed to admit that I only learned about them a few weeks ago when I read this blog post. Full details about Adaptive Cards can be found at https://adaptivecards.io/; for those of you too lazy to click the link, here’s the official description and a short explainer video:

Adaptive Cards are platform-agnostic snippets of UI, authored in JSON, that apps and services can openly exchange. When delivered to a specific app, the JSON is transformed into native UI that automatically adapts to its surroundings. It helps design and integrate light-weight UI for all major platforms and frameworks.


There’s documentation on how Adaptive Cards can be used in Power Automate here and a comprehensive video here:


When I first saw them I knew that they would be a perfect way to send Power BI alerts to people. Here's a simple example showing how to use them for this purpose.

First of all, I set up a simple Power BI report and dashboard containing a card visual with a value in and created an alert on it, just as Patrick shows in his video. Then I went to the Adaptive Cards designer at https://adaptivecards.io/designer/ and created an Adaptive Card to display alert data:

AdaptiveCardsDesigner
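
An Adaptive Card definition is just a block of JSON; a minimal card for alert data might look something like this (a hypothetical example, not the exact card shown above):

```
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.2",
  "body": [
    { "type": "TextBlock", "size": "Large", "weight": "Bolder",
      "text": "Power BI Alert" },
    { "type": "TextBlock", "wrap": true,
      "text": "Your alert title and details go here" },
    { "type": "FactSet", "facts": [
      { "title": "Current value", "value": "123" },
      { "title": "Threshold", "value": "100" }
    ] }
  ]
}
```

In Power Automate the hard-coded text values can then be swapped out for dynamic content from the alert trigger, which is what I do below.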

Then I created a simple Flow in Power Automate with the “When a data driven alert is triggered” trigger (again, just as Patrick shows) but then used the “Post your own adaptive card as the Flow bot to a user” action:

Flow1

I pasted the JSON definition of the Adaptive Card I created in the designer earlier into the message of this action, and made it dynamic:

Flow2

…and having done that, once the alert was triggered in Power BI I got a message in Teams from the Flow bot with the alert data shown in my Adaptive Card:

AdaptiveCardTeams

My design skills are very limited and this is only my first attempt at creating an Adaptive Card, but hopefully you can see what is possible. You could also use a different data source, such as the “Transform data using Power Query” action (similar to what I showed here), instead of using a Power BI alert to supply the data.

Even better, as the blog post I referred to earlier explains, Power Automate now has a preview feature that allows you to wait for a response from an Adaptive Card that collects user input as well as displays data. Imagine a scenario where:

  • A Power BI alert is triggered because the number of widgets in your warehouse is getting dangerously low
  • This alert triggers a Flow that sends a message as an Adaptive Card to a user via Teams which not only displays the current stock level but also asks the user how many widgets to order
  • The Flow waits for a response and, when it gets one, goes away and orders that number of widgets

Hmm, this sounds like a good subject for a future blog post…

