A few weeks ago I wrote a blog post about how you can now link event data from Workspace Monitoring to data in the Fabric Capacity Metrics App using OperationId values. In the latest (3.4.0) release of DAX Studio there’s a new feature that you might have missed that links queries run from there to Workspace Monitoring and the Capacity Metrics App in the same way.
I connected DAX Studio to a published Power BI semantic model, turned on Server Timings and ran a DAX query. When the query finished I clicked the Info button at the top of the Server Timings pane, which opened the Query Information dialog shown below, and then I copied the Request ID value from there:
The Request ID is the same thing as the OperationId value in Workspace Monitoring and the Capacity Metrics app. I was then able to use this value in a KQL query in Workspace Monitoring like so:
SemanticModelLogs
| where Timestamp > ago(1h)
| where toupper(OperationId) == "5D4A4F47-370D-4634-B67B-E4B58CB067A8"
| project Timestamp, OperationName, OperationDetailName, EventText, Status, StatusCode
| order by Timestamp asc
[Note that you need to convert the OperationId value in Workspace Monitoring to uppercase, as the query above does with the toupper() function, to match the Request ID from DAX Studio]
What’s the value of doing this? Probably not much for an individual query because the information you see in Workspace Monitoring is exactly the same as you get in Server Timings in DAX Studio, and DAX Studio shows this information in a way that’s much easier to understand. If you’re testing a number of DAX queries, though, then having all this data available in Workspace Monitoring means you can do detailed comparisons of different runs using KQL.
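For example, here’s a minimal sketch of a KQL query that compares several test runs side by side – the list of Request IDs is hypothetical, and you’d paste in the values you copied from DAX Studio:
//Hypothetical list of Request IDs copied from DAX Studio after each test run
let TestRunIds = dynamic(["5D4A4F47-370D-4634-B67B-E4B58CB067A8", "1B2C3D4E-5F60-4718-9A2B-3C4D5E6F7081"]);
SemanticModelLogs
| where Timestamp > ago(1d)
| where OperationName == "QueryEnd"
//Convert to uppercase to match the Request IDs from DAX Studio
| where toupper(OperationId) in (TestRunIds)
| project OperationId, Timestamp, DurationMs, CpuTimeMs
| order by Timestamp asc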
What is really useful is being able to find the CU usage on a capacity of a DAX query run from DAX Studio – something that is very important when tuning DAX queries. While DAX Studio (and Workspace Monitoring) will give you a CPU Time value for a DAX query, for reasons I won’t go into here you won’t be able to reverse engineer the algorithm that converts CPU Time to CUs. However, by linking the Request ID from DAX Studio to the OperationIds seen on the Timepoint Detail page and the new Timepoint Item Detail page in newer versions of the Capacity Metrics app (as discussed here), you don’t need to care about that. You can simply take the Request ID from DAX Studio and find it in the Capacity Metrics app. For example, on the Timepoint Item Detail (preview) page you can either find the value in the OperationId slicer or add the OperationId column to the lower table using the Select Optional Column(s) slicer and look for it there:
Since I wrote about the Snowflake query tags generated by Power BI earlier this year, one important piece of functionality has been added: an OperationId that allows you to link a SQL query in Snowflake to events in Workspace Monitoring and the Capacity Metrics app. Let’s see some examples.
I created a DirectQuery semantic model connected to Snowflake in a workspace with Workspace Monitoring enabled and ran a report. On the Snowflake monitoring page, I clicked on a SQL query that I knew had been generated by Power BI and found the query tag:
The ActivityId in the query tag is the OperationId of the query, and as you’ll know if you have read my recent posts, you can use the OperationId to get more details on the DAX query that generated this SQL query in Workspace Monitoring. Here’s the KQL I used to query Workspace Monitoring:
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationId == "377da3e0-900c-474f-aca2-c6bb6cd4d5a6"
| project Timestamp, OperationName, EventText, DurationMs
And here are the results, showing the events associated with this DAX query:
Once you’ve found the events associated with a DAX query in Workspace Monitoring you can then get all kinds of other useful information, such as the IDs of the report and visual that generated the DAX query, which are found in the ApplicationContext column in the SemanticModelLogs table.
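As a minimal sketch (assuming the ApplicationContext column contains JSON with a Sources array holding ReportId and VisualId properties), you could extract those IDs like this:
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationId == "377da3e0-900c-474f-aca2-c6bb6cd4d5a6"
| where OperationName == "QueryEnd"
| extend app = parse_json(ApplicationContext)
//Pull out the IDs of the report and visual that generated the DAX query
| extend ReportId = tostring(app.Sources[0].ReportId), VisualId = tostring(app.Sources[0].VisualId)
| project Timestamp, ReportId, VisualId, EventText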
Here’s more information about this DAX query from the new Timepoint Detail (Preview) page in the Capacity Metrics App, with the OperationId column showing the same ID from the query tag (see here for more details on this feature of the Capacity Metrics App):
This trick also works for Import mode refreshes. Here’s the query tag from Snowflake for a SQL query generated by Power BI during the refresh of an Import mode semantic model:
Again, you can use this ID to query Workspace Monitoring to get all the activity associated with the refresh operation:
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationId == "8f552c8e-4f7c-4376-b663-198f7f310d09"
| project Timestamp, OperationName, EventText, DurationMs
And again you can find the OperationId for the refresh in the Timepoint Detail (Preview) page of the Capacity Metrics App (you can search for an ID in the Operation ID slicer on this page too):
This is going to make it a lot easier to troubleshoot slow DirectQuery reports or Import mode refreshes when you’re using Snowflake as a source.
[Thanks to my colleague Thierry Houy for this information]
The key to using Log Analytics and Workspace Monitoring to troubleshoot throttling is the Execution Metrics event. Queries and semantic model refreshes always have an Execution Metrics event associated with them, and you can make the connection between all events for a specific operation using the OperationId column in Workspace Monitoring (as mentioned in my last post) or the XmlaRequestId column in Log Analytics – these columns contain the same information; the column name was simply changed in Workspace Monitoring. When a query or a refresh is delayed because of throttling, the Execution Metrics event will include a metric called capacityThrottlingMs, which gives the total amount of time in ms that the query or refresh was delayed before it was allowed to execute.
The documentation for Execution Metrics contains a sample KQL query for Log Analytics that uses Execution Metrics to return aggregated data on throttling but doesn’t explain how it works. Here’s a version of that query rewritten for Workspace Monitoring that returns the individual query and refresh operations that were throttled in the last five days:
let
EventsLast5Days =
SemanticModelLogs
| where Timestamp > ago(5d)
| where OperationName in ("CommandEnd", "QueryEnd", "DiscoverEnd", "ExecutionMetrics");
let
ExecutionMetricsEvents =
EventsLast5Days
| where OperationName == "ExecutionMetrics"
| project ExecutionMetrics = EventText, OperationId
| extend EM = parse_json(ExecutionMetrics)
| extend capacityThrottlingMs = toint(EM.capacityThrottlingMs)
| project-away EM, ExecutionMetrics
| where capacityThrottlingMs > 0;
let
QueryDiscoverRefreshEvents =
EventsLast5Days
| where OperationName in ("CommandEnd", "QueryEnd", "DiscoverEnd")
| project Timestamp, OperationName, OperationDetailName, QueryText = EventText, DurationMs, CpuTimeMs, OperationId, ExecutingUser;
ExecutionMetricsEvents
| join kind=inner QueryDiscoverRefreshEvents on OperationId
| project-away OperationId1
Here’s how this query works:
The EventsLast5Days variable gets all the events in the last 5 days which have an OperationName of either CommandEnd (which is fired when a refresh finishes), QueryEnd (which is fired when a DAX or MDX query finishes), DiscoverEnd (which is fired when a Discover command – used to retrieve metadata from the Power BI engine – finishes) or ExecutionMetrics.
The ExecutionMetricsEvents variable takes the EventsLast5Days variable and filters it down to just the ExecutionMetrics event. It then parses the JSON value containing the metrics in the EventText column, extracts the capacityThrottlingMs metric if it is present, and filters the events down to just those where capacityThrottlingMs is greater than 0.
The QueryDiscoverRefreshEvents variable takes the EventsLast5Days variable and filters it down to just the CommandEnd, DiscoverEnd and QueryEnd events.
Finally there is an inner join between the ExecutionMetricsEvents variable and the QueryDiscoverRefreshEvents variable on the OperationId column, which results in a table of the CommandEnd, DiscoverEnd and QueryEnd events where throttling was present.
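If all you want is an aggregated view of throttling like the one the documentation’s sample query returns, a minimal sketch (assuming a daily total is enough for your purposes) might look like this:
SemanticModelLogs
| where Timestamp > ago(5d)
| where OperationName == "ExecutionMetrics"
//The metrics are stored as JSON in the EventText column
| extend EM = parse_json(EventText)
| extend capacityThrottlingMs = toint(EM.capacityThrottlingMs)
| where capacityThrottlingMs > 0
//Total throttling delay and number of throttled operations per day
| summarize TotalThrottlingMs = sum(capacityThrottlingMs), ThrottledOperations = count() by bin(Timestamp, 1d)
| order by Timestamp asc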
A couple of important things to point out:
The duration of an event in Workspace Monitoring, given in the DurationMs column, does not include any delays added as a result of capacity throttling. As a result, you can have queries that take 0ms but which were delayed by 20 seconds – something you might not realise if you only look at the DurationMs column (see the sketch after this list).
Having the DAX queries that are run when you display a Power BI report delayed is bad enough, but throttling delays are also applied to Discover commands, and Power BI reports (and indeed every client tool) will run a number of these before they run the queries they need to render a report. This means that the impact of throttling on report performance can be a lot greater than you might expect.
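As a minimal sketch, the end of the query above could be extended with an extra column showing the combined wait and execution time:
ExecutionMetricsEvents
| join kind=inner QueryDiscoverRefreshEvents on OperationId
| project-away OperationId1
//DurationMs excludes the throttling delay, so add it back to see how long the user really waited
| extend TotalWaitMs = DurationMs + capacityThrottlingMs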
The new Timepoint Item Detail page in the Capacity Metrics App, in preview at the time of writing, includes a slicer on the OperationId column. This means that you can do the opposite of what I showed in my previous blog post and filter data in the Capacity Metrics App using an OperationId found from a query in Workspace Monitoring. This would allow you to find the percentage of the available CUs on a capacity that a given query or refresh used.
It’s the week of Fabcon Europe and you’re about to be overwhelmed with new Fabric feature announcements. However, there is a new blink-and-you’ll-miss-it feature in the latest version of the Fabric Capacity Metrics App (released on 11th September 2025, version 47) that won’t get any fanfare but which I think is incredibly useful – it allows you to link the Power BI operations (such as queries or refreshes) you see in the Capacity Metrics App back to Workspace Monitoring, Log Analytics or Profiler so you can get details such as the query text.
Let’s say you’re in the Capacity Metrics App on the existing Timepoint Detail page. In the top right-hand corner of both the “Interactive operations for time range” and “Background operations for time range” tables there is a dropdown box that allows you to display additional columns. This box now contains an option to display the OperationId column:
After you’ve added this column you’ll see it contains a GUID:
There is also, incidentally, a new page in preview called Timepoint Item Detail (preview) which is reached through the new Timepoint Summary (preview) page and which will eventually replace the Timepoint Detail page. If you haven’t seen this you should check it out: I think it’s a big improvement. This also has a dropdown box that allows you to show the OperationId column in its versions of the “Interactive operations for time range” and “Background operations for time range” tables.
This page also has a dropdown box at the top that allows you to filter operations by OperationId.
The OperationId is a unique identifier for each Power BI operation. Right-clicking on this value and selecting Copy/Copy value to copy it:
…means that you can cross-reference this value with the log data you find in Workspace Monitoring, Log Analytics or Profiler. For example, I have a workspace with Workspace Monitoring enabled and I found the following OperationId in the Capacity Metrics App: a7a2d4d4-2a9b-4535-b65a-a0cc0389d821. The following KQL query run on Workspace Monitoring:
let
OperationIdFromCapacityMetrics = "a7a2d4d4-2a9b-4535-b65a-a0cc0389d821";
SemanticModelLogs
| where OperationId == OperationIdFromCapacityMetrics
| where OperationName == "QueryEnd"
| project Timestamp, ItemName, EventText, DurationMs, OperationDetailName
| order by Timestamp asc
…returns the query text for the DAX query associated with this interactive operation:
There’s a lot of other information you can get by writing KQL queries from Workspace Monitoring (for some examples see here) such as the IDs of the visual and the report that generated the query. If you’re using Log Analytics or Profiler there is no OperationId column – it’s called XmlaRequestId in Log Analytics and RequestId in Profiler – but the same information is available there too.
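For example, a minimal sketch of the equivalent lookup in Log Analytics (assuming the PowerBIDatasetsWorkspace table that the Log Analytics integration populates) might be:
PowerBIDatasetsWorkspace
//XmlaRequestId in Log Analytics contains the same value as OperationId in Workspace Monitoring
| where XmlaRequestId == "a7a2d4d4-2a9b-4535-b65a-a0cc0389d821"
| where OperationName == "QueryEnd"
| project TimeGenerated, EventText, DurationMs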
This is very useful for admins trying to identify why a capacity is overloaded: it means that you can now see the details of expensive queries or refresh operations and understand why they are causing problems. Make sure you upgrade your Capacity Metrics App to the latest version and enable Workspace Monitoring on all important workspaces so you can do this!
[Thanks to Tim Bindas and Lukasz Pawlowski for letting me know about this]
Over the last few years one topic I have blogged about several times is how to link the detailed data about DAX query execution that can be found in Log Analytics – and now Workspace Monitoring – to the visual in a Power BI report that generated those DAX queries, something that is extremely useful when you’re performance tuning Power BI reports. My first post here from 2021 showed how to write the KQL for Log Analytics, but the problem has always been how to get the IDs of the visuals in a Power BI report. You can get the IDs from the definition of the Power BI report, as shown here, and Sandeep Pawar has a great post on some other methods here, but all these methods were superseded in the March release of Power BI with the ability to copy the IDs by right-clicking on the visual in a Power BI report when editing it (thank you Rui Romano!).
This made me realise that it’s time to revisit my first post on how to get the query details in KQL since the column names in Workspace Monitoring are slightly different from Log Analytics and, indeed, the KQL needed can be simplified from my original version. Here’s a KQL query that you can run in a KQL Queryset connected to your Monitoring Eventhouse:
let VisualId = "InsertVisualIdHere";
SemanticModelLogs
| search VisualId
| project Timestamp, OperationName, OperationDetailName, DurationMs, EventText, OperationId
| order by Timestamp asc
Once you’ve copied the visual’s ID from the report (remember you need to specifically enable this feature and that you need to be in Edit mode) by right-clicking on it and selecting “Copy object name”:
…then you just need to paste the ID into the let statement in the first line of the KQL query and run it:
You can find the documentation for the columns in the SemanticModelLogs table here and the documentation for the events here. The events for a specific query all have the same value in the OperationId column.
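If you’d rather not scan every column with the search operator, a minimal sketch of a more targeted version (assuming the visual’s ID appears in the ApplicationContext column) would be:
let VisualId = "InsertVisualIdHere";
SemanticModelLogs
//Only match the ID in the ApplicationContext column rather than searching all columns
| where ApplicationContext has VisualId
| project Timestamp, OperationName, OperationDetailName, DurationMs, EventText, OperationId
| order by Timestamp asc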
Bonus fact: you can now run queries against Workspace Monitoring using Semantic Link Labs, as documented here, which makes it much easier to do other fun stuff with this data. For example, I can imagine there are ways to visualise DAX query and semantic model refresh events in Python that would make them much easier to analyse, but that’s something for a future blog post.
I’m a big fan of Phil Seamark’s “visualise your refresh” pbix file, which allows you to analyse the different operations inside an Import mode semantic model refresh (similar functionality has now been built into Semantic Link Labs too). In particular I like how it allows you to visualise operations inside a refresh as a Gantt chart, which makes it easy to see how much parallelism you’re getting when you refresh – and as I discussed here, the amount of parallelism can have a massive effect on how long a refresh takes. However, Gantt charts are just one way to visualise this data and after one of my colleagues showed me the KQL range() function I realised there was a different way to approach this problem using KQL and Fabric Workspace Monitoring.
Here’s the KQL query I came up with to find the number of partitions that are refreshing in parallel at any given point in time when a semantic model is refreshed:
SemanticModelLogs
//Filter by Timestamp for performance
| where Timestamp > ago(1h)
//Filter by the OperationId of the refresh
| where OperationId == "insert OperationId of refresh here"
//Only get the events fired when partition refresh finishes
| where OperationDetailName == "Process" and OperationName == "ProgressReportEnd"
//Find the start time of the partition refresh
| extend StartTime = datetime_add("Millisecond", -1 * DurationMs, Timestamp)
//Create a list of all the seconds where the refresh was in progress
| extend StartSecond = range(bin(StartTime,1s), bin(Timestamp,1s), 1s)
| mv-expand StartSecond
//Aggregate the data and get the number of partitions being refreshed in each one-second bin
| summarize PartitionsBeingRefreshed=count() by todatetime(StartSecond)
//Sort in ascending order by one-second bins
| order by StartSecond asc
What this query does is:
Filters the events in the SemanticModelLogs table to just those for the refresh whose OperationId you enter
Filters again so you only get the ProgressReportEnd events that are fired when a partition refresh finishes
Calculates the start time of each partition refresh by taking the finish time and subtracting the duration
Creates a table with one row for each second that each partition refresh was in progress (so a partition refresh that started at 10:00:00 and finished at 10:00:03 produces four rows, one per one-second bin)
Aggregates this table so you get the number of partition refresh operations that were in progress in each one-second bin
All events associated with a single semantic model refresh can be identified by the value in the OperationId column in the SemanticModelLogs table in Workspace Monitoring. To find it, just run a KQL query on the SemanticModelLogs table to get all the events from the time period when your refresh took place; it will be easy to spot the events associated with the refresh by looking at the contents of the EventText column, so all you need to do is copy the value from the OperationId column from one of these events.
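A minimal sketch of that lookup, assuming the refresh happened in the last hour, might be:
SemanticModelLogs
| where Timestamp > ago(1h)
//Scan recent events, spot the refresh in the EventText column, then copy its OperationId
| project Timestamp, OperationName, OperationDetailName, EventText, OperationId
| order by Timestamp asc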
I refreshed a semantic model with six dimension tables and one fact table containing ten partitions and here’s what the query above showed when visualised as a line chart in a KQL Queryset:
As you can see, at first four partitions are refreshing in parallel; this quickly drops and then rises, and for the first half of the refresh six partitions are refreshing in parallel. At about the two-thirds point this drops to four partitions refreshing in parallel and as they complete the line chart drops to one. Since six partitions refreshing in parallel is the maximum I allowed for this refresh you can see it’s reasonably efficient.
To get more detail on what is being refreshed, here’s a variation of the KQL query above that doesn’t perform the final aggregation:
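A minimal sketch of that variation (assuming the name of the object being refreshed can be taken from the EventText column of the ProgressReportEnd event) might look like this:
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationId == "insert OperationId of refresh here"
| where OperationDetailName == "Process" and OperationName == "ProgressReportEnd"
| extend StartTime = datetime_add("Millisecond", -1 * DurationMs, Timestamp)
| extend StartSecond = range(bin(StartTime,1s), bin(Timestamp,1s), 1s)
| mv-expand StartSecond
//One row per partition per one-second bin instead of an aggregated count
| project PartitionName = EventText, StartSecond = todatetime(StartSecond)
| order by StartSecond asc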
This data can then be used in a Power BI report and visualised as a stacked bar chart:
The advantage of this is that you can see which partitions are refreshing when: in this case you can see that it was the partitions from the dimension tables that refreshed first, followed by the partitions from the fact table.
If you’re not using Fabric Workspace Monitoring it should be easy to adapt the queries above to work with Power BI Log Analytics integration instead – it’s just a matter of changing a few column names.
There’s one problem I can think of with this approach: if there are multiple small partitions being refreshed that take under a second, there’s a chance that you’ll see inflated parallelism numbers in the results at some points, because a partition refresh that takes only a few hundred milliseconds still occupies a whole one-second bin. I don’t think that’s a major problem though and overall I think this is quite a useful way to understand how much parallelism you’re getting during a refresh.
[Thanks to Matthew Farrow for the inspiration here – check out his excellent, detailed series of articles on LinkedIn starting here about understanding Fabric capacities and billing]
I’ve always been a big fan of using Log Analytics to analyse Power BI engine activity (I’ve blogged about it many times) and so, naturally, I was very happy when the public preview of Fabric Workspace Monitoring was announced – it gives you everything you get from Log Analytics and more, all from the comfort of your own Fabric workspace. Apart from my blog there are lots of example KQL queries out there that you can use with Log Analytics and Workspace Monitoring, for example in this repo or Sandeep Pawar’s recent post. However what is new with Workspace Monitoring is that if you store these queries in a KQL Queryset you can create alerts in Activator, so when something important happens you can be notified of it.
What type of things should you, as an admin, be notified of? I can think of lots, but to demonstrate what’s possible let’s take the example of DAX query errors. I created a simple semantic model and in it I created the following measure:
Measure Returning Error = ERROR("Demo Error!")
It uses the DAX ERROR() function, so every time a report visual contains this measure it will return a custom error:
This makes it very easy to generate errors for testing; in the real world the kind of errors you would want to look out for are ones to do with broken DAX measures (maybe referring to other measures or columns that no longer exist) or ones where the query memory limit has been exceeded.
I published this semantic model to a workspace with Workspace Monitoring enabled and then created a report and deliberately created visuals that generated this error.
I then created a KQL Queryset and used the following KQL query to get all the Error events in the last hour for this exact error:
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationName == "Error"
//filter for exactly the custom error number generated by my DAX
| where StatusCode == "-1053163166"
| extend app = tostring(parse_json(ApplicationContext))
| project
Timestamp,
EventText,
ExecutingUser,
modelid = extract_json("$.DatasetId", app),
reportId = extract_json("$.Sources[0].ReportId", app),
visualId = extract_json("$.Sources[0].VisualId", app),
consumptionMethod = extract_json("$.Sources[0].HostProperties.ConsumptionMethod", app)
A few things to note about this query:
The filter on StatusCode allows me to only return the errors generated by the Error function – different errors will have different error numbers but there is no single place where these error numbers are documented, unfortunately.
The last half of the query parses the ApplicationContext column to get the IDs of the semantic model, report and visual that generated the error (something I blogged about here) and where the user was, for example the Power BI web application, when the error occurred (something I blogged about here).
Finally, I created an Activator alert from this KQL Queryset by clicking the “Set alert” button shown in the top right-hand corner of the screenshot above to send me an email every time this error occurred, checking (by running the KQL query) every hour:
I customised the contents of the email alert inside Activator:
And sure enough, after an hour, I started getting email alerts for each error:
The great thing about the combination of Workspace Monitoring, KQL and Activator is the flexibility you get. For example, generating one alert per error would probably result in too many alerts to keep track of; instead you could write your KQL query to aggregate the data and only get one alert per user and error type, or error type and visual. As more and more sources of data are added to Workspace Monitoring and more functionality is added – see the public roadmap for details – being a Fabric admin will get easier and easier.
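To illustrate that aggregation idea, here’s a minimal sketch of a version of the error query above that would produce one row per user and error code:
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationName == "Error"
//One row per user and error code rather than one row per individual error
| summarize ErrorCount = count(), LastSeen = max(Timestamp) by ExecutingUser, StatusCode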