Linking Queries Run From DAX Studio To Workspace Monitoring And The Capacity Metrics App

A few weeks ago I wrote a blog post about how you can now link event data from Workspace Monitoring to data in the Fabric Capacity Metrics App using OperationId values. The latest (3.4.0) release of DAX Studio includes a new feature that you might have missed: it links queries run from DAX Studio to Workspace Monitoring and the Capacity Metrics App in the same way.

I connected DAX Studio to a published Power BI semantic model, turned on Server Timings and ran a DAX query. When the query finished I clicked the Info button at the top of the Server Timings pane, which opened the Query Information dialog shown below, and then I copied the Request ID value from there:

The Request ID is the same thing as the OperationId value in Workspace Monitoring and the Capacity Metrics app. I was then able to use this value in a KQL query in Workspace Monitoring like so:

SemanticModelLogs
| where Timestamp > ago(1h)
| where toupper(OperationId) == "5D4A4F47-370D-4634-B67B-E4B58CB067A8"
| project Timestamp, OperationName, OperationDetailName, EventText, Status, StatusCode
| order by Timestamp asc

[Note that you need to convert the OperationId value in Workspace Monitoring to uppercase to match it to the Request ID from DAX Studio]
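If you're matching these IDs in a script rather than in KQL, you don't need to worry about casing at all: Python's uuid module (an illustrative sketch, nothing to do with DAX Studio itself) parses both representations to the same value.

```python
import uuid

# Request ID as shown in DAX Studio's Query Information dialog (uppercase)
dax_studio_request_id = "5D4A4F47-370D-4634-B67B-E4B58CB067A8"
# OperationId as stored in Workspace Monitoring (lowercase)
workspace_monitoring_operation_id = "5d4a4f47-370d-4634-b67b-e4b58cb067a8"

# uuid.UUID normalises both strings, so the comparison is case-insensitive
match = uuid.UUID(dax_studio_request_id) == uuid.UUID(workspace_monitoring_operation_id)
print(match)  # True
```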

What’s the value of doing this? Probably not much for an individual query because the information you see in Workspace Monitoring is exactly the same as you get in Server Timings in DAX Studio, and DAX Studio shows this information in a way that’s much easier to understand. If you’re testing a number of DAX queries, though, then having all this data available in Workspace Monitoring means you can do detailed comparisons of different runs using KQL.

What is really useful is being able to find the CU usage on a capacity for a DAX query run from DAX Studio – something that is very important when tuning DAX queries. While DAX Studio (and Workspace Monitoring) will give you a CPU Time value for a DAX query, for reasons I won't go into here you won't be able to reverse-engineer the algorithm that converts CPU Time to CUs. However, by linking the Request ID from DAX Studio to the OperationIds seen on the Timepoint Detail page and the new Timepoint Item Detail page in newer versions of the Capacity Metrics app (as discussed here), you don't need to care about that. You can simply take the Request ID from DAX Studio and find it in the Capacity Metrics app. For example, on the Timepoint Item Detail (preview) page you can either find the value in the OperationId slicer or add the OperationId column to the lower table using the Select Optional Column(s) slicer and look for it there:

Snowflake Query Tags In Power BI And Workspace Monitoring

Since I wrote about the Snowflake query tags generated by Power BI earlier this year, one important piece of functionality has been added: an OperationId that allows you to link a SQL query in Snowflake to events in Workspace Monitoring and the Capacity Metrics app. Let's see some examples.

I created a DirectQuery semantic model connected to Snowflake in a workspace with Workspace Monitoring enabled and ran a report. Looking in the Snowflake monitoring page I clicked on a SQL query that I knew had been generated by Power BI and found the query tag:

Here are the contents of the query tag:

{"PowerQuery":true,"Host":"PBI_SemanticModel_MWC","HostContext":"PowerBIPremium-DirectQuery","ActivityId":"377da3e0-900c-474f-aca2-c6bb6cd4d5a6"}
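Since the query tag is just JSON, it's easy to pull the ActivityId out programmatically, for example if you're scripting against Snowflake's query history. Here's a minimal Python sketch using the tag above:

```python
import json

# Query tag exactly as captured from Snowflake's monitoring page
query_tag = (
    '{"PowerQuery":true,"Host":"PBI_SemanticModel_MWC",'
    '"HostContext":"PowerBIPremium-DirectQuery",'
    '"ActivityId":"377da3e0-900c-474f-aca2-c6bb6cd4d5a6"}'
)

tag = json.loads(query_tag)
# The ActivityId is the OperationId to search for in Workspace Monitoring
operation_id = tag["ActivityId"]
print(operation_id)  # 377da3e0-900c-474f-aca2-c6bb6cd4d5a6
```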

The ActivityId in the query tag is the OperationId of the query, and as you’ll know if you have read my recent posts, you can use the OperationId to get more details on the DAX query that generated this SQL query in Workspace Monitoring. Here’s the KQL I used to query Workspace Monitoring:

SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationId == "377da3e0-900c-474f-aca2-c6bb6cd4d5a6"
| project Timestamp, OperationName, EventText, DurationMs

And here are the results, showing the events associated with this DAX query:

Once you’ve found the events associated with a DAX query in Workspace Monitoring you can then get all kinds of other useful information, such as the IDs of the report and visual that generated the DAX query which are found in the ApplicationContext column in the SemanticModelLogs table.
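The ApplicationContext value is itself JSON. The shape below – a DatasetId plus a Sources array containing ReportId and VisualId – is an assumption based on what these logs typically contain, and the IDs are made up, so verify against the real column contents in your own SemanticModelLogs table before relying on it:

```python
import json

# Hypothetical ApplicationContext value: the field names and IDs below are
# assumptions for illustration, so check your own logs for the exact schema
application_context = (
    '{"DatasetId":"00000000-0000-0000-0000-000000000000",'
    '"Sources":[{"ReportId":"11111111-1111-1111-1111-111111111111",'
    '"VisualId":"22222222222222222222"}]}'
)

ctx = json.loads(application_context)
report_id = ctx["Sources"][0]["ReportId"]
visual_id = ctx["Sources"][0]["VisualId"]
print(report_id, visual_id)
```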

Here’s more information about this DAX query from the new Timepoint Detail (Preview) page in the Capacity Metrics App, with the OperationId column showing the same ID from the query tag (see here for more details on this feature of the Capacity Metrics App):

This trick also works for Import mode refreshes. Here’s the query tag from Snowflake for a SQL query generated by Power BI during the refresh of an Import mode semantic model:

{"PowerQuery":true,"Host":"PBI_SemanticModel_MWC","HostContext":"PowerBIPremium-Import","ActivityId":"8f552c8e-4f7c-4376-b663-198f7f310d09"}
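One handy consequence: because the HostContext differs between the two scenarios ("PowerBIPremium-DirectQuery" versus "PowerBIPremium-Import"), you can tell refresh-generated SQL queries apart from DirectQuery ones just by parsing the tag. A small Python sketch using the two tags shown in this post:

```python
import json

# The two query tags captured from Snowflake earlier in this post
tags = [
    '{"PowerQuery":true,"Host":"PBI_SemanticModel_MWC","HostContext":"PowerBIPremium-DirectQuery","ActivityId":"377da3e0-900c-474f-aca2-c6bb6cd4d5a6"}',
    '{"PowerQuery":true,"Host":"PBI_SemanticModel_MWC","HostContext":"PowerBIPremium-Import","ActivityId":"8f552c8e-4f7c-4376-b663-198f7f310d09"}',
]

# Map each ActivityId (OperationId) to its workload type,
# taken from the part of HostContext after the hyphen
workloads = {
    json.loads(t)["ActivityId"]: json.loads(t)["HostContext"].split("-")[-1]
    for t in tags
}
print(workloads)
```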

Again, you can use this ID to query Workspace Monitoring to get all the activity associated with the refresh operation:

SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationId == "8f552c8e-4f7c-4376-b663-198f7f310d09"
| project Timestamp, OperationName, EventText, DurationMs

And again you can find the OperationId for the refresh in the Timepoint Detail (Preview) page of the Capacity Metrics App (you can search for an ID in the Operation ID slicer on this page too):

This is going to make it a lot easier to troubleshoot slow DirectQuery reports or Import mode refreshes when you’re using Snowflake as a source.

[Thanks to my colleague Thierry Houy for this information]

Linking Fabric Warehouse SQL Queries And Spark Jobs To The Capacity Metrics App

Following on from my post two weeks ago about how to get the details of Power BI operations seen in the Capacity Metrics App using the OperationId column on the Timepoint Detail page, I thought it was important to point out that you can do the same thing with TSQL queries against a Fabric Warehouse/SQL Endpoint and with Spark jobs. These two areas of Fabric are outside my area of expertise so please excuse any mistakes or simplifications, but I know a lot of you are Fabric capacity admins so I hope you’ll find this useful.

First of all, how do you find the details of TSQL queries run against a Fabric Warehouse or SQL Endpoint that you see in the Capacity Metrics App? This is actually documented here, but let’s see a simple example. For TSQL queries run on a Fabric Warehouse or SQL Endpoint, the contents of the OperationId column represent the Distributed Statement Id of a query. In the Timepoint Detail page, in the Background Operations table with the OperationId column selected in the Optional Columns dropdown, you can take a single SQL Endpoint Query operation and copy the OperationId value (in this case 5BE63832-C0C7-457D-943B-C44FD49E5145):

…and then paste it into a TSQL query against the queryinsights.exec_requests_history DMV like so:

SELECT distributed_statement_id, start_time, command
FROM queryinsights.exec_requests_history 
WHERE distributed_statement_id = '5BE63832-C0C7-457D-943B-C44FD49E5145';

…and you can get the actual SQL query that was run plus lots of other useful information:

For Spark jobs, the OperationId of an operation in the Capacity Metrics App represents the Livy Id of the job. Here’s an operation of type Notebook Run seen in the Background Operations table on the Timepoint Detail page:

In this case the OperationId value of 41c5dc84-534d-4d21-b3fd-7640705df092 of the job matches the Livy Id of the job seen on the Run Details tab in the Fabric Monitoring Hub:

Unfortunately at the time of writing other Fabric workloads do not yet emit an OperationId or, if they do, may not emit an OperationId that can be linked back to other monitoring data. But as always, if things change or I find out more, I’ll let you know.

How To Get The Details Of Power BI Operations Seen In The Capacity Metrics App

It’s the week of Fabcon Europe and you’re about to be overwhelmed with new Fabric feature announcements. However, there is a new blink-and-you’ll-miss-it feature that appeared in the latest version of the Fabric Capacity Metrics App (released on 11th September 2025, version 47) that won’t get any fanfare but which I think is incredibly useful – it allows you to link the Power BI operations (such as queries or refreshes) you see in the Capacity Metrics App back to Workspace Monitoring, Log Analytics or Profiler so you can get details such as the query text.

Let’s say you’re in the Capacity Metrics App in the existing Timepoint Detail page. On the top right hand corner of both the “Interactive operations for time range” and “Background operations for time range” tables there is a dropdown box that allows you to display additional columns. This box now contains an option to display the OperationId column:

After you’ve added this column you’ll see it contains a GUID:

There is also, incidentally, a new page in preview called Timepoint Item Detail (preview) which is reached through the new Timepoint Summary (preview) page and which will eventually replace the Timepoint Detail page. If you haven’t seen this you should check it out: I think it’s a big improvement. This also has a dropdown box that allows you to show the OperationId column in its versions of the “Interactive operations for time range” and “Background operations for time range” tables.

This page also has a dropdown box at the top that allows you to filter operations by OperationId.

The OperationId is a unique identifier for each Power BI operation. Right-clicking on this value and selecting Copy/Copy value to copy it:

…means that you can use this value to cross-reference with the log data you find in Workspace Monitoring, Log Analytics or Profiler. For example I have a workspace with Workspace Monitoring enabled and found the following OperationId in the Capacity Metrics App: a7a2d4d4-2a9b-4535-b65a-a0cc0389d821. The following KQL query run on Workspace Monitoring:

let OperationIdFromCapacityMetrics = "a7a2d4d4-2a9b-4535-b65a-a0cc0389d821";
SemanticModelLogs
| where OperationId == OperationIdFromCapacityMetrics
| where OperationName == "QueryEnd"
| project Timestamp, ItemName, EventText, DurationMs, OperationDetailName
| order by Timestamp

…returns the query text for the DAX query associated with this interactive operation:

There’s a lot of other information you can get by writing KQL queries from Workspace Monitoring (for some examples see here) such as the IDs of the visual and the report that generated the query. If you’re using Log Analytics or Profiler there is no OperationId column – it’s called XmlaRequestId in Log Analytics and RequestId in Profiler – but the same information is available there too.

This is very useful for admins trying to identify why a capacity is overloaded: it means that you can now see the details of expensive queries or refresh operations and understand why they are causing problems. Make sure you upgrade your Capacity Metrics App to the latest version and enable Workspace Monitoring on all important workspaces so you can do this!

[Thanks to Tim Bindas and Lukasz Pawlowski for letting me know about this]