Sending Alerts From Fabric Workspace Monitoring Using KQL Querysets And Activator

I’ve always been a big fan of using Log Analytics to analyse Power BI engine activity (I’ve blogged about it many times), so naturally I was very happy when the public preview of Fabric Workspace Monitoring was announced: it gives you everything you get from Log Analytics and more, all from the comfort of your own Fabric workspace. Apart from my blog there are lots of example KQL queries out there that you can use with both Log Analytics and Workspace Monitoring, for example in this repo or in Sandeep Pawar’s recent post. What is new with Workspace Monitoring, however, is that if you store these queries in a KQL Queryset you can create alerts on them in Activator, so when something important happens you can be notified of it.

What type of things should you, as an admin, be notified of? I can think of lots, but to demonstrate what’s possible let’s take the example of DAX query errors. I created a simple semantic model and in it I created the following measure:

Measure Returning Error = ERROR("Demo Error!") 

It uses the DAX ERROR() function, so every time a report visual uses this measure it will return a custom error:

This makes it very easy to generate errors for testing; in the real world the kind of errors you would want to look out for are ones to do with broken DAX measures (maybe referring to other measures or columns that no longer exist) or ones where the query memory limit has been exceeded.

I published this semantic model to a workspace with Workspace Monitoring enabled, then built a report with visuals that deliberately generated this error.

I then created a KQL Queryset and used the following KQL query to get all the Error events in the last hour for this exact error:

SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationName == "Error"
//filter for exactly the custom error number generated by my DAX
| where StatusCode == "-1053163166"
| extend app = tostring(parse_json(ApplicationContext))
| project
    Timestamp,
    EventText,
    ExecutingUser,
    modelId = extract_json("$.DatasetId", app),
    reportId = extract_json("$.Sources[0].ReportId", app),
    visualId = extract_json("$.Sources[0].VisualId", app),
    consumptionMethod = extract_json("$.Sources[0].HostProperties.ConsumptionMethod", app)

A few things to note about this query:

  • The filter on StatusCode allows me to only return the errors generated by the Error function – different errors will have different error numbers but there is no single place where these error numbers are documented, unfortunately.
  • The second half of the query parses the ApplicationContext column to get the IDs of the semantic model, report and visual that generated the error (something I blogged about here), as well as where the user was when the error occurred, for example the Power BI web application (something I blogged about here).
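To illustrate how that parsing works, here’s a sketch you can run on its own in a KQL Queryset. The payload below is made up purely for illustration (real ApplicationContext values contain GUIDs and may have a slightly different shape), but it shows how extract_json pulls individual values out of the JSON using JSONPath expressions:

```
// Illustrative only: a made-up ApplicationContext payload containing
// the fields the main query extracts
print app = '{"DatasetId":"aaaa-1111","Sources":[{"ReportId":"bbbb-2222","VisualId":"cccc-3333","HostProperties":{"ConsumptionMethod":"Power BI Web App"}}]}'
| project
    modelId = extract_json("$.DatasetId", app),
    reportId = extract_json("$.Sources[0].ReportId", app),
    consumptionMethod = extract_json("$.Sources[0].HostProperties.ConsumptionMethod", app)
```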

Finally, I created an Activator alert from this KQL Queryset by clicking the “Set alert” button shown in the top right-hand corner of the screenshot above, so that I would get an email every time this error occurred; Activator checks for new errors by running the KQL query every hour:

I customised the contents of the email alert inside Activator:

And sure enough, after an hour, I started getting email alerts for each error:

The great thing about the combination of Workspace Monitoring, KQL and Activator is the flexibility you get. For example, generating one alert per error would probably result in too many alerts to keep track of; instead you could write your KQL query to aggregate the data and only get one alert per user and error type, or per error type and visual. As more and more sources of data and more functionality are added to Workspace Monitoring (see the public roadmap for details), being a Fabric admin will get easier and easier.
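As a sketch of that aggregation idea, the query below returns one row (and therefore triggers one alert) per user, error type and visual in the last hour; the grouping columns are just one possible choice and can be swapped for whatever makes sense in your environment:

```
// Sketch: aggregate errors so Activator raises one alert per
// user, error type and visual rather than one per error event
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationName == "Error"
| extend app = tostring(parse_json(ApplicationContext))
| summarize
    ErrorCount = count(),
    LastSeen = max(Timestamp)
    by ExecutingUser, StatusCode,
       visualId = tostring(extract_json("$.Sources[0].VisualId", app))
```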

Comments:

  • Good question: it’s not billable right now, but it can be fairly large. Even for the simple tests I’m doing I saw it use about 2% of a P1. I believe some optimisations are planned though.

  • I have been looking at the logging and whilst I can see dashboards against a semantic model I am failing to see paginated reports against the same semantic model. Is there something I am missing, or is what I am seeing correct?