Sending Alerts From Fabric Workspace Monitoring Using KQL Querysets And Activator

I’ve always been a big fan of using Log Analytics to analyse Power BI engine activity (I’ve blogged about it many times) and so, naturally, I was very happy when the public preview of Fabric Workspace Monitoring was announced – it gives you everything you get from Log Analytics and more, all from the comfort of your own Fabric workspace. Apart from my blog, there are lots of example KQL queries out there that you can use with Log Analytics and Workspace Monitoring, for example in this repo or Sandeep Pawar’s recent post. However, what is new with Workspace Monitoring is that if you store these queries in a KQL Queryset you can create alerts in Activator, so when something important happens you can be notified of it.

What type of things should you, as an admin, be notified of? I can think of lots, but to demonstrate what’s possible let’s take the example of DAX query errors. I created a simple semantic model and in it I created the following measure:

Measure Returning Error = ERROR("Demo Error!") 

It uses the DAX ERROR() function, so every time a report visual contains this measure it will return a custom error:

This makes it very easy to generate errors for testing; in the real world the kind of errors you would want to look out for are ones to do with broken DAX measures (maybe referring to other measures or columns that no longer exist) or ones where the query memory limit has been exceeded.

I published this semantic model to a workspace with Workspace Monitoring enabled and then created a report and deliberately created visuals that generated this error.

I then created a KQL Queryset and used the following KQL query to get all the Error events in the last hour for this exact error:

SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationName == "Error"
//filter for exactly the custom error number generated by my DAX
| where StatusCode == "-1053163166"
| extend app = tostring(parse_json(ApplicationContext))
| project
    Timestamp,
    EventText,
    ExecutingUser,
    modelId = extract_json("$.DatasetId", app),
    reportId = extract_json("$.Sources[0].ReportId", app),
    visualId = extract_json("$.Sources[0].VisualId", app),
    consumptionMethod = extract_json("$.Sources[0].HostProperties.ConsumptionMethod", app)

A few things to note about this query:

  • The filter on StatusCode allows me to only return the errors generated by the Error function – different errors will have different error numbers but there is no single place where these error numbers are documented, unfortunately.
  • The last half of the query parses the ApplicationContext column to get the IDs of the semantic model, report and visual that generated the error (something I blogged about here) and where the user was, for example the Power BI web application, when the error occurred (something I blogged about here).
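For reference, the JSONPaths used in the query imply that the ApplicationContext column contains JSON of roughly this shape (all the IDs and values below are illustrative placeholders, not real values):

```json
{
  "DatasetId": "00000000-0000-0000-0000-000000000001",
  "Sources": [
    {
      "ReportId": "00000000-0000-0000-0000-000000000002",
      "VisualId": "abcdef1234567890",
      "HostProperties": {
        "ConsumptionMethod": "Power BI Web App"
      }
    }
  ]
}
```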

Finally, I created an Activator alert from this KQL Queryset by clicking the “Set alert” button shown in the top right-hand corner of the screenshot above to send me an email every time this error occurred, checking (by running the KQL query) every hour:

I customised the contents of the email alert inside Activator:

And sure enough, after an hour, I started getting email alerts for each error:

The great thing about the combination of Workspace Monitoring, KQL and Activator is the flexibility you get. For example, generating one alert per error would probably result in too many alerts to keep track of; instead you could write your KQL query to aggregate the data and only get one alert per user and error type, or per error type and visual. As more sources of data and more functionality are added to Workspace Monitoring – see the public roadmap for details – being a Fabric admin will get easier and easier.
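Reusing the columns from the query earlier, an aggregated version might look something like this (a sketch rather than a tested query – the exact grouping is up to you):

```kusto
SemanticModelLogs
| where Timestamp > ago(1h)
| where OperationName == "Error"
| extend app = tostring(parse_json(ApplicationContext))
| extend visualId = tostring(extract_json("$.Sources[0].VisualId", app))
//one row per user/error number/visual combination instead of one row per error
| summarize ErrorCount = count(), LastSeen = max(Timestamp)
    by ExecutingUser, StatusCode, visualId
| order by ErrorCount desc
```

An Activator alert built on this query would then fire once per user/error/visual combination per evaluation, rather than once per individual error event.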

Refreshing A Power BI Semantic Model With Eventstreams, Pipelines And Activator

Following on from my last post where I showed how to send data from Power Automate to a Fabric Eventstream, in this post I’m going to show how to use it to solve one of my favourite problems: refreshing a Power BI semantic model that uses an Excel workbook stored in OneDrive as a source when that Excel workbook is modified.

Now before I go on I want to be clear that I know this is a ridiculously over-engineered and expensive (in terms of CUs) solution, and that you can do almost the same thing just using Power Automate or in several other different ways – see my colleague Mark Pryce-Maher’s recent videos on using Fabric Open Mirroring with Excel for example. I’m doing this to teach myself Fabric Eventstreams and Activator and see what’s possible with them. Please excuse any mistakes or bad practices.

To start off, I built a Power BI semantic model containing a single table, connected to an Excel workbook stored in OneDrive for Business:

I then built a Fabric Pipeline to refresh the semantic model, using the newly-enhanced Semantic Model Refresh activity:

Next, I created a new Fabric Eventstream and built a slightly more sophisticated version of the Power Automate flow from my previous post: it ran every 30 seconds, checked to see if the Excel file had been modified and, if it had, sent an event to the Eventstream:

The Compose action in this flow generates some JSON that is sent to the Eventstream and contains the name of the file that has been altered and a dummy value called EventCounter that always contains 1:
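Based on the field names used later in the Activator (FileName and EventCounter), the payload produced by the Compose action looks something like this – the file name here is just an illustrative placeholder:

```json
{
  "FileName": "Sales.xlsx",
  "EventCounter": 1
}
```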

I then connected the Eventstream to a new Activator:

In the Activator, I created a new object called ExcelWorkbook. I used the FileName field as the unique identifier and added the EventCounter field as a property:

Finally I created a new rule on the EventCounter property that aggregated the events over 1 minute windows and then fired when the total number of events in each window changed to the value 0 and stayed at 0 for 1 minute:

This is where the real value of Activator’s rules comes in. As I said, it’s easy using other approaches to refresh a semantic model when an Excel file changes. The problem with doing this, however, is that Excel files in OneDrive for Business autosave on a regular basis, so if you make several changes to the file over the space of several minutes, you’ll end up trying to refresh the semantic model all the time. This can lead to excessive CU usage on your capacity, because you’re refreshing too often, as well as to situations where one refresh fails because another is already in progress. What the rule above does is only trigger a refresh when changes to the Excel workbook have been detected (so the Total is greater than 0), then there has been a minute where no changes have been detected (so the Total changes to 0), and then there has been a further minute where no changes have been detected (so the Total stays at 0). Since Activator rules can trigger Fabric items, I hooked my rule up to the Pipeline shown above to run the refresh:
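Activator evaluates this rule itself, but the same windowing logic can be sketched in KQL – here the ExcelChangeEvents table name is hypothetical, standing in for the events arriving from the Eventstream:

```kusto
//hypothetical table holding the events sent by the Power Automate flow
ExcelChangeEvents
//bucket events into 1-minute windows; default = 0 fills in the quiet windows
| make-series Total = sum(EventCounter) default = 0 on Timestamp step 1m
| mv-expand Timestamp to typeof(datetime), Total to typeof(long)
| serialize
| extend Prev = prev(Total, 1), PrevPrev = prev(Total, 2)
//fire when a window with changes (> 0) is followed by two quiet windows
| where PrevPrev > 0 and Prev == 0 and Total == 0
| project FireAt = Timestamp
```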

This screenshot of the Definition tab of the rule shows a pattern of events where there were four one-minute windows where the Excel file was edited and the rule fired three times:

The problem with this rule is that if the Excel file is edited constantly for a long period, so that there is no period of inactivity, the semantic model would not get refreshed until that period of activity is over, which might be too long to wait. I guess the solution would be another rule that detects periods of, say, 10 minutes when the file has been edited constantly and refreshes the model even if there is no period of inactivity.

All in all I think this shows how the combination of Power BI/Power Platform and new features in Fabric can be used to solve old problems in new, interesting ways. I’m looking forward to learning more about Fabric Real-Time Intelligence and Activator!

[Thanks to James Hutton from the Activator team for his help with this post]