Refreshing A Power BI Semantic Model With Eventstreams, Pipelines And Activator

Following on from my last post where I showed how to send data from Power Automate to a Fabric Eventstream, in this post I’m going to show how to use it to solve one of my favourite problems: refreshing a Power BI semantic model that uses an Excel workbook stored in OneDrive as a source, when that Excel workbook is modified.

Now before I go on I want to be clear that I know this is a ridiculously over-engineered and expensive (in terms of CUs) solution, and that you can do almost the same thing just using Power Automate or in several other different ways – see my colleague Mark Pryce-Maher’s recent videos on using Fabric Open Mirroring with Excel for example. I’m doing this to teach myself Fabric Eventstreams and Activator and see what’s possible with them. Please excuse any mistakes or bad practices.

To start off, I built a Power BI semantic model containing a single table, connected to an Excel workbook stored in OneDrive for Business:

I then built a Fabric Pipeline to refresh the semantic model, using the newly-enhanced Semantic Model Refresh activity:

Next, I created a new Fabric Eventstream and created a slightly more sophisticated version of the Power Automate flow in my previous post that ran every 30 seconds, checked to see if the Excel file was modified, and if it was, sent an event to the Eventstream:

The Compose action in this flow generates some JSON that is sent to the Eventstream and contains the name of the file that has been altered and a dummy value called EventCounter that always contains 1:
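
As an illustration of that payload, here’s a minimal Python sketch (the `make_change_event` helper is my own name, and the exact shape of the Compose output is an assumption based on the description above):

```python
import json

def make_change_event(file_name: str) -> str:
    """Build the JSON sent to the Eventstream for a modified file.

    The field names (FileName, EventCounter) follow the post; the
    exact payload shape is an assumption.
    """
    return json.dumps({
        "FileName": file_name,   # the file that was altered
        "EventCounter": 1,       # dummy value, always 1
    })
```

EventCounter presumably exists only so that the Activator rule shown below has a numeric property to aggregate.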

I then connected the Eventstream to a new Activator:

In the Activator, I created a new object called ExcelWorkbook. I used the FileName field as the unique identifier and added the EventCounter field as a property:

Finally I created a new rule on the EventCounter property that aggregated the events over 1 minute windows and then fired when the total number of events in each window changed to the value 0 and stayed at 0 for 1 minute:

This is where the real value of Activator’s rules comes in. As I said, it’s easy using other approaches to refresh a semantic model when an Excel file changes. The problem with doing this, however, is that Excel files in OneDrive for Business autosave on a regular basis and if you make several changes to the file over the space of several minutes, you’ll end up trying to refresh the semantic model all the time. This can lead to excessive CU usage on your capacity because you’re refreshing too often as well as situations where one refresh fails because another is already in progress. What the rule above does is only trigger a refresh when changes to the Excel workbook have been detected (so the Total is greater than 0), then there has been a minute where no changes have been detected (so the Total changes to 0) and there has been a further minute where no changes have been detected (so the Total stays at 0). Since Activator rules can trigger Fabric items, I hooked my rule up to the Pipeline shown above to run the refresh:
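
To make that debouncing behaviour concrete, here’s a hypothetical Python sketch of the rule’s logic: given the total number of change events in each one-minute window, it fires only after activity is followed by two consecutive empty windows (the total changes to 0 and then stays at 0). This is an illustration only; Activator evaluates the rule internally.

```python
def fire_points(window_totals: list[int]) -> list[int]:
    """Indices of the windows after which the rule would fire.

    Fires when the previous window saw activity (total > 0), the
    current window's total changed to 0, and the next window's
    total stayed at 0.
    """
    fires = []
    for i in range(1, len(window_totals) - 1):
        changed_to_zero = window_totals[i - 1] > 0 and window_totals[i] == 0
        stayed_at_zero = window_totals[i + 1] == 0
        if changed_to_zero and stayed_at_zero:
            fires.append(i + 1)
    return fires
```

For example, with edits detected in windows 0, 3 and 4 of `[3, 0, 0, 2, 2, 0, 0]`, the rule fires twice, after windows 2 and 6.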

This screenshot of the Definition tab of the rule shows a pattern of events where there were four one-minute windows where the Excel file was edited and the rule fired three times:

The problem with this rule is that if the Excel file is edited constantly for a long period, so that there is no period of inactivity, the semantic model would not get refreshed until that period of activity is over, which might be too long to wait. I guess the solution would be another rule that detects periods of, say, 10 minutes when the file has been edited constantly and refreshes the model even if there is no period of inactivity.

All in all I think this shows how the combination of Power BI/Power Platform and new features in Fabric can be used to solve old problems in new, interesting ways. I’m looking forward to learning more about Fabric Real-Time Intelligence and Activator!

[Thanks to James Hutton from the Activator team for his help with this post]

Send Data From Power Automate To Fabric Using Eventstreams

Fabric’s Real-Time Intelligence features are, for me, the most interesting things to learn about in the platform. I’m not going to pretend to be an expert in them – far from it – but they are quite easy to use and they open up some interesting possibilities for low-code/no-code people like me. The other day I was wondering if it was possible to send events and data from Power Automate to Fabric using Eventstreams and it turns out it is quite easy to do.

Here’s a really simple illustration. I created a new Eventstream in a Fabric workspace, then clicked the “Use custom endpoint” option and published the Eventstream so it was ready to send data to:

Then, in Power Automate, I built a super-simple flow with a “Manually trigger a flow” trigger and then the “Send event” action from the Event Hubs connector:

This works because you can send data from Power Automate to the Eventstream custom endpoint using the Event Hub protocol. To set up the connection to the Eventstream in Power Automate I entered a name for the connection, chose the “Access Key” authentication type and then copied the “Connection string-primary key” value from the Eventstream custom endpoint, shown in the first screenshot above (you need to click the eye icon next to it in order to copy it to the clipboard) and pasted it into the “Connection String” property in the Power Automate connector:

Then I configured the “Send event” action like so:

I had to manually paste the Event Hub Name, shown in the first screenshot above, as a custom value because I got an error when I tried to use Power Automate’s dropdown box to find the name. I put the Timestamp from the “Manually trigger a flow” trigger into the Content property.

After saving the flow I was then able to trigger it from my browser or the Power Automate mobile app, and every time the flow ran it sent the time of the run to my Eventstream:

Very basic but it proves the point. What could you use this for? Power Automate has a huge number of connectors for Microsoft and third party services: for example you can trigger a flow when a file stored in OneDrive for Business is modified or when new content is published to an RSS feed. Sending these events and their related data on to a Fabric Eventstream not only means you can capture them for analysis in OneLake or an EventHouse, but also lets you send the events on to Fabric Activator, which can apply sophisticated rules so that when something happens and specific conditions are met you can be alerted in Teams or via email, run Fabric items such as notebooks, or even trigger Power Automate flows. I’ll explore some of these scenarios in more detail in future blog posts.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 6: Cancelling Dataset Refreshes

The thing that got me excited about the Power BI Enhanced Refresh API, and which inspired me to start this series of posts, was the fact that for the first time it gives you a way of cancelling Power BI dataset refreshes. In this last post in the series I’ll show you how you can add an Action to your Power Automate custom connector to cancel a refresh and how you can use it in a Power Automate Flow.

Adding a new Action is straightforward and very similar to what I’ve shown in previous posts in this series. As before, click on the New action button on the Definition stage of the custom connector wizard, fill in the General section with appropriate names:

…then use the Import from sample option to populate the Request section using the DELETE verb and the following URL for the Cancel Refresh In Group API endpoint:

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes/{refreshId}

The refreshId, which uniquely identifies the refresh operation you wish to cancel, can be obtained from the Action I described in part 4 of this series that can be used to kick off a refresh.
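
In code terms the cancellation is simply a DELETE request against that URL. A sketch in Python (the function name is mine; authenticating and issuing the request with an HTTP client is left out):

```python
def cancel_refresh_url(group_id: str, dataset_id: str, refresh_id: str) -> str:
    """Build the Cancel Refresh In Group endpoint URL (called with DELETE)."""
    return (
        "https://api.powerbi.com/v1.0/myorg"
        f"/groups/{group_id}/datasets/{dataset_id}/refreshes/{refresh_id}"
    )

# e.g. requests.delete(cancel_refresh_url(g, d, r),
#                      headers={"Authorization": f"Bearer {token}"})
```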

Using this new Action in a Flow is more interesting. Here’s an example Flow that kicks off a dataset refresh but cancels it if it goes on for more than 120 seconds; it’s similar to, but more complex than, the Flow I described in this post that checks the status of a refresh. At the top level it looks like this:

The first two actions kick off a full refresh of a dataset:

The next two Actions initialise two variables:

  • ElapsedRefreshDurationSeconds is an integer used to hold the amount of time in seconds that the refresh has been running for each time it is checked
  • StopCheckingRefreshStatus is a boolean variable used to determine whether the upcoming Do until loop should be terminated or not

The contents of the Do until loop look like this:

After waiting 30 seconds the status of the refresh is checked. If the status is “Unknown” then the refresh is ongoing and the next step is to check how long it has been running for; otherwise the refresh has completed and the StopCheckingRefreshStatus variable can be set to true so the Do until loop can be terminated. Drilling into the “Is refresh ongoing” conditional Action:

If the refresh is still in progress then the current elapsed duration in seconds of the refresh is calculated using the following expression (this blog post showed me how to calculate durations in Power Automate):

div(sub(ticks(utcNow()), ticks(outputs('Get_status_of_one_dataset_refresh')?['body/startTime'])),10000000)
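
For anyone who finds that expression hard to read, here is the same calculation in Python: `ticks()` returns the number of 100-nanosecond intervals since 0001-01-01, so dividing the difference by 10,000,000 gives whole seconds. The function names mirror the Power Automate expression; this is an illustration, not part of the flow.

```python
from datetime import datetime, timezone

_EPOCH = datetime(1, 1, 1, tzinfo=timezone.utc)

def ticks(dt: datetime) -> int:
    """100-nanosecond intervals since 0001-01-01, like Power Automate's ticks()."""
    td = dt - _EPOCH
    return ((td.days * 86_400 + td.seconds) * 1_000_000 + td.microseconds) * 10

def elapsed_seconds(start: datetime, now: datetime) -> int:
    # div(sub(ticks(utcNow()), ticks(startTime)), 10000000)
    return (ticks(now) - ticks(start)) // 10_000_000
```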

If that elapsed duration is more than 120 seconds then the refresh should be cancelled; otherwise no action is taken. Drilling into the Yes branch of the conditional Action:

The refresh is cancelled by calling the Action created at the beginning of this post; after the cancellation the StopCheckingRefreshStatus variable is set to true. Here are details of the Actions from the Yes branch that do this:

That’s it for this post and this series. Since I started the series over a month ago a few other articles on similar subjects have been published on LinkedIn that you might want to check out, by my colleagues Romain Casteres and Rui Romano; as more enterprise BI solutions are built on Power BI, I know there will be a lot more people implementing complex refresh scenarios using Power Automate/Logic Apps and the Enhanced Refresh API.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 5: Getting A Dataset’s Refresh History

In the last post in this series I showed how to get the status of an individual Power BI dataset refresh. In this post I’ll show how to get the history of a dataset’s refreshes and save it to a file in OneDrive for Business – useful if you want to do some analysis on refresh performance or the causes of failures.

This is one of the most straightforward things to do with the Power BI Enhanced Refresh API, especially once you’ve already got a custom connector created in the way I’ve shown in the previous posts in this series. You just need to add a new Action to your custom connector, fill in the General section:

…then in the Request section use Import from sample with the GET verb on the following URL (for Get Refresh History In Group):

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes?$top={$top}

The only thing to do after this is edit the optional $top query parameter (which allows you to limit the number of refreshes whose history you return) so it takes an integer rather than a string:

Update the custom connector and you can now use it in a simple Flow to save the JSON returned by the call to the API into a JSON file:

You can then connect to this data very easily via Power Query, for example:

Rather than trigger a Flow like this manually, in the real world you’d want to run it on a schedule, perhaps every day. You’d then end up with a folder full of JSON files you could analyse and as much history stored as you wanted. It should be possible to load and dedupe this data (multiple JSON files will contain overlapping refresh history) using Power BI dataflows and/or datamarts but that’s something I’ll leave for another blog post…
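
As a sketch of that dedupe step (the function names are mine, and I’m assuming each saved file contains the raw API response, whose value array holds one object per refresh with a requestId field):

```python
import json
from pathlib import Path

def dedupe_refreshes(responses) -> list[dict]:
    """Merge several refresh-history responses, keeping only the
    first occurrence of each requestId."""
    seen, history = set(), []
    for response in responses:
        for refresh in response["value"]:
            if refresh["requestId"] not in seen:
                seen.add(refresh["requestId"])
                history.append(refresh)
    return history

def load_refresh_history(folder: str) -> list[dict]:
    """Load every JSON file the flow has saved and merge them."""
    files = sorted(Path(folder).glob("*.json"))
    return dedupe_refreshes(json.loads(f.read_text()) for f in files)
```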

Calling The Power BI Enhanced Refresh API From Power Automate, Part 4: Getting The Status Of a Refresh

So far in this series (see part 1, part 2 and part 3) I’ve looked at how you can create a Power Automate custom connector that uses the Power BI Enhanced Refresh API to kick off a dataset refresh. That’s only half the story though: once the refresh has been started you need to know if it has finished and, if so, whether it finished successfully or not. In this post I’ll show how to do this.

When you start a refresh using the Enhanced Refresh API Power BI returns a unique identifier for that refresh operation and you will need to modify your Power Automate custom connector to make it easy to capture and use that identifier. You can do this on the Definition stage of the custom connector wizard for an Action that kicks off a refresh (ie any of the Actions I built in the previous posts in this series) by going to the Response section, clicking on the Add default response button and pasting the following sample response (from here in the docs) into the Headers box and clicking Import:

x-ms-request-id: 87f31ef7-1e3a-4006-9b0b-191693e79e9e
Location: https://api.powerbi.com/v1.0/myorg/groups/f089354e-8366-4e18-aea3-4cb4a3a50b48/datasets/cfafbeb1-8037-4d0c-896e-a46fb27ff229/refreshes/87f31ef7-1e3a-4006-9b0b-191693e79e9e

It’s the x-ms-request-id custom header that will contain the unique identifier for the refresh.
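
Extracting that identifier from the response headers could look like this in Python (a hypothetical helper; in Power Automate the header is exposed directly as dynamic content):

```python
def get_refresh_id(response_headers: dict) -> str:
    """Return the refresh operation's unique identifier.

    Header names are matched case-insensitively, as HTTP clients
    normally treat them.
    """
    headers = {name.lower(): value for name, value in response_headers.items()}
    return headers["x-ms-request-id"]
```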

The next step is to create a new Action to check the status of the refresh using the Get Refresh Execution Details In Group API endpoint. To do this, click on the New action button on the Definition stage of the custom connector wizard and fill in the details in the General section:

Then fill in the Request section. Using Import from sample, select the verb GET, enter the following URL:

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes/{refreshId}

Then in the Response section click Add default response and paste in the (rather long) sample JSON response from the docs found here. The response should look like this:

You can now use this new Action in a Flow. Here’s a simple example:

The first two Actions here, “Manually trigger a flow” and “Incremental Refresh”, kick off a refresh in the way I’ve shown in the previous posts in this series. The “Do until” Action is where the interesting stuff happens:

What this does is:

  • First of all, the Delay Action waits for 30 seconds. There’s no point checking the status of a refresh immediately after it has been started, and in this case 30 seconds is a reasonable amount of time to wait. Depending on how long your refresh takes you may want to use a longer delay.
  • Then call the new Action created above to check the status of the refresh that has just been started, using the x-ms-request-id value returned by the Incremental Refresh Action.
  • If the status (returned by the Get status of one dataset refresh Action) is not Unknown then the refresh has completed and the Do until loop can be terminated. If the status is Unknown then the refresh has either not started or is still in progress so the loop should be run again, ie it will wait for another 30 seconds and then check the status again.

Finally, once the refresh has completed the Condition Action sends an email telling me the outcome: if the status is Completed then the refresh has succeeded; if it is not then the status tells you at a high-level what went wrong (you can look at the extendedStatus and the contents of the objects JSON array for more details).
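
The whole polling pattern boils down to something like the following Python sketch (the function and parameter names are mine; the real logic lives in the Do until loop shown above):

```python
import time

def wait_for_refresh(get_status, delay_seconds: int = 30, max_checks: int = 40) -> str:
    """Poll until the refresh leaves the 'Unknown' (in progress) state.

    get_status is any callable returning the current status string,
    e.g. 'Unknown', 'Completed', 'Failed' or 'Cancelled'.
    """
    for _ in range(max_checks):
        time.sleep(delay_seconds)  # no point checking immediately
        status = get_status()
        if status != "Unknown":
            return status
    raise TimeoutError("refresh still in progress after max_checks polls")
```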

Calling The Power BI Enhanced Refresh API From Power Automate, Part 3: Incremental Refresh Options

If you’ve read part 1 or part 2 of this series you’ll know how you can create a Power Automate custom connector to call the Power BI Enhanced Refresh API and get fine-grained control over your refreshes. In this post I will take a look at the two parameters in the Enhanced Refresh API for datasets that use incremental refresh: applyRefreshPolicy and effectiveDate. They are documented here but, as always, some worked examples are helpful to really understand how they work.

Before I carry on, I’ll assume you have created a custom connector for Power Automate that supports these two parameters. The easiest way to do this is to use the following JSON for the body when using “Import from sample” to create a new Action:

{
    "type": "Full",
    "commitMode": "transactional",
    "applyRefreshPolicy": true,
    "effectiveDate": "12/31/2013"
}

The Action should look like this in the Test stage of the custom connector wizard:

Let’s start with effectiveDate. By default, incremental refresh allows you to do things like “refresh only the last month of data” with the unstated assumption that “last month” means “last month relative to today’s date“. If the data in a table is loaded on a monthly, quarterly or yearly basis then this default behaviour of incremental refresh may lead to unexpected performance problems or data loss. The effectiveDate parameter allows you to specify a date to use instead of today’s date as the starting point for incremental refresh.
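
A toy illustration of why this matters: if a policy refreshes the last N years relative to some anchor date, the years targeted shift with that anchor. The names here are mine, and this is only a sketch of the idea, not how the engine actually computes partitions.

```python
from datetime import date

def yearly_refresh_years(effective_date: date, years_to_refresh: int) -> list[int]:
    """Years whose partitions a 'refresh the last N years' policy
    would target, relative to the anchor date."""
    first = effective_date.year - years_to_refresh + 1
    return list(range(first, effective_date.year + 1))
```

With today’s date as the anchor, none of the Adventure Works years (2001 to 2004) are targeted; anchored at 31st December 2004, the most recent of them are.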

For example, I have a dataset connected to the old Adventure Works DW SQL Server sample database. If I configure incremental refresh on the FactInternetSales table as follows:

…and refresh in the Power BI Service, I can see in SQL Server Management Studio that the following partitions have been created to hold the data in the table:

As you can see I have yearly partitions created relative to today’s date. Unfortunately my old copy of the Adventure Works DW database only has data for the years 2001 to 2004, which means that after the refresh no data is loaded into the dataset. However, if I refresh from Power Automate using my custom connector and I set effectiveDate to 31st December 2004 like so:

…then the following partitions get created in the FactInternetSales table and all my data is loaded into them:

Now let’s consider applyRefreshPolicy. If you have set up incremental refresh on a table then when you do a normal scheduled refresh of your dataset only some of the data is reloaded, saving you a lot of time. However there will be occasions where you want to override this behaviour and force a full refresh of a table even though incremental refresh has been configured: for example some values in your historic data may have been updated, so the only safe way to ensure the correct data is in Power BI is to reload the entire table. I’ve seen people schedule a full refresh of their data every week or every month just in case there were any updates that they weren’t informed of. Setting applyRefreshPolicy to false (the default is true) allows you to do this.

For example, the FactInternetSales table in the dataset described above is configured to refresh only the last two years of data after the initial full refresh, which means that with an effectiveDate of 31st December 2004 only the 2004 and 2003 partitions are refreshed. However, if you set applyRefreshPolicy to false (in Power Automate false for a boolean parameter will appear as a “No”), like so:

…then all partitions in the FactInternetSales table will get refreshed.

It’s important to mention that the applyRefreshPolicy parameter only works with certain settings for the type parameter, as described here, and certain settings for the commitMode parameter, as described here.

Of course you can use these two parameters in TMSL refresh commands without using the Enhanced Refresh API or Power Automate, but I think they are particularly useful when used from a Power Automate custom connector because of the extra refresh scheduling flexibility you get with Power Automate: you may want to run a normal refresh every day but a full refresh with applyRefreshPolicy set to false once a week, for example, or you may want to only refresh one fact table out of many every quarter and set effectiveDate when you do that.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 2: Refreshing Specific Tables And Partitions

In part 1 of this series I showed how you could create a very simple custom connector for Power Automate that allows you to call the Power BI Enhanced Refresh API. However, I didn’t show what I think is the main reason you’d want to build a custom connector: the ability to refresh specific tables or partitions in a dataset rather than the entire dataset. The main reason you’d want to do this is to speed up refresh performance, and there are two reasons why it can do so:

  1. Some tables in your dataset may not need refreshing every time because the data in them hasn’t changed, so not refreshing them saves you time and CPU.
  2. Refreshing the partitions in a table and increasing the maxParallelism property can also significantly speed up refresh performance, as I showed here.

This is a very common requirement and there are a lot of blog posts and videos out there on the subject but I think using a Power Automate custom connector is probably the easiest way of doing this, although it only works for datasets in a Premium or PPU capacity.

To implement an Action that does this in an existing connector, click on the New Action button and follow the instructions from my last post. Fill in the fields in the General box:

Then click on the Import from sample button, select the POST verb, enter the URL

	https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes

…leave the Headers box blank and then, in the Body box, enter the sample Body from here:

{
    "type": "Full",
    "commitMode": "transactional",
    "maxParallelism": 2,
    "retryCount": 2,
    "objects": [
        {
            "table": "DimCustomer",
            "partition": "DimCustomer"
        },
        {
            "table": "DimDate"
        }
    ]
}

[Note that you can’t have two Actions with the same URL in the same custom connector]

There’s some extra work to do here though. Click on the dropdown on the body parameter in the Request box and click Edit:

Next, click Edit under “table” in the Parameter section:

…and edit the title and description to reflect the fact that this parameter will contain the list of tables and partitions you want to be refreshed:

Once you’ve saved the connector you can test it – but one thing that caught me out is that the Test step in the custom connector designer doesn’t actually handle the tables and partitions parameter properly (something mentioned in passing here) so you’ll want to test this in a real flow. Here’s an example that refreshes the DimDate table and two partitions (FactInternetSales2001 and FactInternetSales2003) of the FactInternetSales table:

Notice that you can add as many tables or partitions to be refreshed as you want by clicking the Add new item button; if you want to refresh a table just enter the table name and leave the partition box empty, but if you want to refresh a partition you need to fill in both the table name and the partition name.
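
The request body the flow ends up sending can be sketched like this (the `refresh_body` helper is my own; the field names come from the sample body above):

```python
def refresh_body(items, refresh_type="Full", commit_mode="transactional",
                 max_parallelism=2, retry_count=2) -> dict:
    """Build an Enhanced Refresh request body for specific tables/partitions.

    items is a list of (table, partition) pairs; pass None as the
    partition to refresh a whole table.
    """
    objects = []
    for table, partition in items:
        entry = {"table": table}
        if partition is not None:
            entry["partition"] = partition
        objects.append(entry)
    return {
        "type": refresh_type,
        "commitMode": commit_mode,
        "maxParallelism": max_parallelism,
        "retryCount": retry_count,
        "objects": objects,
    }
```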

And that’s it, at least for now. Next time I’ll take a look at some of the options for datasets that use incremental refresh.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 1: Creating A Basic Custom Connector

I love the new Power BI Enhanced Refresh API: it allows you to do things like refresh individual tables in your dataset, override incremental refresh policies, control the amount of parallelism, cancel refreshes and a lot more, while being easier to use than the XMLA Endpoint. However, like the XMLA Endpoint, one problem remains: how can you schedule a dataset refresh using it? One option is to create a custom connector for Power Automate (similar to what I described here for the Export API, before the Power BI export actions for Power Automate had been released): this not only allows you to schedule more complex refreshes but also gives you more flexibility over scheduling and do things like send emails if refreshes fail.

There’s no point going into the details of creating a custom connector for a Power BI API endpoint because it’s been done before, most notably by Konstantinos Ioannou who has a very detailed walkthrough here which I strongly suggest you read. There’s only one thing that has changed since he wrote that post: the Power BI App Registration Tool is now here. You also need to give the app you create the “Read and write all datasets” permission:

When you get to the Definition stage of creating the connector there are some choices to make. The Enhanced Refresh API has a lot of functionality and it could be very complicated to build a custom connector that supports everything – especially if you or your users don’t need all that functionality, or if a lot of options could confuse your users. As a result it could be better to only expose a subset of the functionality – and that’s what I’ll do in this first post.

Let’s take a few basic options to start off with: the refresh type (ie do you want to do a full refresh, clear the data out of the dataset etc?), the commit mode (do you want everything you’re refreshing to be refreshed in single transaction?), the maximum amount of parallelism and the number of retries if refresh fails. Click on the New Action button and fill in the details in the General section:

Then, in the Request section, click on Import from sample and select the verb POST, enter the following URL:
https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes

…leave the Headers box empty and then enter the following in the Body box:

{
    "type": "Full",
    "commitMode": "transactional",
    "maxParallelism": 2,
    "retryCount": 2
}

This will create an Action that allows you to set the type, commitMode, maxParallelism and retryCount options. The Request section of the Definition step should look like this:

You can finish creating the connector as per Konstantinos’s instructions after that.

Finally, to test this connector in a flow, you can build a new instant cloud flow that looks like this:

[You can find the workspace ID (the groupId for the API) and the dataset ID by going to the dataset’s Settings page in the Power BI Service and getting them from the URL as detailed here]

You will also need to turn off the Asynchronous Pattern option in the Settings dialog of the action:

You should now have a flow which can kick off a dataset refresh with a few options. This is only the beginning though: there are more options that can be added, and this flow only starts a refresh – it doesn’t tell you whether the refresh succeeded or not, or allow you to cancel a refresh, or anything else fun like that. We’ll investigate all of these things and more in future posts in this series.

Refreshing Excel Power Query Queries With VBA And Power Automate For Desktop

At the end of last year two new, fairly minor, improvements were made to Excel VBA’s support for Power Query: you can now refresh and delete individual Power Query queries. These methods are available now for everyone using the Insiders build of Excel. Not particularly interesting on its own (I wrote a post a few years ago about Excel VBA and Power Query if you’re interested) but it got me thinking about this subject again. Also, at the end of last year, I upgraded to Windows 11 which has Power Automate for desktop built in. Power Automate for desktop makes it super easy to automate everyday tasks and it has great support for Excel – including the ability to run Excel VBA macros. So I wondered: can you use Power Automate for desktop to automatically refresh your Excel Power Query queries? Yes, you can!

Here’s a simple example. I created a Power Query query called GetTheDateAndTime that returns the current date and time in a table with one row and column. Here’s the M code for the query:

#table(type table [RunDate=datetime], {{DateTime.FixedLocalNow()}})

Next, I created a VBA macro called RunPQQuery to refresh this Power Query query using the new refresh method I mentioned earlier:

Sub RunPQQuery()
    ActiveWorkbook.Queries("GetTheDateAndTime").Refresh
End Sub

I then saved the Excel workbook as a .xlsm file.

Next I opened Power Automate for desktop and created a new desktop flow following the instructions here to open Excel, run the macro and close Excel again. I realised that if I closed Excel immediately after running the macro, it would be closed before the Power Query query had finished, so I added a delay of ten seconds after running the macro to give it time to finish. There are probably more sophisticated ways to solve this problem: for example you could read the value of a cell in the table returned by the query that you knew would change, then after running the query loop until the value you’ve read has changed. Here’s my desktop flow:
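
That poll-until-changed idea, sketched generically in Python (names are mine; in the desktop flow the equivalent would be a loop that re-reads the cell):

```python
import time

def wait_until_changed(read_value, initial, timeout_seconds=60, poll_seconds=1):
    """Poll read_value() until it returns something different from
    initial; a more robust alternative to a fixed ten-second delay."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        current = read_value()
        if current != initial:
            return current
        time.sleep(poll_seconds)
    raise TimeoutError("value did not change before the timeout")
```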

Finally I created a cloud flow to run this desktop flow:

And that’s it! A very simple example but very easy to implement.

Generating Excel Reports Using Power BI, Cube Formulas, Office Scripts And Power Automate

Now that Excel reports connected to Power BI datasets work in Excel Online it opens up a lot of new possibilities for doing cool things with Office Scripts and Power Automate. Here’s a simple example showing how all these technologies can be put together to automatically generate batches of Excel reports from a template.

Step 1: Create a template report in Excel using cube formulas

In Excel on the desktop I created a new Excel file, created a connection to a Power BI dataset and then built a simple report using Excel cube formulas:

Here are the Excel formulas for the table on the left:

This report uses data from the UK’s Land Registry (one of my favourite data sources) and shows the average price paid and number of sales broken down by property type for a single county (specified in cell B2 of this report – in the screenshot above data for Bournemouth is shown). Here’s the formula in B2:

=CUBEMEMBER("Price Paid", "[Property Transactions].[County].[All].[BOURNEMOUTH]")

This formula is referenced by all the CUBEVALUE formulas in the body of the table so they are all sliced by the selected county.

After doing this, I saved the file to OneDrive for Business.

Step 2: Create an Office Script to change the county shown in cell B2

The aim of this exercise is to generate one copy of the report above for each county in a list of counties, so the next thing I did was create a parameterised Office Script that takes the name of a county and changes the county name used in the formula in cell B2. To do this I opened the Excel report in Excel Online, started the script recorder, changed the formula in B2 and then stopped recording. I then edited this script to take a parameter for the county name (called county) to use in the formula. Here’s the script:

function main(workbook: ExcelScript.Workbook, county: string) {
	let selectedSheet = workbook.getActiveWorksheet();
	// Set range B2 on selectedSheet
	selectedSheet.getRange("B2").setFormulaLocal("=CUBEMEMBER(\"Price Paid\", \"[Property Transactions].[County].[All].[" + county + "]\")");
}

Step 3: Create a list of counties to pass to the script

Next, I created a second Excel workbook containing a table that contained the county names to pass to the script and saved this to OneDrive for Business too:

Step 4: Create Power Automate flow to call the script once for each county in the Excel table

Last of all, I created a Power Automate flow that reads the county names from the table in the previous step, runs the script for each county, creates a copy of the original Excel report after each script run and then saves it to a folder. Here’s the flow at a high level:

In more detail, here’s the setup for the ‘List rows present in a table’ action:

Here’s the ‘Run script’ action:

Here’s the expression used to get the current county name in the loop:

items('CountyLoop')?['Counties']

…and here’s the expression used to create the destination file path:

concat('/Office Scripts Reporting/Reports/',items('CountyLoop')?['Counties'],'.xlsx')

Running this flow results in three Excel workbooks being created, one for each county with the county name in the workbook name, stored in a folder like so:

Here’s the report in BATH AND NORTH EAST SOMERSET.xlsx:

Of course I could do other things at this point like email these workbooks to different people, but there’s no need to overcomplicate things – I hope you’ve got the idea.

A few last points to make:

  • Office Scripts don’t seem to work with PivotTables connected to Power BI datasets yet – I’m sure it’s just a matter of time before they do though
  • How is this different from using Power Automate to call the Power BI export API? A paginated report can be exported to Excel but this method gives you a lot more flexibility because it allows you to use a lot more Excel functionality, not just the functionality that paginated reports can use in their exports. It also gives you a report that is connected live back to a dataset using cube functions, not static data.
  • Generating large numbers of Excel reports like this is not something I like to encourage – why not view your report in the Power BI portal, especially now you can view live Excel reports connected to datasets there too? – but I know it’s something that customers ask for.
  • I haven’t done any performance testing but I suspect that this method may be faster than using the Power BI export API in Power Automate.