Calling The Power BI Enhanced Refresh API From Power Automate, Part 6: Cancelling Dataset Refreshes

The thing that got me excited about the Power BI Enhanced Refresh API, and which inspired me to start this series of posts, was the fact that for the first time it gives you a way of cancelling Power BI dataset refreshes. In this last post in the series I’ll show you how you can add an Action to your Power Automate custom connector to cancel a refresh and how you can use it in a Power Automate Flow.

Adding a new Action is straightforward and very similar to what I’ve shown in previous posts in this series. As before, click on the New action button on the Definition stage of the custom connector wizard, fill in the General section with appropriate names:

…then use the Import from sample option to populate the Request section using the DELETE verb and the following URL for the Cancel Refresh In Group API endpoint:

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes/{refreshId}

The refreshId, which uniquely identifies the refresh operation you wish to cancel, can be obtained from the Action I described in part 4 of this series that kicks off a refresh.

Using this new Action in a Flow is more interesting. Here’s an example Flow that kicks off a dataset refresh but cancels it if it goes on for more than 120 seconds; it’s similar to, but more complex than, the Flow I described in this post that checks the status of a refresh. At the top level it looks like this:

The first two actions kick off a full refresh of a dataset:

The next two Actions initialise two variables:

  • ElapsedRefreshDurationSeconds is an integer variable used to hold the amount of time, in seconds, that the refresh has been running each time its status is checked
  • StopCheckingRefreshStatus is a boolean variable used to determine whether the upcoming Do until loop should be terminated or not (an example of the loop’s exit condition is shown below the list)
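The loop itself runs until that variable becomes true. Expressed as an expression, the Do until exit condition would look something like this (a sketch only; you can also configure it with the basic condition editor):

@equals(variables('StopCheckingRefreshStatus'), true)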

The contents of the Do until loop look like this:

After waiting 30 seconds the status of the refresh is checked. If the status is “Unknown” then the refresh is ongoing and the next step is to check how long it has been running for; otherwise the refresh has completed and the StopCheckingRefreshStatus variable can be set to true so the Do until loop can be terminated. Drilling into the “Is refresh ongoing” conditional Action:

If the refresh is still in progress then the current elapsed duration in seconds of the refresh is calculated using the following expression (this blog post showed me how to calculate durations in Power Automate):

div(sub(ticks(utcNow()), ticks(outputs('Get_status_of_one_dataset_refresh')?['body/startTime'])),10000000)

If that elapsed duration is more than 120 seconds then the refresh should be cancelled; otherwise no action is taken. Drilling into the Yes branch of the conditional Action:

The refresh is cancelled by calling the Action created at the beginning of this post; after the cancellation the StopCheckingRefreshStatus variable is set to true. Here are details of the Actions from the Yes branch that do this:

That’s it for this post and this series. Since I started the series over a month ago a few other articles on similar subjects that you might want to check out have been published on LinkedIn by my colleagues Romain Casteres and Rui Romano; as more enterprise BI solutions are built on Power BI, I know there will be a lot more people implementing complex refresh scenarios using Power Automate/Logic Apps and the Enhanced Refresh API.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 5: Getting A Dataset’s Refresh History

In the last post in this series I showed how to get the status of an individual Power BI dataset refresh. In this post I’ll show how to get the history of a dataset’s refreshes and save it to a file in OneDrive for Business – useful if you want to do some analysis on refresh performance or the causes of failures.

This is one of the most straightforward things to do with the Power BI Enhanced Refresh API, especially once you’ve already got a custom connector created in the way I’ve shown in the previous posts in this series. You just need to add a new Action to your custom connector, fill in the General section:

…then in the Request section use Import from sample with the GET verb on the following URL (for Get Refresh History In Group):

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes?$top={$top}

The only thing to do after this is edit the optional $top query parameter (which allows you to limit the number of refresh history entries returned) so it takes an integer rather than a string:

Update the custom connector and you can now use it in a simple Flow to save the JSON returned by the call to the API into a JSON file:
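For reference, the JSON returned by the Get Refresh History In Group endpoint looks something like this (a trimmed, illustrative example based on the sample in the docs; the values shown here are made up):

{
    "value": [
        {
            "requestId": "11111111-2222-3333-4444-555555555555",
            "refreshType": "ViaApi",
            "startTime": "2022-06-09T14:34:29.563Z",
            "endTime": "2022-06-09T14:36:41.342Z",
            "status": "Completed"
        }
    ]
}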

You can then connect to this data very easily via Power Query, for example:

Rather than trigger a Flow like this manually, in the real world you’d want to run it on a schedule, perhaps every day. You’d then end up with a folder full of JSON files you could analyse, with as much history stored as you wanted. It should be possible to load and dedupe this data (multiple JSON files will contain overlapping refresh history) using Power BI dataflows and/or datamarts but that’s something I’ll leave for another blog post…

Calling The Power BI Enhanced Refresh API From Power Automate, Part 4: Getting The Status Of A Refresh

So far in this series (see part 1, part 2 and part 3) I’ve looked at how you can create a Power Automate custom connector that uses the Power BI Enhanced Refresh API to kick off a dataset refresh. That’s only half the story though: once the refresh has been started you need to know if it has finished and, if so, whether it finished successfully or not. In this post I’ll show how to do this.

When you start a refresh using the Enhanced Refresh API, Power BI returns a unique identifier for that refresh operation, and you will need to modify your Power Automate custom connector to make it easy to capture and use that identifier. You can do this on the Definition stage of the custom connector wizard for an Action that kicks off a refresh (ie any of the Actions I built in the previous posts in this series) by going to the Response section, clicking on the Add default response button, pasting the following sample response (from here in the docs) into the Headers box and clicking Import:

x-ms-request-id: 87f31ef7-1e3a-4006-9b0b-191693e79e9e
Location: https://api.powerbi.com/v1.0/myorg/groups/f089354e-8366-4e18-aea3-4cb4a3a50b48/datasets/cfafbeb1-8037-4d0c-896e-a46fb27ff229/refreshes/87f31ef7-1e3a-4006-9b0b-191693e79e9e

It’s the x-ms-request-id custom header that will contain the unique identifier for the refresh.
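Once the refresh Action has run in a Flow you can get at this header with an expression. For example, if the Action that kicks off the refresh is called “Incremental Refresh” (as in the Flow later in this post; Power Automate replaces spaces with underscores in expressions) then something like this should return the refresh identifier:

outputs('Incremental_Refresh')?['headers']?['x-ms-request-id']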

The next step is to create a new Action to check the status of the refresh using the Get Refresh Execution Details In Group API endpoint. To do this, click on the New action button on the Definition stage of the custom connector wizard and fill in the details in the General section:

Then fill in the Request section. Using Import from sample, select the GET verb and enter the following URL:

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes/{refreshId}

Then in the Response section click Add default response and paste in the (rather long) sample JSON response from the docs found here. The response should look like this:
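You don’t need to read the whole sample, but a heavily trimmed, illustrative version of that response (showing just the fields used later in this post; the values here are made up) looks something like this:

{
    "startTime": "2022-06-09T14:34:29.563Z",
    "endTime": "2022-06-09T14:36:41.342Z",
    "status": "Completed",
    "extendedStatus": "Completed",
    "objects": [
        {
            "table": "DimDate",
            "status": "Completed"
        }
    ]
}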

You can now use this new Action in a Flow. Here’s a simple example:

The first two Actions here, “Manually trigger a flow” and “Incremental Refresh”, kick off a refresh in the way I’ve shown in the previous posts in this series. The “Do until” Action is where the interesting stuff happens:

What this does is:

  • First of all, the Delay Action waits for 30 seconds. There’s no point checking the status of a refresh immediately after it has been started, and in this case 30 seconds is a reasonable amount of time to wait. Depending on how long your refresh takes you may want to use a longer delay.
  • Then the new Action created above is called to check the status of the refresh that has just been started, using the x-ms-request-id value returned by the Incremental Refresh Action.
  • If the status (returned by the Get status of one dataset refresh Action) is not Unknown then the refresh has completed and the Do until loop can be terminated. If the status is Unknown then the refresh has either not started or is still in progress so the loop should be run again, ie it will wait for another 30 seconds and then check the status again. An example of the loop’s exit condition is shown below.
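As a sketch, the Do until exit condition can be written as an expression along these lines (the Action name matches the “Get status of one dataset refresh” Action above, with spaces replaced by underscores as Power Automate does in expressions):

@not(equals(body('Get_status_of_one_dataset_refresh')?['status'], 'Unknown'))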

Finally, once the refresh has completed the Condition Action sends an email telling me the outcome: if the status is Completed then the refresh has succeeded; if it is not then the status tells you at a high level what went wrong (you can look at the extendedStatus and the contents of the objects JSON array for more details).

Calling The Power BI Enhanced Refresh API From Power Automate, Part 3: Incremental Refresh Options

If you’ve read part 1 or part 2 of this series you’ll know how you can create a Power Automate custom connector to call the Power BI Enhanced Refresh API and get fine-grained control over your refreshes. In this post I will take a look at the two parameters in the Enhanced Refresh API for datasets that use incremental refresh: applyRefreshPolicy and effectiveDate. They are documented here but, as always, some worked examples are helpful to really understand how they work.

Before I carry on, I’ll assume you have created a custom connector for Power Automate that supports these two parameters. The easiest way to do this is to use the following JSON for the body when using “Import from sample” to create a new Action:

{
    "type": "Full",
    "commitMode": "transactional",
    "applyRefreshPolicy": true,
    "effectiveDate": "12/31/2013"
}

The Action should look like this in the Test stage of the custom connector wizard:

Let’s start with effectiveDate. By default, incremental refresh allows you to do things like “refresh only the last month of data” with the unstated assumption that “last month” means “last month relative to today’s date”. If the data in a table is loaded on a monthly, quarterly or yearly basis then this default behaviour of incremental refresh may lead to unexpected performance problems or data loss. The effectiveDate parameter allows you to specify a date to use instead of today’s date as the starting point for incremental refresh.
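In terms of the request body shown above, that just means setting effectiveDate to whatever date you want Power BI to treat as “today”, something like this:

{
    "type": "Full",
    "commitMode": "transactional",
    "applyRefreshPolicy": true,
    "effectiveDate": "12/31/2004"
}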

For example, I have a dataset connected to the old Adventure Works DW SQL Server sample database. If I configure incremental refresh on the FactInternetSales table as follows:

…and refresh in the Power BI Service, I can see in SQL Server Management Studio that the following partitions have been created to hold the data in the table:

As you can see I have yearly partitions created relative to today’s date. Unfortunately my old copy of the Adventure Works DW database only has data for the years 2001 to 2004, which means that after the refresh no data is loaded into the dataset. However, if I refresh from Power Automate using my custom connector and I set effectiveDate to 31st December 2004 like so:

…then the following partitions get created in the FactInternetSales table and all my data is loaded into them:

Now let’s consider applyRefreshPolicy. If you have set up incremental refresh on a table then when you do a normal scheduled refresh of your dataset only some of the data is reloaded, saving you a lot of time. However, there will be occasions where you want to override this behaviour and force a full refresh of a table even though incremental refresh has been configured: for example, some values in your historic data may have been updated, so the only safe way to ensure the correct data is in Power BI is to reload the entire table. I’ve seen people schedule a full refresh of their data every week or every month just in case there were any updates that they weren’t informed of. Setting applyRefreshPolicy to false (the default is true) allows you to do this.
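In the request body this is just a matter of flipping one property, for example something like this (assuming you still want a full, transactional refresh):

{
    "type": "Full",
    "commitMode": "transactional",
    "applyRefreshPolicy": false,
    "effectiveDate": "12/31/2004"
}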

For example, the FactInternetSales table in the dataset described above is configured to refresh only the last two years of data after the initial full refresh, which means that with an effectiveDate of 31st December 2004 only the 2004 and 2003 partitions are refreshed. However, if you set applyRefreshPolicy to false (in Power Automate false for a boolean parameter will appear as a “No”), like so:

…then all partitions in the FactInternetSales table will get refreshed.

It’s important to mention that the applyRefreshPolicy parameter only works with certain settings for the type parameter, as described here, and certain settings for the commitMode parameter, as described here.

Of course you can use these two parameters in TMSL refresh commands without using the Enhanced Refresh API or Power Automate, but I think they are particularly useful when used from a Power Automate custom connector because of the extra refresh scheduling flexibility you get with Power Automate: you may want to run a normal refresh every day but a full refresh with applyRefreshPolicy set to false once a week, for example, or you may want to only refresh one fact table out of many every quarter and set effectiveDate when you do that.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 2: Refreshing Specific Tables And Partitions

In part 1 of this series I showed how you could create a very simple custom connector for Power Automate that allows you to call the Power BI Enhanced Refresh API. However, I didn’t show what I think is the main reason you’d want to build a custom connector: the ability to refresh specific tables or partitions in a dataset rather than the entire dataset. The main reason you’d want to do this is to speed up refresh performance, and there are two ways it can do so:

  1. Some tables in your dataset may not need refreshing every time because the data in them hasn’t changed, so not refreshing them saves you time and CPU.
  2. Refreshing the partitions in a table and increasing the maxParallelism property can also significantly speed up refresh performance, as I showed here.

This is a very common requirement and there are a lot of blog posts and videos out there on the subject, but I think using a Power Automate custom connector is probably the easiest way of doing this, although it only works for datasets in a Premium or PPU capacity.

To implement an Action that does this in an existing connector, click on the New Action button and follow the instructions from my last post. Fill in the fields in the General box:

Then click on the Import from sample button, select the POST verb and enter the URL

https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes

…leave the Headers box blank and then, in the Body box, enter the sample Body from here:

{
    "type": "Full",
    "commitMode": "transactional",
    "maxParallelism": 2,
    "retryCount": 2,
    "objects": [
        {
            "table": "DimCustomer",
            "partition": "DimCustomer"
        },
        {
            "table": "DimDate"
        }
    ]
}

[Note that you can’t have two Actions with the same URL in the same custom connector]

There’s some extra work to do here though. Click on the dropdown on the body parameter in the Request box and click Edit:

Next, click Edit under “table” in the Parameter section:

…and edit the title and description to reflect the fact that this parameter will contain the list of tables and partitions you want to be refreshed:

Once you’ve saved the connector you can test it – but one thing that caught me out is that the Test step in the custom connector designer doesn’t actually handle the tables and partitions parameter properly (something mentioned in passing here) so you’ll want to test this in a real flow. Here’s an example that refreshes the DimDate table and two partitions (FactInternetSales2001 and FactInternetSales2003) of the FactInternetSales table:

Notice that you can add as many tables or partitions to be refreshed as you want by clicking the Add new item button; if you want to refresh a table just enter the table name and leave the partition box empty, but if you want to refresh a partition you need to fill in both the table name and the partition name.
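Behind the scenes, the list you build in the Flow just becomes the objects array in the request body, so for the example above it would look something like this:

"objects": [
    {
        "table": "DimDate"
    },
    {
        "table": "FactInternetSales",
        "partition": "FactInternetSales2001"
    },
    {
        "table": "FactInternetSales",
        "partition": "FactInternetSales2003"
    }
]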

And that’s it, at least for now. Next time I’ll take a look at some of the options for datasets that use incremental refresh.

Calling The Power BI Enhanced Refresh API From Power Automate, Part 1: Creating A Basic Custom Connector

I love the new Power BI Enhanced Refresh API: it allows you to do things like refresh individual tables in your dataset, override incremental refresh policies, control the amount of parallelism, cancel refreshes and a lot more, while being easier to use than the XMLA Endpoint. However, like the XMLA Endpoint, one problem remains: how can you schedule a dataset refresh using it? One option is to create a custom connector for Power Automate (similar to what I described here for the Export API, before the Power BI export actions for Power Automate had been released): this not only allows you to schedule more complex refreshes but also gives you more flexibility over scheduling and lets you do things like send emails if refreshes fail.

There’s no point going into the details of creating a custom connector for a Power BI API endpoint because it’s been done before, most notably by Konstantinos Ioannou, who has a very detailed walkthrough here which I strongly suggest you read. There’s only one thing that has changed since he wrote that post: the Power BI App Registration Tool is now here. You also need to give the app you create the “Read and write all datasets” permission:

When you get to the Definition stage of creating the connector there are some choices to make. The Enhanced Refresh API has a lot of functionality and it could be very complicated to build a custom connector that supports everything – especially if you or your users don’t need all that functionality, or if a lot of options could confuse your users. As a result it could be better to only expose a subset of the functionality – and that’s what I’ll do in this first post.

Let’s take a few basic options to start off with: the refresh type (ie do you want to do a full refresh, clear the data out of the dataset etc?), the commit mode (do you want everything you’re refreshing to be refreshed in a single transaction?), the maximum amount of parallelism and the number of retries if refresh fails. Click on the New Action button and fill in the details in the General section:

Then, in the Request section, click on Import from sample, select the POST verb and enter the following URL:
https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes

…leave the Headers box empty and then enter the following in the Body box:

{
    "type": "Full",
    "commitMode": "transactional",
    "maxParallelism": 2,
    "retryCount": 2
}

This will create an Action that allows you to set the type, commitMode, maxParallelism and retryCount options. The Request section of the Definition step should look like this:

You can finish creating the connector as per Konstantinos’s instructions after that.

Finally, to test this connector in a flow, you can build a new instant cloud flow that looks like this:

[You can find the workspace ID (the groupId for the API) and the dataset ID by going to the dataset’s Settings page in the Power BI Service and getting them from the URL as detailed here]
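For illustration, the dataset Settings page URL contains both GUIDs in a format something like this (using the same placeholder names as the API URL above):

https://app.powerbi.com/groups/{groupId}/settings/datasets/{datasetId}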

You will also need to turn off the Asynchronous Pattern option in the Settings dialog of the action:

You should now have a flow which can kick off a dataset refresh with a few options. This is only the beginning though: there are more options that can be added, and this flow only starts a refresh – it doesn’t tell you whether the refresh succeeded or not, or allow you to cancel a refresh, or anything else fun like that. We’ll investigate all of these things and more in future posts in this series.

Refreshing Excel Power Query Queries With VBA And Power Automate For Desktop

At the end of last year two new, fairly minor, improvements were made to Excel VBA’s support for Power Query: you can now refresh and delete individual Power Query queries. These methods are available now for everyone using the Insiders build of Excel. Not particularly interesting on its own (I wrote a post a few years ago about Excel VBA and Power Query if you’re interested) but it got me thinking about this subject again. Also, at the end of last year, I upgraded to Windows 11 which has Power Automate for desktop built in. Power Automate for desktop makes it super easy to automate everyday tasks and it has great support for Excel – including the ability to run Excel VBA macros. So I wondered: can you use Power Automate for desktop to automatically refresh your Excel Power Query queries? Yes, you can!

Here’s a simple example. I created a Power Query query called GetTheDateAndTime that returns the current date and time in a table with one row and column. Here’s the M code for the query:

#table(type table [RunDate=datetime], {{DateTime.FixedLocalNow()}})

Next, I created a VBA macro called RunPQQuery to refresh this Power Query query using the new refresh method I mentioned earlier:

Sub RunPQQuery()
    ActiveWorkbook.Queries("GetTheDateAndTime").Refresh
End Sub

I then saved the Excel workbook as a .xlsm file.

Next I opened Power Automate for desktop and created a new desktop flow following the instructions here to open Excel, run the macro and close Excel again. I realised that if I closed Excel immediately after running the macro, the Power Query query would not have had time to finish, so I added a delay of ten seconds after running the macro. There are probably more sophisticated ways to solve this problem: for example you could read the value of a cell in the table returned by the query that you knew would change, then after running the query loop until the value you’ve read has changed.
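A very rough sketch of that idea in VBA might look something like this (it assumes the query loads to a table whose RunDate value is in cell A2 of Sheet1; adjust the references to match your workbook):

Sub RunPQQueryAndWait()
    Dim valueBeforeRefresh As Variant
    ' Remember the value currently shown in the table
    valueBeforeRefresh = Worksheets("Sheet1").Range("A2").Value
    ActiveWorkbook.Queries("GetTheDateAndTime").Refresh
    ' Yield until the refreshed value lands in the cell
    Do While Worksheets("Sheet1").Range("A2").Value = valueBeforeRefresh
        DoEvents
    Loop
End Sub

Here’s my desktop flow: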

Finally I created a cloud flow to run this desktop flow:

And that’s it! A very simple example but very easy to implement.

Generating Excel Reports Using Power BI, Cube Formulas, Office Scripts And Power Automate

Now that Excel reports connected to Power BI datasets work in Excel Online, a lot of new possibilities open up for doing cool things with Office Scripts and Power Automate. Here’s a simple example showing how all these technologies can be put together to automatically generate batches of Excel reports from a template.

Step 1: Create a template report in Excel using cube formulas

In Excel on the desktop I created a new Excel file, created a connection to a Power BI dataset and then built a simple report using Excel cube formulas:

Here are the Excel formulas for the table on the left:

This report uses data from the UK’s Land Registry (one of my favourite data sources) and shows the average price paid and number of sales broken down by property type for a single county (specified in cell B2 of this report – in the screenshot above data for Bournemouth is shown). Here’s the formula in B2:

=CUBEMEMBER("Price Paid", "[Property Transactions].[County].[All].[BOURNEMOUTH]")

This formula is referenced by all the CUBEVALUE formulas in the body of the table so they are all sliced by the selected county.
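For example, a CUBEVALUE formula in the body of the table will look something like this, with $B$2 passed in as one of the member expressions (the other cell references here are illustrative: they point at the CUBEMEMBER formulas for the row and column headers of the table):

=CUBEVALUE("Price Paid", $B$2, $A4, B$3)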

After doing this, I saved the file to OneDrive for Business.

Step 2: Create an Office Script to change the county shown in cell B2

The aim of this exercise is to generate one copy of the report above for each county in a list of counties, so the next thing I did was create a parameterised Office Script that takes the name of a county and changes the county name used in the formula in cell B2. To do this I opened the Excel report in Excel Online, started the script recorder, changed the formula in B2 and then stopped recording. I then edited this script to take a parameter for the county name (called county) to use in the formula. Here’s the script:

function main(workbook: ExcelScript.Workbook, county: string) {
	let selectedSheet = workbook.getActiveWorksheet();
	// Set range B2 on selectedSheet
	selectedSheet.getRange("B2").setFormulaLocal("=CUBEMEMBER(\"Price Paid\", \"[Property Transactions].[County].[All].[" + county + "]\")");
}

Step 3: Create a list of counties to pass to the script

Next, I created a second Excel workbook containing a table that contained the county names to pass to the script and saved this to OneDrive for Business too:

Step 4: Create Power Automate flow to call the script once for each county in the Excel table

Last of all, I created a Power Automate flow that reads the county names from the table in the previous step, runs the script for each county, creates a copy of the original Excel report after each script run and then saves it to a folder. Here’s the flow at a high level:

In more detail, here’s the setup for the ‘List rows present in a table’ action:

Here’s the ‘Run script’ action:

Here’s the expression used to get the current county name in the loop:

items('CountyLoop')?['Counties']

…and here’s the expression used to create the destination file path:

concat('/Office Scripts Reporting/Reports/',items('CountyLoop')?['Counties'],'.xlsx')

Running this flow results in three Excel workbooks being created, one for each county with the county name in the workbook name, stored in a folder like so:

Here’s the report in BATH AND NORTH EAST SOMERSET.xlsx:

Of course I could do other things at this point like email these workbooks to different people, but there’s no need to overcomplicate things – I hope you’ve got the idea.

A few last points to make:

  • Office Scripts don’t seem to work with PivotTables connected to Power BI datasets yet – I’m sure it’s just a matter of time before they do though
  • How is this different from using Power Automate to call the Power BI export API? A paginated report can be exported to Excel but this method gives you a lot more flexibility because it allows you to use a lot more Excel functionality, not just the functionality that paginated reports can use in their exports. It also gives you a report that is connected live back to a dataset using cube functions, not static data.
  • Generating large numbers of Excel reports like this is not something I like to encourage – why not view your report in the Power BI portal, especially now you can view live Excel reports connected to datasets there too? – but I know it’s something that customers ask for.
  • I haven’t done any performance testing but I suspect that this method may be faster than using the Power BI export API in Power Automate.

Exporting Data From Websites For Use In Power BI Using Power Automate UI Flows

As a Power BI developer I know how Power Query makes it easy to get data from web sources, but I also know there are limits to what it can do. I can get data from tables on web pages, I can get data from web services, but when confronted with a website where you have to click a button to download a file there’s a problem. Take https://data.police.uk/data/ for example:

This site allows you to download crime data for England and Wales, but to do so you need to check some boxes and then click the “Generate file” button at the bottom (yes I know there’s also an API, but this is only an example). You can’t click a button in Power Query.

Enter Power Automate UI Flows. I’ve been a fan of Power Automate for a while now but it was only the release of the new Power Automate Desktop preview last week that I decided to check out its Robotic Process Automation (RPA) capabilities. Basically, if you’re still doing lots of manual downloading, clicking, moving files etc before you can load your data into Power BI then Power Automate UI Flows can do all this for you automatically by recording and replaying everything you do.

Now I’m not a UI Flows expert by any stretch of the imagination, but within an hour I was able to create a UI Flow that:

  • Opened a browser and went to https://data.police.uk/data/ (quick tip: if you want to save a file from a browser you need to use the special Automation Browser, not Chrome, Edge or Firefox)
  • Checked the boxes to download crime, outcome and stop and search data for Thames Valley Police
  • Clicked the Generate file button
  • Clicked the Download Now button on the next screen when it appeared
  • Downloaded the resulting zip file to a particular location
  • Unzipped the file to a subfolder
  • Deleted the zip file

Here’s what it looked like in Power Automate Desktop:

Now this is my first ever UI Flow so it’s probably not as robust as it could be, but it seemed to work ok. I was then able to call it from Power Automate and refresh a dataset once the UI Flow had finished:

I guess the point of this post is not to tell you how to use Power Automate UI Flows to extract data from web pages for use with Power BI, just that it’s there for you to use if you need it. It’s important to be aware of the other tools that are available in the Power Platform and understand how they can be used alongside Power BI, rather than focusing on trying to solve your problems with Power BI alone. Not only are there a lot of cool things happening with Power Automate that are directly relevant to Power BI (such as the new Export actions and the upcoming ability to kick off a Flow when a Power BI dataflow finishes refreshing – see the demo at 33:00 here) but there are other features in Power Automate like UI Flows that can be used for BI purposes, such as the ability to run an Office Script from a Flow which can be useful for preparing Excel data for Power BI.

If you want to learn more about Power Automate Desktop there’s a good multi-part tutorial on YouTube starting here; there’s also a three-part series on extracting data from web pages too (part 1, part 2, part 3) although it uses an example that Power Query handles much more elegantly 😉

Calling The Power BI Export API From Power Automate, Part 2: Creating A Flow That Exports A Paginated Report To A CSV File

In my last post I showed how to create a Power Automate custom connector for the new endpoints in the Power BI REST API for exporting a report to a file. In this post I’ll show you how to use this custom connector in a flow in Power Automate.

The three endpoints in the Power BI API that need to be called to export a report (and which have been included in the custom connector from my last post) are as follows:

  • Export To File In Group must be called first, to start the process of exporting either a Power BI report or a paginated report to a file. Exporting may take some time, though, so the exported file doesn’t get returned at this point. In the headers of the response there is a retry-after value in seconds telling you how long you must wait initially, before…
  • …calling Get Export To File Status In Group to find out whether the export is ready or not. If it isn’t ready you again need to wait the number of seconds specified in the retry-after header before checking the status again. If it is ready, you can then…
  • …call Get File Of Export To File In Group to download the export file.

This is all described in the C# example code here, so the challenge is to translate this into a flow. In my case, I want to export the following paginated report to a CSV file:

Here’s what my flow looks like at the top level:

To make things easy to understand I’m using a manual trigger to start the flow and hard-coding the report I want to export. The second Action calls Export To File In Group:

You will need to go into the Settings for this Action and set the Asynchronous Pattern option to Off:

As I said, in the response from this first API call the retry-after header tells you how many seconds you should wait before checking the export status for the first time. This value is used in the next Action, which adds the delay:

Once this delay has passed there’s a Do loop that will call the Get Export To File Status In Group endpoint up to ten times to check if the export is ready:

This call uses the export id found in the response from the call to Export To File In Group:
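In expression terms this is something along these lines (assuming the Action that calls Export To File In Group is simply named “Export To File In Group”; the export id is returned in the body of that Action’s response):

body('Export_To_File_In_Group')?['id']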

If this call returns that the status is Running or NotStarted, then it will again wait for the number of seconds specified in the retry-after header; if not then there is no delay:

The expression on the Do loop breaks the loop if the export status is Succeeded or Failed:

@or(
equals(body('Call_Export_Status_endpoint_to_check_if_export_is_ready')?['status'], 'Succeeded'),
equals(body('Call_Export_Status_endpoint_to_check_if_export_is_ready')?['status'], 'Failed')
)

After the loop, there is a check to see if the export status is Succeeded:

If the export status is Succeeded, then the file is downloaded and saved to OneDrive for Business:

Otherwise, the flow sends a failure notification email:

And here’s the exported CSV file viewed in Excel Online:

And that’s it. I know this flow isn’t as robust as it could be, but I hope it gives you an idea of how to use the custom connector to export Power BI reports in Power Automate; as I said last time, I have very little Power Automate experience so please forgive any newbie mistakes! If you can see a way to improve on what I’ve done here, please leave a comment.

Automated exports from a Power BI paginated report to a CSV file is already very useful but there are other fun things we can do with this – and in my next post I’ll show you another, less obvious example.