If you have a lot of Power BI semantic models that are scheduled to refresh at the same time in the Service then you may find that some of them fail with the following error:


You’ve exceeded the capacity limit for dataset refreshes. Try again when fewer datasets are being processed.
[Note: “dataset” is the old name for a Power BI semantic model. Someone should update the error message.]
What causes it? Each Fabric or Power BI Premium capacity SKU can support (and “support” is the operative word here, as we shall see) a certain number of concurrent semantic model refreshes. These limits are documented here, in the Model Refresh Parallelism column of the table on that docs page:

The error itself is documented here and I’ve mentioned it myself in a previous post here, but the interesting thing about the limit on the number of concurrent refreshes is that there’s a lot more to it than you might expect: Power BI is very forgiving.
Before I go any further, it’s important to make clear that this error has nothing to do with how many CUs you are using on your capacity at the time of the error, although the limits are in place to stop you overloading your capacity: running multiple semantic model refreshes at the same time could cause a sizeable increase in CU consumption even after smoothing.
To investigate how this limit is applied, I created an F2 capacity, added a workspace to that capacity, and uploaded several identical Power BI semantic models to that workspace. I used some Power Query magic to control how long those semantic models took to refresh.
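The exact query isn’t shown in the post, but one well-known way to achieve this in Power Query is Function.InvokeAfter, which waits for a given duration before invoking a function. Here’s a minimal sketch along those lines; the single-row table it returns is just a placeholder:

    let
        // How long the refresh should take, in seconds
        DelayInSeconds = 120,
        // Function.InvokeAfter waits for the given duration before invoking
        // the function, so loading this query takes at least that long
        Source = Function.InvokeAfter(
            () => #table(
                type table [RefreshedAt = datetime],
                {{DateTime.LocalNow()}}
            ),
            #duration(0, 0, 0, DelayInSeconds)
        )
    in
        Source

Publishing several identical models built on a query like this makes it easy to generate predictable, overlapping refreshes.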
For my first test I configured two semantic models so they took 120 seconds to refresh and started a manual refresh on both at the same time. Now, looking at the table above, you might think that because an F2 only supports one concurrent semantic model refresh I would get an error. But no: both semantic models refreshed successfully and both refreshes took 120 seconds. The published limit is the number of semantic models that Power BI guarantees can be refreshed concurrently; in practice the limit may be exceeded.
Next, I started a manual refresh on six semantic models that were all configured to take 120 seconds to refresh. Again, they all refreshed successfully and all took 120-122 seconds to refresh. Finally, I started a manual refresh on fifteen semantic models that again were configured to take 120 seconds to refresh, and this is where I saw something different. All of the semantic models refreshed successfully in the end, and none showed the warning triangle seen in the first screenshot above. Most of the semantic models took 120-122 seconds to refresh but some took longer. For example, take a look at this Refresh History for one of the models:

The overall refresh was successful but took 305 seconds, not 120 seconds. This is explained by the refresh failing immediately with the “You’ve exceeded the capacity limit for dataset refreshes” error, then the Service waiting a minute to retry the refresh (for more information on automatic refresh retries see here), which resulted in the same error occurring again, then the Service waiting a further two minutes before retrying the refresh again, at which point it succeeded and took 122 seconds. The arithmetic works out: 60 seconds of waiting, plus 120 seconds of waiting, plus the 122-second refresh comes to 302 seconds, which with a few seconds of overhead accounts for the 305 seconds reported.
So you can see what I mean when I say Power BI is very forgiving about these limits. It’s also worth mentioning that scheduled refreshes don’t always happen at exactly the time they are scheduled for. The Service may wait several minutes after the scheduled time before it tries the first refresh. This is what is meant by the statement in the docs here that “You can schedule and run as many refreshes as required at any given time, and the Power BI service runs those refreshes at the time scheduled as a best effort.”
In other tests with the same number of semantic models but longer refresh times, I was able to observe a scenario where a refresh scheduled for 17:30 did not start until almost eight minutes after that time and then failed nine times before it succeeded; note that the amount of time the Service waited to retry after the second failure went up to five minutes:

Of course Power BI can’t keep retrying indefinitely and eventually refreshes will fail with the “You’ve exceeded the capacity limit for dataset refreshes” error. Here’s the Refresh History for a semantic model where refresh ultimately failed after four retries (this took a lot of concurrent, slow refreshes to repro):

If you’re encountering this error then the solution is obvious: reduce the number of refreshes that are happening at any given time. But how do you know which refreshes are scheduled for when, and how long they will take? The Refresh Schedule page for your capacity in the Admin Portal gives you a summary of the number of semantic models that are predicted to be refreshed in each 30-minute time slot and how long they are likely to take. The Fabric Monitoring Hub gives you details of historical activity. And if you have Workspace Monitoring or Log Analytics configured on your workspace you can get a lot of detail on what happens when refreshes are run, including seeing when the “You’ve exceeded the capacity limit” error occurs and when refreshes are retried.
Once you know what is being refreshed and when, you need to do two things. First, see if you can reduce the number of times any given semantic model is refreshed. It’s pretty common for users to configure their model to refresh multiple times a day even if the actual data source only changes once a day, for example, so easy wins may be possible. Second, tuning the amount of time refreshes take can also reduce the number of concurrent refreshes: if your semantic models refresh quickly they are less likely to overlap with other refreshes. Tuning data sources, tuning Power Query, increasing refresh parallelism, removing unnecessary columns or tables, tuning the DAX used in calculated columns and tables or replacing those calculated columns and tables with pre-calculated data in the data source, and using incremental refresh (sketched below) are some of the things you will need to look at. Scaling up to a larger capacity, or buying an additional (possibly smaller) capacity and moving some workspaces over to it, will of course also solve the problem because the limits are per capacity.
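To illustrate the last of those options: incremental refresh relies on two reserved datetime parameters, RangeStart and RangeEnd, which you create and then use to filter the table you want to refresh incrementally; the Service substitutes different values into them for each partition. A minimal sketch of the filtering step, using a hypothetical SQL Server source and table, might look like this:

    let
        // Hypothetical source; server, database, schema and table names
        // are placeholders
        Source = Sql.Database("myserver", "mydb"),
        Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
        // Filter on the reserved RangeStart/RangeEnd parameters so the
        // Service only reprocesses partitions whose data has changed; note
        // one boundary is inclusive and the other exclusive so rows aren't
        // loaded into two partitions
        FilteredRows = Table.SelectRows(
            Sales,
            each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
        )
    in
        FilteredRows

Because each refresh then reads far less data, it finishes faster and is less likely to overlap with other refreshes on the capacity.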
In summary, what this shows is that the published, supported limits on the number of concurrent semantic model refreshes in the Power BI Service are a lot lower than what is achievable in practice. This is very important in self-service BI scenarios because it means refreshes are a lot less likely to fail than they would otherwise be. But if you are refreshing a lot of semantic models, exceed the published limits on a regular basis and find that some of your refreshes fail, then you have no choice but to take some of the actions described above to get back under the limits.