This is a very late addition to the series of posts on Power BI memory errors that I wrote back in 2024, which started here. It's about a very rare error that is hard to deal with and is often temporary, but since people do run into it from time to time I decided to write about it so there is some useful information available online.
The error, which can occur when you refresh a semantic model or render a report, has two associated error messages:
The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.
or more commonly:
You have reached the maximum allowable memory allocation for your tier. Consider upgrading to a tier with more available memory
The error number associated with it is 0xC11C0005, or -1055129595 when interpreted as a signed 32-bit integer.
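Those two numbers are in fact the same value: -1055129595 is just the hex HRESULT 0xC11C0005 reinterpreted as a signed 32-bit integer, which is why you may see either form depending on where the error is logged. A quick, purely illustrative Python check:

```python
# 0xC11C0005 as an unsigned 32-bit value exceeds 2**31 - 1, so when it is
# read back as a signed 32-bit integer it wraps around to a negative number.
hresult = 0xC11C0005
signed = hresult - 2**32 if hresult >= 2**31 else hresult
print(hex(hresult), signed)  # 0xc11c0005 -1055129595
```

So if you are searching logs or documentation for this error, it is worth looking for both representations.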
What causes it? This needs a bit of explanation, and what follows is an over-simplification…
When you publish a Power BI semantic model to the Service it runs on one of hundreds of physical machines – nodes – alongside semantic models published by other people. The Service always tries to place your semantic model on a node that has enough memory and CPU available for it to be queried or refreshed; if it decides that isn't the case, the semantic model is moved to a different node. The tricky thing is that the amount of memory or CPU available depends on whether the other semantic models on the same node are being refreshed or queried at any given time, and on how resource-intensive those queries and refreshes are.

The limits on memory consumption that I wrote about in the previous posts in this series exist to stop any one semantic model consuming too much memory and causing problems for the other semantic models on the same node. While the algorithms that determine which semantic models are held together on a given node are very sophisticated (and are being improved all the time), sometimes something unexpected happens and the necessary resources aren't available for a refresh or query. The errors above occur when the node your semantic model is held on is under memory pressure.
What can you do about it? That's a hard question to answer: it depends on whether your semantic model is part of the problem or not. If you only get this error once (and as I said, it's a very rare error indeed) then you can ignore it – it's just bad luck. However, if you get this error repeatedly then it's very likely that your semantic model is causing a memory spike, and even if you aren't hitting any other memory limit you are probably coming close, so you should do some tuning.

If you get this error when rendering a report, look at the DAX queries generated by your visuals and work out whether you can reduce their memory usage by remodelling your data or rewriting the DAX in your measures. If you get this error when refreshing your semantic model, see if you can reduce its size by remodelling your data, or reduce memory consumption in other ways – for example by removing calculated columns or calculated tables and replacing them with columns and tables created in your data source. For more information on how to measure memory consumption for a query or refresh, see the other posts in this series.
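Since the error is often transient, one pragmatic mitigation for scheduled or scripted refreshes is to retry on this specific error code with exponential backoff. The sketch below is purely illustrative: `do_refresh` is a hypothetical callable you supply yourself (for example, something that calls the Power BI REST API or an XMLA endpoint and raises an exception carrying an integer `error_code` on failure). The point is the error-matching and backoff logic, not the refresh call:

```python
import time

# The "not enough memory on the node" error, in both the forms it appears:
# as a hex HRESULT and as a signed 32-bit decimal number.
TRANSIENT_MEMORY_ERRORS = {0xC11C0005, -1055129595}

def is_transient_memory_error(code: int) -> bool:
    """Treat the unsigned and signed representations of the HRESULT as equal."""
    return (code & 0xFFFFFFFF) in {c & 0xFFFFFFFF for c in TRANSIENT_MEMORY_ERRORS}

def refresh_with_retry(do_refresh, max_attempts=4, base_delay=30):
    """Call do_refresh(); retry only on the transient memory error.

    do_refresh is a hypothetical callable supplied by you; on failure it
    should raise an exception with an integer `error_code` attribute.
    Waits base_delay, 2*base_delay, 4*base_delay, ... seconds between tries.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return do_refresh()
        except Exception as e:
            code = getattr(e, "error_code", None)
            if code is None or not is_transient_memory_error(code) or attempt == max_attempts:
                raise  # not our error, or out of attempts: surface it
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Note that retrying only papers over the symptom: if the refresh fails repeatedly even with backoff, that is exactly the "your model is part of the problem" case described above, and tuning is the real fix.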