Power BI Semantic Model Memory Errors, Part 1: Model Size

You probably know that semantic models in Power BI can only use a fixed amount of memory. This is true of all types of semantic model – Import, Direct Lake and DirectQuery – although it's not something you usually need to worry about for DirectQuery mode. How much memory a model can use depends on whether you're using Shared (aka Pro) or a Premium/Fabric capacity and, if you're using a capacity, how large that capacity is. In Shared/Pro the maximum amount of memory that a semantic model can use is 1GB; if you are using a capacity then the amount of memory available for models in each SKU is documented in the Max Memory column of the table here.

What counts as “memory usage” though? More importantly, how can you breach this limit and what do all of the different memory-related error messages that you might see mean? In this series I will try to answer these questions, and in this post I will look at one particular error you see when your model needs to use more memory than it is allowed to.

First of all, it's important to understand that the amount of memory used by a semantic model is not the same as the amount of data "in" the model. The diagram below shows how model memory usage can be broken down. The data in the columns and tables of your model, along with supporting objects like relationships (represented by the blue box in the diagram), makes up just one part of the overall model memory usage. In addition, more memory is needed to store data associated with row-level security, user sessions, caches and so on (represented by the orange box in the diagram).
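
One way to get a rough feel for this breakdown is to run a DMV query against the model through the XMLA endpoint, for example from DAX Studio. The query below is only a sketch – the column names come from the Analysis Services DISCOVER_OBJECT_MEMORY_USAGE schema rowset, so double-check them against the documentation before relying on them:

SELECT
    OBJECT_PARENT_PATH,
    OBJECT_ID,
    OBJECT_MEMORY_SHRINKABLE,
    OBJECT_MEMORY_NONSHRINKABLE
FROM $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE

Each row shows how much memory an individual object is currently using, which covers supporting structures as well as the data itself.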

Both Import mode and Direct Lake models can page data in and out of memory as required, so the whole model may not be in memory at any given time. However, in order for a query to run, the data it needs must be in memory and cannot be paged out until the query has finished with it. Therefore, of all the memory consumed by a semantic model at any given time, some is "evictable" because it isn't in use, while some is "non-evictable" because it is being used. Evictable data may be paged out for a variety of reasons, for example because the model is nearing its allowed memory limit.
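
For Direct Lake models you can get an idea of which column segments are currently resident in memory from the storage DMVs. Again, treat the following as a sketch: the ISPAGEABLE, ISRESIDENT, TEMPERATURE and LAST_ACCESSED columns are relatively recent additions to this DMV, so check what is available in your environment:

SELECT
    TABLE_ID,
    COLUMN_ID,
    SEGMENT_NUMBER,
    USED_SIZE,
    ISPAGEABLE,
    ISRESIDENT,
    TEMPERATURE,
    LAST_ACCESSED
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS

Segments that are not resident are paged out and only count towards the model's memory usage once a query needs them again.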

One further factor to take into account is the memory used by queries that are running on the model (the purple boxes in the diagram above). While each query has a limit on the amount of memory it can use – I mentioned the Query Memory Limit in this post but I will revisit it later on in this series – the total amount of memory used by queries also contributes to the overall memory use of a semantic model. If you have a large number of concurrent queries running, even if no single query uses much memory, this can contribute a lot to the overall memory usage of your model.
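
To see how much memory the sessions on a model are using at a given moment, the sessions DMV is one place to look. This is a sketch based on the DISCOVER_SESSIONS schema rowset; SESSION_USED_MEMORY is, I believe, reported in kilobytes, but check the documentation to be sure:

SELECT
    SESSION_SPID,
    SESSION_USER_NAME,
    SESSION_USED_MEMORY,
    SESSION_LAST_COMMAND
FROM $SYSTEM.DISCOVER_SESSIONS

Adding up SESSION_USED_MEMORY across all the sessions on a model gives a rough idea of how much of the memory limit is being taken up by query activity at that point in time.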

In summary then, the total amount of memory used by a semantic model is made up of three groups:

  1. The data in the tables in your model (the blue box above)
  2. Supporting data for row-level security (RLS) roles, sessions and caches (the orange box above)
  3. Data used by queries (the purple boxes above)

When the sum of these three groups exceeds the total amount of memory allowed for your model, and no data can be evicted from memory to reduce this sum, then you’ll get an error.

To illustrate this I created a new F2 capacity, which has a 3GB limit on the amount of memory used by a semantic model, loaded a table (called SourceData) with 3.5 million rows of random numbers stored as text into a Lakehouse, then created a new custom Direct Lake semantic model on it. I set the Direct Lake Behavior property on the model to “Direct Lake only” to prevent fallback to DirectQuery mode.

After creating the model I used DAX Studio's Model Metrics feature, with the "Read statistics from data" option turned off, to find the amount of data stored in memory (i.e. the blue box value).

Unsurprisingly, at this stage, the size of the model was very small: only 8KB.

I then turned the “Read statistics from data” option on, knowing that this would force data to be paged into memory. This showed the total potential size of the model to be 4.25GB:

I was initially confused by this because 4.25GB is already well over the 3GB limit, but it was pointed out to me that what is probably happening is that DAX Studio runs a number of DMV queries to get the data needed to calculate this value, and as it does so different parts of the model are paged in and out of memory. DAX Studio was certainly very slow to calculate the Model Metrics when I did this, which fits with the paging in/out theory.
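
If you want to see which columns account for most of that 4.25GB, the per-column storage DMV is worth a look: random numbers stored as text compress very badly, so most of the space will be in the column dictionaries. This is just a sketch, and bear in mind that on a Direct Lake model running it may itself cause data to be paged in:

SELECT
    TABLE_ID,
    COLUMN_ID,
    DICTIONARY_SIZE
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS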

Finally, I ran a simple DAX query to get the top 10 rows from the SourceData table:

EVALUATE TOPN(10, SourceData)

This query ran for about ten seconds and then failed with the following error message:

Resource Governing: We cannot complete the requested operation because there isn’t enough memory (consumed memory 4620 MB, memory limit 3072 MB). Either reduce the size of your dataset, such as by limiting the amount of in-memory data, or host the dataset on a Fabric or Premium capacity with a sufficient memory size. See https://go.microsoft.com/fwlink/?linkid=2159753 to learn more.

[The error code associated with this message is 0xC13E0006 or -1052901370]

This is the error that you get when your model needs to use more memory than it is allowed to use on the capacity SKU it is running on. The query references every column from the only table in the model, which means the whole table – which is effectively the whole model – would have to be paged into memory for the query to run, but the whole model requires more memory than is available on an F2 capacity.
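
The flip side of this is that a query which only references a subset of a table's columns only needs those columns paged into memory. For example, a query along these lines (Column1 is just a hypothetical column name, not one from the model used in this post) might well succeed on the same capacity because it only touches a single column:

EVALUATE
// Only the referenced column needs to be paged in
// (assumption: a column called Column1 exists in SourceData)
TOPN (
    10,
    SELECTCOLUMNS ( SourceData, "Column1", SourceData[Column1] )
)

This is also why reducing the number of columns in a model, or removing very high-cardinality columns, is often the first thing to try when you hit this error.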

If you aren’t getting this exact error message then something slightly different might be happening. In future posts in this series I will look at some of these other errors including the query memory limit and the command memory limit.

[Thanks to Marius Dumitru for the information in this post]

15 thoughts on “Power BI Semantic Model Memory Errors, Part 1: Model Size”

  1. @Chris Webb, is this memory limit restricted to a single dataset, or does it apply if I have multiple datasets?

      1. Silly question: is the max memory limit a ceiling for all concurrently loaded semantic models? i.e. if it's 10GiB can I, simplistically, have either 2 x 5GiB or 1 x 10GiB?

      2. The limit applies to each model, so if it's 10GB you can have as many 10GB models in memory at the same time as you want.

  2. I got this error very often when I had just 16GB of RAM on my local computer. I had a scenario where calculated tables were a real lifesaver (a dynamic multi-KPI scorecard with 30 KPIs, 12 time intelligence versions, quite complicated conditional formatting, tooltips and drill-down).

    A star schema couldn't handle it performance-wise, so I needed pre-aggregation. Time intelligence in Power Query or SQL is outside my expertise, so yes – calculated tables were used to precompute the numbers for the scorecard. Evaluating those calculated tables was so heavy that it threw an out-of-memory error. The solution? Upgrading to 64GB of RAM :D. I was quite angry at myself that I couldn't figure out a smarter approach.

    1. If you're getting memory errors on your local PC then yes, you can buy more memory and/or change the memory limits in Power BI Desktop: https://blog.crossjoin.co.uk/2023/06/18/the-visual-has-exceeded-the-available-resources-error-in-power-bi-desktop/ But your local PC is not really relevant; you can always develop with a smaller amount of sample data there. What is important is what happens when you publish to the Service and whether your model is too large for the limits there.

  3. If you allowed for direct query fallback, would you still get a memory error?

    Looking forward to more discussion on tips and tricks for how to work around these issues. With a very large star schema model, how do we give our users access to the data while providing appropriate guardrails to keep them from hitting memory limits or causing these errors for others?

  4. Hi Chris, Thanks for the blog.

    It is a pretty common thing for PBI developers to get the governor error:
    “Resource Governing: This operation was canceled because there wasn’t enough memory to finish….”

    If you google, you find that this often happens in the "calculate" processing phase for calculated tables and columns. Those are the prime culprits. From the explanations other users have given, it seems this is because the model has to be "uncompressed" in RAM to perform the calculation and store the results.

    It is a hard-to-manage problem, so I usually avoid doing anything too fancy in calculated tables and columns. However, I was shocked one day when I found out that similar concerns apply to custom user hierarchies. You would think that the creation of a hierarchy would be a "first-class" operation during processing and would not have the same extreme memory requirements as the other types of calculations that happen. However, I gave up because of memory constraints and had to forego my user hierarchies simply because the model would not allow them to be generated.

    The data wasn't huge. The underlying model was about 3 GB after "dataOnly" processing. The table in question had no calculated columns, only a single user hierarchy. When the "dataOnly" processing was complete and I proceeded with the "calculate" processing, it consumed all my available P1 capacity. It was a fairly simple 3-level hierarchy with high cardinality only on the bottom level. The resource governing error said that the model used 22 GB at the point the processing was stopped.

    The underlying partitions of that table were only about 50 million rows each. I thought breaking the table into partitions would help, but it didn't make any difference. I'm guessing that user hierarchies have to be evaluated and stored for the whole table all at once.

    I never really found a path forward, and the only lesson learned was that tabular models don't really like having a custom hierarchy on a large table. Oddly, the tabular model has no problem creating the "implicit" (single-column) hierarchy on that high-cardinality column of data, but it didn't want to create MY custom hierarchy with two additional columns in it.

    For illustration, here is a blog post showing what these spikes can do to the RAM:
    https://prologika.com/high-memory-usage-and-calculated-columns/
    … this illustration relates to calculated columns but a similar thing applies to hierarchies in Power BI.

    I suspect you are very familiar with the big memory spikes during the "calculate" phase of processing, but perhaps you weren't aware that this applies to hierarchies too. It seems a bit counter-intuitive based on my background with multidimensional OLAP: hierarchies were a pretty fundamental component of a dimension, even the degenerate ones, and I never had any reason to forego the creation of a hierarchy that users requested.


  5. @Chris Webb, can you please share your thoughts on the throttling errors that often occur? Is there any way to monitor and fix them?
