If you’ve looked into using Copilot with Power BI, you’ll know that you need a capacity to use it: a P1/F64 or greater. Something I learned recently, though, is that the current Power BI Copilot capabilities are only accessible via a side pane in a report, it’s the report that needs to be stored in a workspace on a P1/F64+ capacity, and that’s where the CUs for Copilot are consumed. If the report has a live connection to a semantic model stored in a workspace that isn’t on a capacity (which we at Microsoft refer to as “Shared”, but which is widely known as “Pro”), then Copilot still works!
For example, I published a semantic model to a workspace stored on Shared/Pro:

I then built a report with a Live Connection to that semantic model and saved it to a workspace on a P1 capacity:

I then opened this report, used Power BI Copilot, and saw that a) it worked and b) the CUs were attributed to the workspace where the report was stored when I looked in the Capacity Metrics App:

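If you want to double-check this kind of setup programmatically, here’s a minimal sketch of my own (not part of the walkthrough above) that calls the Power BI REST API to list your workspaces and show which ones are on a dedicated capacity. The access token and the two workspace names are placeholders you’d swap for your own:

```python
import requests

ACCESS_TOKEN = "<Azure AD access token for the Power BI REST API>"
MODEL_WORKSPACE_NAME = "Semantic Models (Pro)"   # hypothetical names for the two workspaces
REPORT_WORKSPACE_NAME = "Reports (P1)"

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# GET /groups returns the workspaces the caller can access, including the
# isOnDedicatedCapacity flag and, for workspaces on a capacity, the capacityId
workspaces = requests.get(
    "https://api.powerbi.com/v1.0/myorg/groups", headers=headers
).json()["value"]

for ws in workspaces:
    if ws["name"] in (MODEL_WORKSPACE_NAME, REPORT_WORKSPACE_NAME):
        print(
            f"{ws['name']}: isOnDedicatedCapacity={ws['isOnDedicatedCapacity']}, "
            f"capacityId={ws.get('capacityId', 'none')}"
        )

# Expected: the model workspace shows isOnDedicatedCapacity=False, while the report
# workspace shows True plus the ID of the capacity whose CUs Copilot consumes.
```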
This is particularly interesting if you have a large number of semantic models stored in Shared/Pro workspaces today but still want to use Power BI Copilot. It means you don’t have to move those semantic models into a workspace on a capacity, so refreshing and querying them won’t consume CUs on a capacity, which in turn means you’ll be able to support more Power BI Copilot users on your capacities.
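To make that split concrete, here’s another hedged sketch of mine showing how you might script assigning only your report workspaces to a capacity (using the documented Groups – AssignToCapacity endpoint) while leaving the model workspaces on Shared/Pro. The IDs and token are placeholders, and the caller needs admin rights on the workspaces plus assignment permission on the capacity:

```python
import requests

ACCESS_TOKEN = "<Azure AD access token for the Power BI REST API>"
CAPACITY_ID = "<ID of your P1/F64+ capacity>"
REPORT_WORKSPACE_IDS = [
    "<report workspace ID 1>",
    "<report workspace ID 2>",
]

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

for workspace_id in REPORT_WORKSPACE_IDS:
    # POST /groups/{id}/AssignToCapacity moves the workspace onto the capacity;
    # posting the all-zeros GUID as capacityId instead moves it back to Shared/Pro
    url = f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/AssignToCapacity"
    response = requests.post(url, headers=headers, json={"capacityId": CAPACITY_ID})
    response.raise_for_status()
    print(f"Assigned workspace {workspace_id} to capacity {CAPACITY_ID}")
```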
In the future you’ll have even more flexibility. In the sessions “Microsoft Fabric: What’s New And What’s Next” and “Boost productivity with Microsoft Fabric, Copilot, and AI” at Ignite last week we announced a number of new Power BI Copilot capabilities coming soon, such as integration with AI Skills and a new “immersive” Copilot experience. We also announced Fabric AI Capacities, coming in early 2025, which will allow you to direct all Copilot activity to certain capacities regardless of whether your reports or models are stored on Shared/Pro or Premium, or which capacity they are stored on (see more details from the 16:00 mark in the recording of the “Boost productivity with Microsoft Fabric, Copilot, and AI” session).

Nice. But I think it would be worth a more technical article about how to do this in detail.
Hey Chris, appreciate the post as always! Do we have a sense of how CU-intensive these operations can be expected to be? There are obviously lots of dependencies (semantic model size, model approach, etc.), but I’m wondering if you can give some guidance. My org is looking to test adoption, but we’re worried that, with a pretty crowded P2, we might not be able to handle what I’m anticipating will be heavy CU operations.
This post might help: https://blog.crossjoin.co.uk/2024/03/17/how-much-does-copilot-cost-in-microsoft-fabric/ – although note that at the beginning of November this year we cut the CU cost of Copilot by 50%.