If you’re using deployment pipelines with Direct Lake semantic models in Power BI, you’ll have found that when you deploy your model from one stage to another, by default the model still points to the Lakehouse it was originally bound to. So, for example, if you deploy your model from your Development stage to your Test stage, the model in the Test stage still points to the Lakehouse in the Development stage. The good news is that you can use the deployment rules feature of deployment pipelines to make sure the model in the Test stage points to a Lakehouse in the Test stage, and in this post I’ll show you how.
To illustrate how to do this, I created a workspace to represent a dev environment, with a Lakehouse containing a single table called MyTable that held the following data:

I then created a second workspace to represent a test environment with a Lakehouse containing a table with the same name and the same schema but with different data:

Then, back in the dev workspace, I created a custom semantic model pointing to the MyTable table and built a report on it to show the data:

Here’s what the dev workspace looked like:

I then created a deployment pipeline, assigned the two workspaces to the Development and Test stages of it, and configured it so that the semantic model and report would be deployed from Development to Test:
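As an aside, if you’d rather script this setup than click through the UI, the Power BI REST API’s “Pipelines - Assign Workspace” endpoint can attach a workspace to each stage. Here’s a rough Python sketch rather than anything definitive: the access token, pipeline ID and workspace IDs are placeholders you’d supply yourself, and I’m assuming the usual convention that stage order 0 is Development and 1 is Test.

```python
import requests

# Hypothetical placeholders: an Entra ID access token with the
# Pipeline.ReadWrite.All scope, plus the pipeline and workspace IDs.
ACCESS_TOKEN = "<access-token>"
PIPELINE_ID = "<deployment-pipeline-guid>"
DEV_WORKSPACE_ID = "<dev-workspace-guid>"
TEST_WORKSPACE_ID = "<test-workspace-guid>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def assign_workspace(stage_order: int, workspace_id: str) -> None:
    """Assign a workspace to a pipeline stage (0 = Development, 1 = Test)."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/pipelines/"
        f"{PIPELINE_ID}/stages/{stage_order}/assignWorkspace"
    )
    resp = requests.post(url, headers=HEADERS, json={"workspaceId": workspace_id})
    resp.raise_for_status()

assign_workspace(0, DEV_WORKSPACE_ID)
assign_workspace(1, TEST_WORKSPACE_ID)
```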

I then did a deployment and, as mentioned, although the report and the semantic model were moved to the Test stage workspace, the model still pointed to the Lakehouse in the Development stage workspace, so the report showed the same data as before:
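You can confirm what the deployed model is actually bound to without opening the UI: the “Datasets - Get Datasources In Group” REST API returns the Server and Database values for each data source. A minimal Python sketch, again with the token and IDs as placeholders:

```python
import requests

# Hypothetical placeholders: the Test workspace and semantic model IDs,
# plus an access token with dataset read permissions.
ACCESS_TOKEN = "<access-token>"
TEST_WORKSPACE_ID = "<test-workspace-guid>"
MODEL_ID = "<semantic-model-guid>"

url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{TEST_WORKSPACE_ID}/datasets/{MODEL_ID}/datasources"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

# Each data source exposes connectionDetails containing the Server (the SQL
# connection string) and Database (the SQL Endpoint ID) the model points at;
# straight after deployment these still reference the Development Lakehouse.
for ds in resp.json()["value"]:
    details = ds.get("connectionDetails", {})
    print(ds.get("datasourceType"), details.get("server"), details.get("database"))
```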

To fix this I created a deployment rule using the “Data source rules” option. This is where things get tricky. When you create the rule you can see that the custom model on the From side has a Database and a Server property that determine which Lakehouse it is pointing to:

This means that on the To side you need to choose the “Other” option in the dropdown box and enter values for the Database and Server:

What do these properties mean?
- The Database property is the ID of the SQL Endpoint of the Lakehouse the model points to. You can find it by opening the SQL Endpoint UI of the Lakehouse in the browser and copying the GUID from the part of the URL that immediately follows “lakehouses”:

- The Server property is the SQL connection string of the SQL Endpoint, which you can find by clicking the Gear icon on the menu in the SQL Endpoint UI to open the Settings pane (there’s a short sketch showing both values after this list):
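To make those two values concrete, here’s a small illustrative Python snippet. The SQL Endpoint URL below is made up, but it shows where the Database GUID sits relative to the “lakehouses” segment of the URL, and the final comment notes where the Server connection string comes from:

```python
from urllib.parse import urlparse

# Made-up example: the URL in the browser when the Test Lakehouse's SQL
# Endpoint is open. The GUID after "lakehouses" is the Database value.
sql_endpoint_url = (
    "https://app.powerbi.com/groups/<workspace-guid>/lakehouses/"
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee?experience=power-bi"
)

path_parts = urlparse(sql_endpoint_url).path.split("/")
database_id = path_parts[path_parts.index("lakehouses") + 1]
print(database_id)  # -> aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

# The Server value is the SQL connection string copied from the SQL Endpoint's
# Settings pane, e.g. something ending in ".datawarehouse.fabric.microsoft.com".
```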

I changed the settings in the data source rule to the values from the Lakehouse in the Test workspace and then redeployed. After doing that, the report in my Test workspace showed the data from the Lakehouse in the Test workspace (note: I had to manually refresh the report first to flush out stale data from the cache):
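If you want to automate that last redeploy-and-refresh step, the “Pipelines - Deploy All” and “Datasets - Refresh Dataset In Group” REST APIs cover it. Treat the following Python as a sketch rather than a finished script: the IDs are placeholders, the request body reflects my understanding of the Deploy All API, and in practice you’d poll the deployment operation to completion before triggering the refresh.

```python
import requests

# Hypothetical placeholders for the pipeline, Test workspace and model IDs.
ACCESS_TOKEN = "<access-token>"
PIPELINE_ID = "<deployment-pipeline-guid>"
TEST_WORKSPACE_ID = "<test-workspace-guid>"
MODEL_ID = "<semantic-model-guid>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Deploy from the Development stage (stage order 0); the data source rule
# defined above is applied as part of the deployment.
deploy_url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll"
deploy_resp = requests.post(
    deploy_url,
    headers=HEADERS,
    json={
        "sourceStageOrder": 0,
        "options": {"allowCreateArtifact": True, "allowOverwriteArtifact": True},
    },
)
deploy_resp.raise_for_status()
# Deploy All runs asynchronously, so a real script would poll the returned
# operation until it finishes before moving on to the refresh.

# Refresh the model in the Test workspace to flush out any stale cached data.
refresh_url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{TEST_WORKSPACE_ID}/datasets/{MODEL_ID}/refreshes"
)
requests.post(refresh_url, headers=HEADERS).raise_for_status()
```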

Just what I needed to read! Love these posts!!!!
Hi Chris – really appreciate your site.
I’ve been trying to do this with semantic models that DirectQuery dataflows with enhanced compute on. I’m still on Premium capacity, so I have no options to play with Fabric objects.
I see two issues:
1. I am unable to change the source of the model to use the new target dataflow. The option is greyed out against the TEST version of the newly deployed semantic model.
2. More seriously, I see an issue with Excel (and possibly RDL) reports pointing to the model.
When I tried to convert a semantic model to DirectQuery I found that existing reports would no longer work. The error I get is something like fields needing to be fully qualified.
This leaves us stuck on Import mode, as the work to migrate all our reports across is too great, even though the model definition is really the same.
Any thoughts or articles on this please?
My first thought is: why do you want to do DirectQuery on a dataflow? It’s possible, but not a good idea. If you’re working with the kind of data volumes that dataflows support, you should be using Import mode.
Thanks Chris. Got it working by removing parameterization of source within dataflow.
Now as to why am I trying to DQ a dataflow. Good question and probably not a good answer.
Reading various articles on best practice, I am trying to move from
1. Source -> Complicated Semantic Model to
2. Source -> Staging Dataflows -> Linked/Computed Dataflows -> Semantic Model
This divides and conquers the Power Query logic, of which there is a lot in a model with many sources, some of them enriched by many more sources until they can eventually be folded into the model by the business.
3. I’m actually trying to break apart our model so that it could be easier to manage and possibly port to a Datamart (or whatever is mature in Fabric these days), where some models can remain as Import models and other models can use DQ if possible.
The end goal is to make sure that reports that touch the semantic model, whether it be Import or DQ, are not affected.
Hence the second part of the question about reports not being interchangeable from an old Import storage model to a Direct Query storage model.
Thanks for your time.
Be as critical as you must be. It’s fine, as it’ll save me hours pursuing technological dead ends or work for work’s sake.
It also really helps with orchestration. Refreshing the staging dataflow refreshes the linked entity dataflow, and a DQ model does not need a refresh. Once I need to import, that means adding a trigger or schedule that does not come out of the box. I’m also worried about getting charged to hold that data in multiple models when in some cases we can just look through to the dataflow.
The most important point is that you should only consider DirectQuery when you’re dealing with extremely large data volumes or when you need extremely low latency. No scenario where you’re using Dataflows matches either of those. DirectQuery is never going to perform as well as Import mode and relies a lot on the performance of the underlying source which, in this case, you can’t tune. So based on this my advice is: do not use DirectQuery on Dataflows, however tempting it might seem.
Thank you Chris.
I will not be using DQ on dataflows now. It is possible but will not be worth my while.
I’ll be looking up some more articles on your site in earnest to see whether Dataflows have a matching use case for us.
I really liked the idea of separating out entities and using the web engines for editing only, as the massive Power BI model was getting harder and harder to manage locally and we can no longer download it due to incremental refresh.
We still have one semantic model that is used rarely (once a month): it pulls two unique tables, and the rest of the model it needs it picks up via DQ of the existing main model (which we do not want to slow down with those large tables). No complaints so far, and if performance ever needed to be ramped up via Import then it could be done by moving all the shared entities into Dataflows, if benchmarking shows that refresh times and space usage are acceptable.
Thanks for all your help.
Hi Chris.
Very helpful post and I appreciate the detail. I used the solution you provided and it worked well, with one quirk which might be noteworthy.
The database identifier you showed is a GUID value, and I retrieved my environment’s database identifier GUID from the part of the URL as shown. I used my specific GUID in the Other -> Database specification for the data source rules. I redeployed from DEV to TEST and tested the report in TEST… which then reported a connection failure to the data source.
Figuring I had made a mistake in the Data Source Rule setup, I looked at the FROM… TO specification and saw that the FROM database specification was not a GUID but the text name of the DEV Lakehouse; in my case the value is “devlh”.
I followed this pattern with the TO specification and specified the text name of the TEST Lakehouse, in my case “testlh”. I saved the rule, redeployed, and it worked!
Would like to include a screenshot here for clarification, but not allowed.
Thanks again for the post and will use this technique going forward.
Paul Goldy
Hello,
Does this also work when I have a semantic model connected to a warehouse in Import mode?
I have tried, but Import mode asks for an implicit connection and therefore the model needs to be remapped manually when moving from the DEV to the TEST environment.
Let me know if there is a solution.
Don’t normal deployment rules or parameter rules work here?
Hi Chris,
When I follow the above process my Test semantic model falls back to DirectQuery, even though the Dev model I deployed successfully uses Direct Lake. I get this warning: “DAX queries may fall back to DirectQuery, if allowed, or show errors. To fix this, try refreshing the semantic model, scaling up the capacity, or avoid using SQL endpoint items, such as views.” I can also see it falling back to DirectQuery in Performance Analyzer.
Any thoughts as to what could cause this to happen?
Hi Chris, great post and detail on using deployment pipelines. Do you have any thoughts or suggestions for backwards deployments? I am in a scenario where I need to drop Prod down to Test, and doing so requires my Test workspace to be empty. I’m unable to assign any deployment rules for the deployment and I can’t find a way to change the binding on Direct Lake semantic models – they all still point to our Production Warehouse. I’m considering a workaround of creating a new deployment pipeline for Prod to push “up” to Test, but I’m wondering if you have any thoughts on alternate approaches for these scenarios? Thanks! M
Hi Chris,
It’s a good one. I tried the same way as you mentioned, but the data source connection is greyed out in the deployment rules here. There is no way to parameterize the connection with Direct Lake. This works with Import and DirectQuery mode, but not for Direct Lake. Any additional tips which I missed?
Thank you,
Priyan.
I’m also facing a similar issue; did you manage to find a solution?
See the section of the documentation linked below with the header “Supported data sources for dataflow gen1 and semantic model rules”:
https://learn.microsoft.com/en-us/fabric/cicd/deployment-pipelines/create-rules?tabs=new-ui#supported-data-sources-for-dataflow-gen1-and-semantic-model-rules
This seems to suggest it’s not currently supported…
This appears to only work for Direct Lake on SQL Endpoint models, not the newer Direct Lake on OneLake models. I don’t see how deployment rules (or parameter rules) can be used for Direct Lake on OneLake, so even with deployment pipelines I have to manually update the semantic model via a TMDL edit in production directly…