Power BI Copilot And Report Filters And Slicers

In my last post I talked about how to push Power BI Copilot to get answers from the semantic model rather than the report you’re looking at. If you want to do this, it’s probably because you are getting worse answers when Copilot uses the report than when it uses the semantic model. Before you try to bypass the report, though, it’s worth spending some time tuning how Copilot works with reports, and to do that you need to understand how it works. In this post I will describe one important aspect of this that I’ve recently learned about: how Copilot behaves when filters and slicers are present on a report.

Using the same semantic model I’ve used in all my recent posts on Copilot, I created a report with a single card visual showing the value of a measure called Count of Transactions, with no other visuals or filters:

Using the prompt:

What is the value of Count of Transactions?

…gives the same value shown in the card, as you would expect:

The fact that the result comes in text form and the presence of a citation (the [1] at the end of the response which, when you click it, spotlights the card visual) tells me that Copilot answered this question using data from the report. Changing the prompt to filter by a County, like so:

What is the value of Count of Transactions for the county Devon?

…now gives me a result in the form of a visual:

This indicates that the result came from the semantic model because it could not be derived from the report.

What if the County field is added to the report as a slicer like so?

The second prompt above now gives the same answer but in a different way:

This time the textual answer and the presence of a citation shows that Copilot derived the response from the report. Clicking on the citation now not only spotlights the card visual but also shows that Copilot selected the county Devon in the slicer to get that result:

Also, when you click on the citation in this response in the Service (but not in Desktop), a message is displayed at the top of the report telling the user “Copilot filters temporarily applied”:

The same thing happens if there is no slicer but basic page-level and report-level filters are present (but not visual-level filters – the docs explicitly call out that this is not supported, and it looks like there is a bug here at the moment that results in incorrect results). Here’s the report edited to remove the slicer and replace it with a page-level filter:

And here’s what the second prompt above returns for this new version of the report, with the citation clicked to spotlight the card visual so it shows the result:

What’s more, editing the report so the filter is an Advanced Filter on the first letter of the name of the County like so:

…means that prompts like this:

Show the Count of Transactions for counties whose name begins with the letter H

…can be answered from the report too. Here’s the response to the prompt above with the citation clicked, the card spotlit and the new filter applied by Copilot shown:

I’m sure I’ve seen all this happen a hundred times, but it’s only now that I’ve done these tests that I understand this behaviour – and now that I understand it, I can use it to design reports that work better with Copilot and to troubleshoot problems.

[Thanks to Carly Newsome for telling me about this]

Stopping Power BI Copilot From Answering Questions From Report Visuals

When you ask Power BI Copilot a data question, the first thing it will do is try to answer that question using information from report visuals; if it can’t find the answer on a report page it will then go on to try to build a new visual or generate a DAX query. Most of the time you’ll find that answering the question from data already displayed on a report is the method most likely to give you the correct answer, but occasionally – depending on the report, the measure and the filters applied to the visual – it can result in incorrect answers. In those situations you can use AI Instructions to influence how Power BI Copilot answers questions.

Consider the following report built on the semantic model I have used for most of my recent posts on Copilot containing real estate price data from the UK Land Registry:

There are two measures displayed in the visuals here: Count Of Transactions and Average Price Paid. Asking questions whose answers are clearly displayed on the page such as:

What is the Count Of Transactions?
What is the Average Price Paid for Flats?

…means that Copilot gets those answers from the report, as you would expect:

You can tell that the question has been answered from a report visual by the presence of Citations (underlined in red in the screenshot above) in the answers which point back to the visual used.

In this case the answers are both correct, but let’s pretend that the answer to the question about the Average Price Paid for Flats is not correct and you want Copilot to bypass the bar chart visual when generating its answer. In that case you can use an AI Instruction like this:

If a user asks a question about the Average Price Paid measure, ignore any visuals on report pages and do not use them to answer the question because they may give a misleading answer. Instead, always generate a new visual to answer the question.

After applying these AI Instructions, the question:

What is the Average Price Paid for Flats?

…is now answered with a new card visual:

In this case Copilot has now ignored the visual on the page containing the answer and instead gone to the semantic model.

While this is a useful trick to know, if you find Copilot is not giving you the results you expect when it answers questions using report content it’s much better to try to understand why that’s happening and tune your report appropriately before trying to bypass the report altogether. How you do that is a topic I will address in a future post.

[Thanks to Celia Bayliss for the information in this post]

Calling DAX UDFs From Power BI Copilot

Can you call a DAX UDF from Power BI Copilot? I was asked this question by Jake Duddy during the livestream on Power BI Copilot I did with Reid Havens last week. I already knew it was possible because one of the customers I work with had already tried it, but I hadn’t tried it myself. So I did, and it is possible, and here’s the blog post.

A few months ago I wrote a post about how you can put template DAX queries in your AI Instructions to show Copilot how to solve more complex problems that can only be solved with a custom DAX query. I took some of the code from that post and turned it into the following DAX UDF:

createOrReplace

	function ABC = ```
			(
				AUpperBoundary: SCALAR int64,
				BUpperBoundary: SCALAR int64,
				AnalysisDate: SCALAR datetime
			) => 
			VAR ApplyAnalysisDate = 
			CALCULATETABLE(
				'Transactions',
				'Date'[Date] = AnalysisDate
			)
			VAR AddGroupColumn = 
			ADDCOLUMNS(	
				ApplyAnalysisDate, 
				"Group",
				SWITCH(
					TRUE(),
					//If the price is less than or equal to AUpperBoundary
					//then return the value "A"
					Transactions[Price]<=AUpperBoundary, "A (<=£" & AUpperBoundary & ")",
					//If the price is less than or equal to BUpperBoundary
					//then return the value "B"
					Transactions[Price]<=BUpperBoundary, "B (>£" & AUpperBoundary & " and <=£" & BUpperBoundary & ")",
					//Otherwise return the value "C"
					"C (>£" & BUpperBoundary & ")"
				)
			)
			RETURN
			SUMMARIZE(
				AddGroupColumn,
				[Group],
				"Count Of Transactions", [Count Of Transactions]
			)
			```

This UDF does a basic form of ABC analysis on the semantic model I’ve used in all my recent Copilot posts containing UK Land Registry data on real estate transactions:

Note: this is not great quality code and it’s certainly not a general purpose solution for ABC analysis in DAX, but it’s a UDF. I then added the following to my AI Instructions describing what the UDF does and how to use it:

This semantic model contains a DAX user-defined function called ABC that does an ABC analysis on the data in the Transactions table. It takes three parameters defined as follows:
AUpperBoundary - an integer value which is the upper boundary of transactions in the A group
BUpperBoundary - an integer value which is the upper boundary of transactions in the B group
AnalysisDate - a datetime value which is the date to filter transactions on
The function returns a table which can be used in an EVALUATE statement in a DAX query.

For example if I wanted to see the number of transactions which took place on 1st January 2025 divided into three groups:
A - transactions between £0 up to and including £250000
B - transactions above £250000 up to and including £700000
C - transactions above £700000
I could call the function as follows:
ABC(250000, 700000, DATE(2025,1,1))
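
Before involving Copilot at all, you can sanity-check that the UDF works by calling it yourself from a DAX query (a sketch using the example boundaries from the AI Instructions above; run it in DAX query view or DAX Studio):

```dax
// Call the ABC UDF directly in an EVALUATE statement,
// using the same example boundaries given in the AI Instructions
EVALUATE
ABC(250000, 700000, DATE(2025, 1, 1))
```

If this returns the expected three groups, you know any problems you see later are in how Copilot invokes the function rather than in the function itself.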

Then I tried the following prompt:

Do an ABC analysis on the Transactions table, grouping all transactions into three groups named A, B and C based on the Price column where the upper boundary for the first group is £290000 and the upper boundary for the second group is £780000. Filter the transactions to just 14th January 2025.

Here’s the response I got:

As in that previous post, Copilot generated a DAX query to answer the question. Apart from that weird Sort Key column it’s the right answer; an additional AI Instruction telling Copilot to sort on the Group column might help here. The difference between this response and the one I got in my previous post is that this time the DAX query generated by Copilot simply calls the UDF. Here it is:

// DAX query generated by Fabric Copilot with "Perform an ABC analysis on the Transactions table, grouping all transactions into three groups named A, B, and C based on the Price column, where group A includes transactions with Price up to £290,000, group B includes transactions with Price above £290,000 and up to £780,000, and group C includes transactions with Price above £780,000. Only include transactions from 14th January 2025."
// ABC analysis for Transactions on 14-Jan-2025 with the specified price boundaries.
// A: Price <= 290,000
// B: 290,000 < Price <= 780,000
// C: Price > 780,000
// The ABC function is user-defined in the model and returns a table.
// Since the column names returned by ABC are not specified, add a dummy sort column to satisfy the ORDER BY requirement.
EVALUATE
  ADDCOLUMNS(
    ABC(290000, 780000, DATE(2025, 1, 14)),
    "Sort Key", 0
  )
ORDER BY
  [Sort Key] ASC

I guess the advantage of this over the DAX template approach is that it’s likely to be a lot more reliable and consistent: the less DAX Copilot has to generate, the more likely it is to do what you expect. On the other hand it’s a lot less flexible. For example I tried this prompt from my previous blog post which asks for an ABC analysis with four rather than three groups:

Do an ABC analysis on the Transactions table, grouping all transactions into four groups named A, B, C and D based on the Price column where the upper boundary for the first group is £300000, the upper boundary for the second group is £750000 and the upper boundary for the third group is £900000. Filter the transactions to just 16th January 2025.

I got the correct result from Copilot but the DAX query generated didn’t use the UDF because the UDF is hard coded to only return three groups; I suppose I was lucky in this case.

Based on these – admittedly fairly basic – tests I think using DAX UDFs with Power BI Copilot could be very useful when you need Copilot to generate complex measures or calculations where you know the general DAX pattern to use.

Monitoring The DAX Queries Generated When The Power BI Copilot Index Is Built

In my last post I talked about the text index that Power BI Copilot builds to help it answer data questions. You might be wondering if you can monitor the index build process; the bad news is that – at least at the time of writing – you can’t do so directly. However, you can monitor it indirectly, because the index build process runs DAX queries to get text values from the semantic model and you can see these DAX queries being run using Workspace Monitoring, Log Analytics or Profiler. While this is described quite well in the docs here, let’s see what these DAX queries actually look like.

I published the semantic model from my last post (which has four text columns: CustomerId and CustomerName from the Customers table, and CustomerId and TransactionId from the Orders table; none of these columns are hidden or excluded from indexing using the Simplify The Data Schema feature) to a workspace where Workspace Monitoring was enabled:

I then ran the following KQL query to look at the DAX queries run in the workspace I published to:

SemanticModelLogs
| where Timestamp > ago(1hr)
| where OperationName == "QueryEnd"
| project Timestamp, EventText, DurationMs, CpuTimeMs

I saw that the index build generated three DAX queries with the same pattern:

Here’s the first of these queries:

EVALUATE
SELECTCOLUMNS (
    FILTER (
        VALUES ( 'Customers'[CustomerName] ),
        LEN ( 'Customers'[CustomerName] ) <= 100
    ),
    "valueColumn", 'Customers'[CustomerName]
)

In line with what is described in the documentation, this query gets all the distinct text values from the CustomerName column on the Customers table that are 100 characters or less in length. I assume there are only three DAX queries, even though there are four text columns in the model, because one of the columns, TransactionId, has 5 million distinct values in it and Copilot cannot currently index more than 5 million text values in a single model.

[Interestingly, I saw that one of these queries failed with a memory error which I have asked to be investigated]

Each query took between 300 and 600ms and there was a small gap between queries, so you can see how a large model with a lot of text columns could generate a lot of queries that, taken together, take a noticeable amount of time to run. Finding the time between the start of the first of these queries and the end of the last one will give you a rough idea of how long it takes to build the index, although there is some extra work that needs to be done after the last query has run, and this can sometimes take some time too.
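
As a sketch of that calculation in KQL (assuming, as in my tests, that the index-build queries can be identified by the "valueColumn" alias they all project – this is an observation, not a documented contract):

```kql
SemanticModelLogs
| where Timestamp > ago(1hr)
| where OperationName == "QueryEnd"
// The index-build queries observed above all project a column aliased "valueColumn"
| where EventText contains "valueColumn"
// QueryEnd events are logged at query completion, so subtract DurationMs to get the start time
| summarize
    FirstQueryStart = min(Timestamp - DurationMs * 1ms),
    LastQueryEnd = max(Timestamp)
| extend ApproxIndexBuildTime = LastQueryEnd - FirstQueryStart
```

Remember this underestimates the true build time slightly, because of the extra work that happens after the last query finishes.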

As described here, indexing takes place for Import mode models with Q&A enabled every time the model is published, as well as every time it is refreshed, so long as either Copilot or Q&A has been used in the last 14 days. For DirectQuery or Direct Lake models indexing also takes place every time the model is published, but because it’s impossible to say whether the underlying data has changed when a refresh takes place, indexing instead happens every 24 hours so long as either Copilot or Q&A has been used in the last 14 days. As a result, if you’re an admin, you might see a lot of DAX queries similar to the one above being run.

[Thanks to Aaron Meyers for much of the information in this post]

Understanding The “Copilot Analyzed Only Part Of The Model Due To Its Size” Warning In Power BI Copilot

If you’re using Power BI Copilot you may have seen the following warning message:

Copilot analyzed only part of the model due to its size, which may affect these results

What does it mean? What causes it? What can you do to avoid it? Let’s find out.

In order to answer your questions better, Power BI Copilot creates an index of the values in all the text columns in your semantic model that it thinks you’re likely to ask questions about. This indexing process is described in great detail in the docs here and I recommend you read them before carrying on (incidentally, another common warning message you might have seen, “Copilot is currently syncing with the data model”, is displayed while that indexing is taking place). There is a limit on the number of text values that can be indexed and when Copilot hits that limit you see the “Copilot analyzed only part of the model due to its size” warning.

At the time of writing (and this may change in the future) there are two limits on index size in place which are documented here:

  1. Copilot can only index 1000 text columns – but if you have 1000 text columns in your model I would argue you’re already doing something wrong, because a model that large is going to be very difficult to use.
  2. Copilot can only index up to 5 million distinct text values across all text columns – and this is the limit you’re most likely to hit if you’re not careful.

Only text values that are 100 characters or less in length are indexed; longer text values are ignored and do not cause the warning to appear.
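
To see how many values a given text column would contribute to the index, you can count them yourself with a DAX query like the following (a sketch using the 'Customers'[CustomerName] column from the example model described below; the 100-character cutoff matches the indexing behaviour I have observed, but could change):

```dax
// Count the distinct CustomerName values that are short enough to be indexed
EVALUATE
ROW(
    "Indexable Values",
    COUNTROWS(
        FILTER(
            VALUES('Customers'[CustomerName]),
            LEN('Customers'[CustomerName]) <= 100
        )
    )
)
```

Summing this count across all the visible text columns in your model gives a rough idea of how close you are to the 5 million value limit.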

For an example of how you can run into the 5 million distinct text values limit, consider the following simple semantic model that I have designed so that the warning message appears:

Here’s what the contents of the Customers table looks like (the CustomerName column contains random text values):

Here’s what the contents of the Orders table looks like:

The Customers dimension table has 1 million rows in it; the Orders fact table has 5 million rows in it. If you ask Copilot a question about this semantic model like:

Show the Sales Value for the Customer Name WAMUQT

…it may work well but you will see the “Copilot analyzed only part of the model” message:

In other cases you might find that Copilot falls back to generating a DAX query or asks for clarification because, if it can’t find a text value you’ve used in your prompt in its index, it’s less sure about how it can answer the question:

In these cases you may find Copilot’s responses are less reliable, even if you still get correct results in many cases.

There are a couple of flaws in the design of this semantic model that either individually or combined cause that warning message to appear. Remember that it only appears when Copilot thinks it needs to index more than 5 million text values, so what is causing it to go over that limit?

First of all, I deliberately made all the key columns (CustomerId on Customers and Orders, TransactionId on Orders) text columns. In this case they can easily be changed to the Whole Number data type and the problem will be solved, because the only remaining text column, CustomerName, contains 906,925 distinct values, which is well under the limit. It’s a best practice to use integer keys in Power BI whenever you can anyway. What if you can’t change the data type of your key columns, though?

Well, the second thing to consider is whether you need the TransactionId column on the fact table at all – primary keys on a fact table are rarely useful and can be very expensive in terms of model size. In this case there are 5 million distinct values in that column, so removing it completely will have a number of other benefits apart from reducing the overall number of text values that need to be indexed.

Thirdly, though, why are these key columns even visible? The CustomerId columns are almost certainly not useful to end users and TransactionId might not be either. Setting all these columns’ Is Hidden property to true means that Copilot does not try to index them, which means that only the CustomerName column is indexed and the warning message goes away. Again, hiding key columns is a best practice regardless of whether you’re using Copilot.

Finally, if you don’t want to hide a text column, you have another option: you can use the Simplify The Data Schema page in the Prep Data For AI dialog to prevent Copilot from considering it in its responses. As I said, hidden columns are never considered anyway, but deselecting a column on this page has the same effect:

Here’s Copilot successfully answering the question above without the warning after deselecting the key columns in the Simplify The Data Schema page:

For most Power BI semantic models you should be able to use one or more of these techniques to make sure only the text columns that need to be indexed are indexed, stop the warning message from appearing, and get more reliable results from Copilot.

Power BI Copilot, AI Instructions And DAX Query Templates

At the end of my last post I showed how, if you put the definition of a measure in the AI Instructions of your Power BI semantic model, Copilot can use it as a template to create new measures with slightly different logic. That got me wondering – can you do the same thing for DAX queries to solve even more complex problems? It turns out you can.

First of all, why would you want to do this? There are some problems that can be solved relatively easily by writing a DAX query but which are quite hard to solve if you’re limited to writing measures. ABC classification is one example of this. Marco and Alberto have a great article on this problem here; the problem is that when you want to increase the number of buckets/clusters/groups you use, or change the rules for which transactions go into which buckets, and make all this dynamic for end users building reports in Power BI Desktop, your semantic model can get very complicated very quickly. In a DAX query you can solve these problems much more easily, in part because you can define a calculated column at query time and use it to do your grouping. And since Copilot can generate and run DAX queries, it can be taught how to do ABC analysis by giving it an example DAX query and some instructions telling it how the query works. Let’s see an example.

The sample semantic model I’ve been using in this series has a fact table called Transactions where each row represents a real estate property sale in England and Wales. Let’s say you want to group these rows into different groups based on the price paid for the property. With nothing in the AI Instructions for the model, the following prompt:

Do an ABC analysis on the Transactions table, grouping all transactions into three groups named A, B and C based on the Price column where the upper boundary for the first group is £250000 and the upper boundary for the second group is £700000. Filter the transactions to just 1st January 2025.

…will sometimes generate the right result but will also sometimes generate a slightly strange or even incorrect result. Here’s an example where the information in the summary is correct but the DAX query is far from ideal:

EVALUATE
  // Create a table with transactions on 1st January 2025 and assign an ABC group based on Price
  SELECTCOLUMNS(
    FILTER(
      'Transactions',
      'Transactions'[Date] = DATE(2025, 1, 1)  // Filter transactions to 1st January 2025
    ),
    "Price", 'Transactions'[Price],
    "ABC Group", 
      SWITCH(
        TRUE(),
        'Transactions'[Price] <= 250000, "A",      // Group A: Price up to £250,000
        'Transactions'[Price] <= 700000, "B",      // Group B: Price over £250,000 and up to £700,000
        "C"                                       // Group C: Price above £700,000
      )
  )
ORDER BY
  [ABC Group] ASC,
  [Price] ASC

This query returns one row per transaction rather than one row per group:

Not something you can give to an end user for sure. Here’s a DAX query that gives me exactly the result I want:

DEFINE
//Create a new calculated column at query time 
//to create the groups for ABC classification
COLUMN 'Transactions'[Group] =
//Upper boundary for price for group A
VAR AUpperBoundary = 250000
//Upper boundary for price for group B
VAR BUpperBoundary = 700000
RETURN
//Return a different letter representing a group name
//based on where the value in the Price column sits in
//the boundaries defined
SWITCH(
TRUE(),
//If the price is less than or equal to the variable AUpperBoundary
//then return the value "A"
Transactions[Price]<=AUpperBoundary, "A (<=£250,000)",
//If the price is less than or equal to the variable BUpperBoundary
//then return the value "B"
Transactions[Price]<=BUpperBoundary, "B (>£250,000 and <=£700,000)",
//Otherwise return the value "C"
"C (>£700,000)"
)
//Returns the results of the classification
EVALUATE
SUMMARIZECOLUMNS(
    'Transactions'[Group],
    //Filter by a given date
     KEEPFILTERS( TREATAS( {DATE(2025,1,1)}, 'Date'[Date] )),
    "Count Of Transactions", [Count Of Transactions]
)
ORDER BY 
    'Transactions'[Group] ASC

Here’s what this query returns in DAX Studio:

Putting this query in the semantic model’s AI Instructions with some explanatory text, like so:

The following DAX query does an ABC analysis on the Transactions table, grouping transactions into three groups called A, B and C, for the 1st January 2025.  The transactions with the lowest prices always go into group A, then subsequent letters represent higher price ranges. If the user asks for an ABC analysis use this query as a template.

DEFINE
//Create a new calculated column at query time 
//to create the groups for ABC classification
COLUMN 'Transactions'[Group] =
//Upper boundary for price for group A
VAR AUpperBoundary = 250000
//Upper boundary for price for group B
VAR BUpperBoundary = 700000
RETURN
//Return a different letter representing a group name
//based on where the value in the Price column sits in
//the boundaries defined
SWITCH(
TRUE(),
//If the price is less than or equal to the variable AUpperBoundary
//then return the value "A"
Transactions[Price]<=AUpperBoundary, "A (<=£250,000)",
//If the price is less than or equal to the variable BUpperBoundary
//then return the value "B"
Transactions[Price]<=BUpperBoundary, "B (>£250,000 and <=£700,000)",
//Otherwise return the value "C"
"C (>£700,000)"
)
//Returns the results of the classification
EVALUATE
SUMMARIZECOLUMNS(
    'Transactions'[Group],
    //Filter by a given date
     KEEPFILTERS( TREATAS( {DATE(2025,1,1)}, 'Date'[Date] )),
    "Count Of Transactions", [Count Of Transactions]
)
ORDER BY 
    'Transactions'[Group] ASC

…means that the prompt above not only returns what you would expect:

…but more importantly, variations on the prompt such as:

Do an ABC analysis on the Transactions table, grouping all transactions into four groups named A, B, C and D based on the Price column where the upper boundary for the first group is £300000, the upper boundary for the second group is £750000 and the upper boundary for the third group is £900000. Filter the transactions to just 16th January 2025.

…take the template query and modify it as you would expect:

Note that the prompt has not only changed the date to filter by but increased the number of groups from three to four, and that Copilot has modified the DAX query to handle this – which I think is very impressive. Here’s the DAX query Copilot generated:

// DAX query generated by Fabric Copilot with "Group all transactions from the Transactions table into four groups named A, B, C, and D based on the Price column, where group A includes transactions with Price up to £300,000, group B includes transactions with Price above £300,000 and up to £750,000, group C includes transactions with Price above £750,000 and up to £900,000, and group D includes transactions with Price above £900,000. Filter the transactions to just 16th January 2025. Show the count of transactions in each group."
DEFINE
// Create a new calculated column at query time 
// to classify transactions into groups A, B, C, and D based on the Price column
COLUMN 'Transactions'[Group] =
    // Define the upper boundaries for each group
    VAR AUpperBoundary = 300000
    VAR BUpperBoundary = 750000
    VAR CUpperBoundary = 900000
    RETURN
        SWITCH(
            TRUE(),
            'Transactions'[Price] <= AUpperBoundary, "A (<=£300,000)",
            'Transactions'[Price] <= BUpperBoundary, "B (>£300,000 and <=£750,000)",
            'Transactions'[Price] <= CUpperBoundary, "C (>£750,000 and <=£900,000)",
            "D (>£900,000)"
        )
        
EVALUATE
    SUMMARIZECOLUMNS(
        'Transactions'[Group],
        // Filter transactions to include only the 16th January 2025
        KEEPFILTERS( TREATAS( { DATE(2025,1,16) }, 'Date'[Date] ) ),
        "Count Of Transactions", [Count Of Transactions]
    )
ORDER BY 
    'Transactions'[Group] ASC

Tools that convert natural language to SQL, like Fabric Data Agents, all allow you to specify example SQL queries like this, so it’s hardly a surprise that the same approach works for DAX. What’s interesting is the type of problem that giving Copilot templates of DAX measures and queries allows you to solve, and I’m only just starting to think of the possibilities.

Power BI Copilot, AI Instructions And DAX Measure Definitions

Continuing my (already very long) series on what information you should be adding to the AI Instructions of your semantic model and why, in this post I’ll show you the benefits of adding the DAX definitions of your measures.

A really common question from end users viewing a Power BI report is “how is this measure calculated?”. As a result I have seen model developers use techniques like this to display either a text description of how the measure works or the actual DAX definition in a report. It is therefore no surprise that end users who are using Copilot will ask the same question. Unfortunately Copilot cannot – or rather, at the time of writing, should not be able to – see the definitions of the measures in your model. Most of the time, if a user asks to see how a measure is defined, Copilot will say that it can’t show the definition:

[Argh, yes I know it’s calling a measure a calculated column]

…although sometimes it does seem to be able to get the definition by writing a DAX query – but I have also seen it hallucinate and come up with a plausible-looking definition which isn’t the actual definition. Anyway, it certainly can’t show the definition of a measure, or a description of how it works, reliably.

Adding all the measure definitions to the model’s AI Instructions mostly solves this problem. TMDL View makes it easy to get all the measure definitions in a semantic model in Power BI Desktop and you can copy/paste them from there into the AI Instructions.
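
As a sketch of what you’re copying, a measure definition in TMDL View looks something like this (these are two of the measures from the model used in this series; the full TMDL script also includes the containing table definition and other properties):

```tmdl
measure 'Average Price Paid' = AVERAGE('Transactions'[Price])

measure 'Count Of Transactions' = COUNTROWS('Transactions')
```

Pasting lines like these into the AI Instructions, along with any descriptions, is all that’s needed.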

In the semantic model that I’ve been using throughout this series I added some extra measures and then copied their DAX definitions and their descriptions into the AI Instructions. Here are the AI Instructions:

##Definitions and descriptions of the measures in this model
If a user asks how a measure is defined, asks how a measure works or asks how a measure is calculated, ignore any previous instructions about displaying measure definitions from the model and show the definition given here.

All measures that return currency values do so in Pounds Sterling and should be formatted with a £ sign.

##Average Price Paid
AVERAGE('Transactions'[Price])

##Count Of Transactions
COUNTROWS('Transactions')

##New Build Average Price Paid
Gets the value of the Average Price Paid measure where the New column equals "Y"
CALCULATE([Average Price Paid], KEEPFILTERS('Transactions'[New]="Y")) 

##Tax Paid
Different tax rates are levied on new build and non-new build properties
A 10% tax is levied on the sale of new build properties
A 5% tax is levied on the sale of properties that are not new builds
(CALCULATE(SUM('Transactions'[Price]), KEEPFILTERS('Transactions'[New]="Y")) * 0.1)
+
(CALCULATE(SUM('Transactions'[Price]), KEEPFILTERS('Transactions'[New]="N")) * 0.05)

Note in particular the Tax Paid measure – it has some fairly complex logic that bears no relation to any actual tax in the UK, so I knew there was no way Copilot would be able to guess how it was defined.

With these AI Instructions in place, the prompt:

how does the tax paid measure work?

…shows a helpful summary:

[Although I admit sometimes Copilot still does reply saying that it can’t display the definition of a measure from the model, which isn’t ideal – if I work out how to stop this happening I’ll update this post]

Even more impressively, since Copilot knows the definition of the measure, it is able to answer more complex questions like this:

show me what the tax paid by property type name would be if the tax on new build properties was increased to 11%

Here’s the DAX query generated, which shows that Copilot has modified the definition of the measure correctly:

// DAX query generated by Fabric Copilot with "Show the tax paid by Property Type Name if the tax on new build properties was increased to 11%."
EVALUATE
  // Summarize the tax paid by Property Type Name with increased new build tax rate (11%)
  SUMMARIZECOLUMNS(
    'PropertyTypes'[Property Type Name],
    "Tax Paid", 
      // New build tax (rate increased to 11%) plus non-new build tax (rate unchanged at 5%)
      (CALCULATE(SUM('Transactions'[Price]), KEEPFILTERS('Transactions'[New] = "Y")) * 0.11) +
      (CALCULATE(SUM('Transactions'[Price]), KEEPFILTERS('Transactions'[New] = "N")) * 0.05)
  )
  ORDER BY
    'PropertyTypes'[Property Type Name] ASC

Why Do Power BI Copilot AI Instructions Seem To Work Better in Desktop Than In The Service?

I’m spending a lot of time with organisations testing Power BI Copilot at the moment, and something I hear a lot is that Copilot works well in Desktop but when you publish your model to the Power BI Service the results are a lot more inconsistent. One reason is how updates to AI Instructions are applied after you publish your semantic model.

Let’s see an example of this. Consider the following semantic model consisting of a single table with two measures, Sales Amount and Profit Amount:

The semantic model has the following AI Instructions applied:

//Instructions V1
xyz means the measure Sales Amount

The instructions here don’t make much sense, but using a meaningless term like “xyz” makes it easier to test whether Copilot is using an instruction or not.

In Power BI Desktop, the following Copilot prompt returns the results you’d expect with xyz understood as Sales Amount:

show xyz

If you publish this model to an empty workspace in the Power BI Service then this prompt returns the same correct result.

[By the way, the message “Copilot is currently syncing with the data model. Results may be inconsistent until the sync is finished” will be the subject of a future blog post – it’s not connected to what I’m describing in this post, it relates to how Copilot needs to index the text values in your semantic model, which is a separate process]

So far so good. Going back to Power BI Desktop, changing the AI Instructions like so:

//Instructions V2
xyz means the measure Sales Amount
kqb means the measure Profit Amount

…then closing and reopening the Copilot pane in Desktop and entering the prompt:

show kqb

…also returns the result you would expect, with kqb understood as Profit Amount.

However, if you publish the same model to the same workspace as before – so you are overwriting the previous version of the model in the Service – and then use the same prompt immediately after publishing:

…Copilot returns an incorrect result: it does not understand what “kqb” means. Why?

After you publish changes to a Power BI semantic model it can take a few minutes, sometimes up to an hour, for updates to the AI Instructions to be applied. This means if you’re testing Power BI Copilot in the Service you may need to be patient if you want to see the impact of any changes to AI Instructions, or do your testing in Power BI Desktop.

How can you know whether the latest version of your AI Instructions is being used in the Service when you do your testing? In the Copilot pane in both Desktop and the Service there is an option to download diagnostics from the “…” menu in the top right-hand corner. This downloads a text file with diagnostic data in JSON format which contains a lot of useful information; most importantly, it contains the AI Instructions used for the current Copilot session. The file contents aren’t documented anywhere, I guess because the structure could change at any time and it’s primarily intended for use by support, but there’s no reason why you as a developer shouldn’t look at it and use it.

For the second example in the Service above, where Copilot returned the wrong result, here’s what I found at the end of the diagnostics file:

As you can see, the changes I made to the AI Instructions before publishing the second time had not been applied when I ran the prompt asking about kqb.

After waiting a while, and without making any other changes to the model, the same prompt eventually returned the correct results in the Service:

Looking at the diagnostics file for this Copilot session it shows that the new version of the AI Instructions was now being used:

Since looking in the diagnostics file is the only way (at least that I know of right now) to tell what AI Instructions are being used at any given time, it makes sense to do what I’ve done here and put a version number at the top of the instructions so you can tell easily whether your most recent changes are in effect.

One last point to mention is that if you’re deploying semantic models using Deployment Pipelines or Git, the docs state that you need to refresh your model after a deployment for changes to AI Instructions to take effect and that for DirectQuery or Direct Lake (but not Import) mode models this only works once per day.

Power BI Copilot, AI Instructions And Visualisation Guidelines

If there are specific ways you want your data to be visualised by Power BI Copilot then you have two options. You can use Verified Answers to link questions to visuals on your reports and can even set them up so users can apply filters to these visuals in their prompts. Alternatively – and this is the topic of this post – you can use AI Instructions to tell Copilot which visuals should be used when certain combinations of measures and columns appear together in a prompt.

Using the same semantic model I have used in this series of posts, consider the following prompts and the responses returned when there are no AI Instructions:

show count of transactions by date for the first week of January 2025

This returns results as a line chart:

show count of transactions broken down by New

[There is a column in the model called New containing Y or N values]

This returns results as a bar chart:

Adding the following AI Instructions changes the visualisations used by Copilot for these prompts:

When showing the count of transactions measure by date always use a column chart.
When showing the count of transactions measure by New use a pie chart.

Here are the responses returned now:

show count of transactions by date for the first week of January 2025
show count of transactions broken down by New

Before someone comments, I’m not saying that these visualisation choices are better than the defaults, I’m just saying that you can use AI Instructions to change the visuals used in responses. There may be a type of analysis for your data or your industry that should always use a certain Power BI visual, one that isn’t the visual that Copilot chooses by default. And we all know that some users have strong opinions on which visuals should be used that don’t follow data visualisation best practices…

Power BI Copilot, AI Instructions And Semantic Model Relationships

Power BI Copilot knows about the relationships between measures and tables, which means that it’s very good at knowing which measures can be broken down by which columns. For example, consider the following model with two fact tables, one of which contains sales data dimensioned by Employee and Customer, and one which contains target data that is only dimensioned by Employee:

For extra fun, I’ve created a disconnected measure table to hold all the measures, which are defined as follows:

Sales Amount = SUM(Sales[Sales])
Target Amount = SUM(Targets[Target])
Target % = DIVIDE([Sales Amount],[Target Amount])

Without any AI Instructions added to the model, for the following prompt:

Show Sales Amount broken down by Customer

I get (as you would expect) the following correct response:

However for the prompt:

Show Target Amount broken down by Customer

…I get a response saying that Target Amount can’t be shown broken down by Customer because there’s no direct relationship. This is, I think, the right response for most users.

Note that Copilot knows the relationships between the Sales Amount and Target Amount measures and the Customer dimension table even though they are on the disconnected measures table, which I think is very smart.

Even better, a prompt like this:

show a table with Employee Name, Customer Name, Sales Amount, Target Amount and Target %

…returns the result I would expect:

Copilot has understood that even though there’s no relationship between Customer and Target Amount, it still makes sense to include the measure in this case.
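For reference, a DAX query of the kind Copilot generates for this prompt might look something like this (a sketch only; I’ve assumed the dimension tables are called ‘Employee’ and ‘Customer’ with column names matching the prompt):

```dax
EVALUATE
SUMMARIZECOLUMNS (
    'Employee'[Employee Name],
    'Customer'[Customer Name],
    "Sales Amount", [Sales Amount],
    // Target Amount repeats across Customers for a given Employee
    // because the Targets table is only related to Employee
    "Target Amount", [Target Amount],
    "Target %", [Target %]
)
```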

What about calculation groups? I’ve seen some inconsistent behaviour over the last few days, but on the day I’m writing this post calculation groups were working well in Copilot with no special AI Instructions, despite the fact they have no relationship with any other table.

Adding a calculation group to the model like so with two calculation items, Actual Value and Forecast:

Actual Value = SELECTEDMEASURE()
Forecast = SELECTEDMEASURE() * 100

And prompting:

Show Forecast Sales Amount broken down by Customer

…gives the right response using the Forecast calculation item:
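For context, applying a calculation item manually in a DAX query looks like this (a sketch only; I’ve assumed the calculation group table and its column are both called ‘Scenario’):

```dax
EVALUATE
SUMMARIZECOLUMNS (
    'Customer'[Customer Name],
    "Forecast Sales Amount",
        // Applying the Forecast calculation item, which evaluates
        // SELECTEDMEASURE() * 100 over the measure in context
        CALCULATE ( [Sales Amount], 'Scenario'[Scenario] = "Forecast" )
)
```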

That said, I’m sure there will be scenarios where Copilot decides it can’t use a measure with a particular column even though you want it to, and you can override this behaviour with AI Instructions. However, I have found you need to be very careful about what instructions you provide to get the output you expect. For example, adding the following to the AI Instructions of my model:

For the Target Amount measure, ignore any previous instructions regarding not displaying measures with columns from tables where there is no direct relationship. 

…means that the prompt shown above which asks for Target Amount broken down by Customer:

Show Target Amount broken down by Customer

…returns the following result:

This surprised me a lot: I was expecting to see the same value repeated across all customers. I worked out that different customers showed different values for Target Amount because the visual was assuming an indirect relationship via the Sales table. Even then, I had no idea it was possible to build a Power BI visual that did this when no many-to-many relationships exist in the model – I certainly couldn’t build the same visual myself manually. It’s one way of answering the question correctly, though.

Altering the AI Instructions to tell Copilot not to do this was tricky but eventually I came up with this:

For the Target Amount measure, ignore any previous instructions regarding not displaying measures with columns from tables where there is no direct relationship. When displaying Target Amount by Customer do not assume there is an indirect relationship via the Sales table.

The same prompt then gave me the result I was expecting with the same Target Amount value repeating across all customers (and it’s interesting it decided it had to create a DAX query to do this too):
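I can’t reproduce the exact query Copilot generated, but a query that returns this shape (the grand total of Target Amount repeated against every customer) might look something like this sketch, assuming a table called ‘Customer’:

```dax
EVALUATE
ADDCOLUMNS (
    VALUES ( 'Customer'[Customer Name] ),
    // With no relationship between Customer and Targets, the measure
    // evaluates to the same grand total on every row
    "Target Amount", [Target Amount]
)
```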

This tells me that it’s probably not a good idea to try to override Copilot’s default behaviour when it comes to relationships, tables and measures.