The CredentialQuery Option For Web.Contents() In Power BI Custom Data Connectors

There’s a new, as yet undocumented, option for the Web.Contents() M function that is only available for custom data connectors and not in Power BI Desktop: CredentialQuery. It does the same thing as the Query option, but it stops the values you pass in the query from being logged in Power BI’s diagnostic logs, so it’s useful if you are passing sensitive information such as passwords.

For example, consider the following call to Web.Contents() used in a custom data connector:

Web.Contents(
    "https://data.gov.uk/api",
    [
        RelativePath = "3/action/package_search",
        Query = [q = "cows"]
    ]
)

When the custom data connector is used in Power BI Desktop and diagnostic logging is turned on in the Options dialog, you’ll see the entire URL called, including the search term “cows”, in the log files.

However, if you change the M code to:

Web.Contents(
    "https://data.gov.uk/api",
    [
        RelativePath = "3/action/package_search",
        CredentialQuery = [q = "cows"]
    ]
)

…so that the CredentialQuery option is used instead of the Query option, then the search term will not appear in the log file.
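
As a variation on the example above, here’s a hypothetical sketch of the kind of scenario the option is aimed at: passing an API key that should never end up in a trace log. Remember this only works inside a custom data connector, and the URL, parameter name and key value below are made up purely for illustration – they aren’t from the example above.

Web.Contents(
    "https://example.com/api",
    [
        RelativePath = "v1/search",
        // the value passed via CredentialQuery is still sent as a query string
        // parameter, but should not be written to Power BI's diagnostic logs
        CredentialQuery = [apikey = "my-secret-key"]
    ]
)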

Thanks to Curt Hagenlocher for providing this information.

Calling The Power BI REST API From Microsoft Flow, Part 3: Finding All Data Sources Used In A Power BI Tenant

[Part 1 of this series can be found here and part 2 here]

The problem with self-service BI is that you never quite know what your users are up to. For example, what data sources are they using? Are there hundreds of Excel files being used as data sources for reports that you don’t know about? If so, where are they? Could they and should they be replaced by a database or something else more robust? In this post I’ll show you how you can use Microsoft Flow and the Power BI REST API (see part 1 to find out how to create a Flow custom connector to call the Power BI API) to get the details of all the data sources used in all of the workspaces of your Power BI tenant.

I’ll admit that doing this turned out to be a bit trickier than I had expected. My plan was to use the GetDatasetsAsAdmin endpoint to get a list of all datasets, loop over each one and then call the (undocumented, but present in the REST API’s Swagger file and therefore in my custom connector) GetDatasourcesAsAdmin endpoint to get the datasources in each dataset. Both these endpoints require administrative permissions to call, so I made sure my custom connector had the correct permissions (at least Tenant.Read.All – you can check this in the Azure Portal on the app you registered in Azure Active Directory) and I ran the Flow as a user with Power BI Admin permissions. But I kept getting 404 errors when requesting the data sources for certain datasets. Eventually I realised that GetDatasetsAsAdmin returns a list of all of the datasets that have ever been created in a tenant, including ones that have been deleted, with no indication as to which ones are deleted, and the errors were the result of asking for the datasources of deleted datasets. Hmm. Once I realised I could ignore these errors the Flow was fairly easy to build.
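
For reference, here’s a rough sketch in M of the two REST calls involved, outside of Flow. The access token binding is just a placeholder and the names here are mine, not anything taken from the Flow itself.

let
    // placeholder – an Azure AD access token for the Power BI API would have
    // to be obtained some other way
    AccessToken = "<access token goes here>",
    BaseUrl = "https://api.powerbi.com/v1.0/myorg",
    Headers = [#"Authorization" = "Bearer " & AccessToken],
    // GetDatasetsAsAdmin: every dataset ever created in the tenant, deleted or not
    AllDatasets = Json.Document(
        Web.Contents(BaseUrl, [RelativePath = "admin/datasets", Headers = Headers])
    )[value],
    // GetDatasourcesAsAdmin for a single dataset; this is the call that returns
    // a 404 error when the dataset in question has been deleted
    DatasourcesForDataset = (datasetId as text) =>
        Json.Document(
            Web.Contents(
                BaseUrl,
                [
                    RelativePath = "admin/datasets/" & datasetId & "/datasources",
                    Headers = Headers
                ]
            )
        )[value]
in
    [AllDatasets = AllDatasets, DatasourcesForDataset = DatasourcesForDataset]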

Here’s how the Flow is built, step by step:

  • I use a button to trigger the flow here, but if you want to create an automated report you should probably run this Flow on a schedule.
  • Initialize variable initialises an array called datasources to hold the details of each datasource.
  • Return a list of datasets for the organisation calls the GetDatasetsAsAdmin endpoint. As I said, if there were any indication as to whether a dataset had been deleted (the results returned by the GetGroupsAsAdmin endpoint, by contrast, do tell you whether a workspace has been deleted or not – see the sketch after this list) then it would be possible to use $filter to declare an OData filter to remove the deleted ones. But there isn’t.
  • Parse list of datasets parses the JSON containing the list of datasets returned by the previous action. I generated the schema from a sample payload I got from calling the endpoint in the “test” section of the edit screen for the Flow custom connector, then edited it to make sure only the “id” and “name” properties are listed as required.
  • Apply to each dataset loops over each dataset, and I’ll describe that in more detail below.
  • Create CSV table takes the contents of the array variable defined above and turns it into a CSV file.
    One very important point to note is that I had to click the “Configure run after” option and state that this action should run even after Apply to each dataset had failed, which it inevitably would because of the 404 errors mentioned above. Just because one iteration fails doesn’t mean the whole Flow should be stopped.
  • Create file simply saves the resulting CSV file into a folder in OneDrive for Business.
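
As an aside, here’s a hypothetical M sketch of the kind of OData filter mentioned in the list above, using the GetGroupsAsAdmin endpoint (which does expose a state property indicating whether a workspace has been deleted) as the model; the access token is a placeholder and nothing equivalent is possible for datasets.

let
    AccessToken = "<access token goes here>",  // placeholder for illustration
    Source = Web.Contents(
        "https://api.powerbi.com/v1.0/myorg",
        [
            RelativePath = "admin/groups",
            // $top is required by this endpoint; the filter keeps only
            // workspaces that have not been deleted
            Query = [#"$top" = "5000", #"$filter" = "state eq 'Active'"],
            Headers = [#"Authorization" = "Bearer " & AccessToken]
        ]
    )
in
    Json.Document(Source)[value]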

 

Going back to the Apply to each dataset action, here’s what it does:


  • It loops over every dataset returned by the call to GetDatasetsAsAdmin.
  • Return a list of datasources for the specified dataset calls GetDatasourcesAsAdmin to get the datasources in each dataset; as I said, this fails when the dataset has been deleted.
  • Parse datasources parses the JSON returned by the previous action. Once again I generated the schema using a sample payload, and in this case I edited it manually to ensure that only “datasourceType”, “connectionDetails” and “datasourceId” were required.
  • Apply to each datasource then loops over each datasource (one dataset can have many datasources) and Append to array variable appends some information about each datasource to the datasources array.
    The connectionDetails property for a datasource returns details such as connection strings and file paths, but since I also wanted the type of the data source and the name and id of the dataset, I used the following expression to build the item that gets appended to the datasources array, containing all this information:
    addProperty(
        addProperty(
            addProperty(
                items('Apply_to_each_datasource')?['connectionDetails'],
                'datasetname',
                items('Apply_to_each_dataset')?['name']
            ),
            'datasetid',
            items('Apply_to_each_dataset')?['id']
        ),
        'datasourceType',
        items('Apply_to_each_datasource')?['datasourceType']
    )

It turned out that my Power BI tenant had almost 190 datasets, both deleted and not deleted, and the Flow took eleven minutes to run.

The output of this Flow is a rather messy CSV file when you open it in Excel Online.

Once you load the data into Power BI, though, you can build some interesting reports, such as one showing the number of files used as datasources in my tenant, broken down by file type.

Finding All Selected Items In An Excel Slicer Connected To SSAS, Power BI Or the Excel Data Model Using Dynamic Arrays

The big news in the world of Excel right now is the introduction of dynamic arrays. They’re only available in the Office 365 click-to-run version of Excel and, at the time of writing, only to people on the Office Insiders programme, but eventually they’ll be available to anyone running Excel for Office 365 on their desktop. There are already lots of blog posts about them, including this overview by Jon Acampora, and you’ll probably also want to download Bill Jelen’s detailed mini-book on them here, which is free for the rest of 2018. Now I’m not an Excel expert by any stretch of the imagination, but I’m as excited as anyone else about them because they will be incredibly useful for anyone building reports using Excel cube functions against Analysis Services, the Excel Data Model/Power Pivot and Power BI. Bill Jelen’s book has a short section on this subject but the possibilities are limitless…

Here’s one example of how they can be used. A while ago I blogged about how to use a regular array formula and the TextJoin() Excel function to get all the selected items from a slicer. Dynamic arrays make this problem much easier to solve. Imagine you have a table loaded into the Excel Data Model.

Now, say you have a PivotTable built from this table and a slicer (called Slicer_Fruit) connected to it.

It’s possible to use the CubeSet() function to get the set of selected items in a slicer using the following formula:

=CUBESET("ThisWorkbookDataModel",Slicer_Fruit,"Slicer Set")

Assuming this formula is in cell H1, you can then get the number of items in this set using CubeSetCount():

=CUBESETCOUNT($H$1)

Assuming this is in cell H2, all you need to do to get a comma-delimited list of all the selected items in the slicer via this set is:

=
 TEXTJOIN(
  ", ", 
  TRUE, 
  CUBERANKEDMEMBER(
   "ThisWorkbookDataModel",
   $H$1, 
   SEQUENCE($H$2,1)
  )
 )



It works as follows:

  • The new Sequence() function is used to create a dynamic array of numbers from one to the number returned by the CubeSetCount() function.
  • The CubeRankedMember() function gets the name of each of the items in the set, using the numbers returned by Sequence().
  • TextJoin() then concatenates all of the names returned by CubeRankedMember() into a single comma-delimited list.

You can download the sample workbook here (remember it will only work if you have a version of Excel installed that includes dynamic arrays!).

Calling The Power BI REST API From Microsoft Flow, Part 2: Refreshing A Dataset When a Data Source Changes

For a while now I’ve had an idea stuck in my head: wouldn’t it be cool to build a Power BI solution where a user could enter data into an Excel workbook and then, as soon as they had done so, they could see their new data in a Power BI report? It would be really useful for planning/budgeting applications and what-if analysis. I had hoped that a DirectQuery model using the CData Excel custom connector (mentioned here) might work but the performance wasn’t good enough; using Flow with the Power BI REST API (see Part 1 of this series for details on how to get this set up) gets me closer to my goal, even if there’s still one major problem with the approach. Here’s how…

First of all, I have two Excel files saved in a OneDrive For Business folder.

The contents of each Excel file are more or less the same: a table containing budget data for four quarters.

These are the files that my hypothetical users will be editing. I also have a very basic Power BI dataset that imports the data from these two files, and a report to display the data.

Now for the Flow. For the trigger I use the OneDrive For Business “When a file is modified” trigger, which fires when a file in the above folder is changed.

I had to turn on Concurrency Control for the trigger and set the Degree of Parallelism property to 1, to make sure only one instance of the Flow ran at any one time – there’s no point trying to refresh the dataset if another instance of the Flow is already running.


Next I have an action from the Power BI custom connector created in my last post. It calls the Power BI API’s Get Refresh History In Group endpoint and gets details of the last refresh of the dataset.

This returns a JSON value containing the details of the last refresh; the JSON goes to a Parse JSON action so that later on I can get the Status of the last refresh. This action requires a schema, which I generated from a sample payload given in the Power BI REST API documentation. I had to delete endTime from the list of required properties in the schema because it won’t be returned if the most recent refresh is still in progress.
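
Purely for reference, here’s a rough, hypothetical M sketch of the call that the Get Refresh History In Group action wraps, with $top=1 so only the most recent refresh is returned; the GroupId, DatasetId and AccessToken values are placeholders I’ve invented for illustration.

let
    AccessToken = "<access token goes here>",  // placeholders for illustration
    GroupId = "<workspace id>",
    DatasetId = "<dataset id>",
    History = Json.Document(
        Web.Contents(
            "https://api.powerbi.com/v1.0/myorg",
            [
                RelativePath = "groups/" & GroupId & "/datasets/" & DatasetId & "/refreshes",
                Query = [#"$top" = "1"],
                Headers = [#"Authorization" = "Bearer " & AccessToken]
            ]
        )
    )[value],
    // the status property is the value that the Flow's Condition checks later
    LastRefreshStatus = History{0}[status]
in
    LastRefreshStatus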

Next there’s an Apply to each to loop over each refresh returned by the API; since only the most recent refresh is returned I know this will only ever execute once.


Inside the Apply to each I have a Condition that looks at the Status property of the most recent refresh and checks whether it is Completed.

If the Status is indeed Completed, the Flow calls the Refresh Dataset In Group endpoint to refresh the dataset.
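
Again just for reference, here’s a hypothetical M sketch of the underlying Refresh Dataset In Group call – a POST with an empty body – using the same made-up placeholder names as in the sketch above.

let
    AccessToken = "<access token goes here>",  // placeholders for illustration
    GroupId = "<workspace id>",
    DatasetId = "<dataset id>",
    // specifying Content makes Web.Contents issue a POST request, which is
    // what triggers the refresh; an empty JSON body is enough
    Refresh = Web.Contents(
        "https://api.powerbi.com/v1.0/myorg",
        [
            RelativePath = "groups/" & GroupId & "/datasets/" & DatasetId & "/refreshes",
            Headers = [
                #"Authorization" = "Bearer " & AccessToken,
                #"Content-Type" = "application/json"
            ],
            Content = Text.ToBinary("{}")
        ]
    )
in
    Refresh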

If the Status of the last refresh is not Completed (either because it failed for some reason, or because there is already a refresh in progress), there is a delay of one minute and the Flow then starts itself again.

As I’ve said before, I’m not an expert on Flow but this seems to work reliably with the limited testing I’ve done on it.

So what is the problem with this approach? It’s that if you are using Power BI Pro you can only refresh a dataset eight times a day, and if you are using Power BI Premium you can only refresh a dataset forty-eight times a day. There have been rumours that this forty-eight-times-a-day refresh limit for Premium has been lifted, or might be lifted soon, or doesn’t apply when you refresh via the API, but I specifically tested this and got the following error message: “Invalid dataset refresh request. Number of refresh requests in last 24 hours exceeded limit” (interestingly this didn’t appear as a failed refresh in the refresh history of the dataset). Since it’s reasonable to assume that an Excel file like this will be updated more than forty-eight times a day in a busy period, I don’t think this approach is usable in the real world; all the same, I’m happy with it as a proof-of-concept showing what might be possible if the refresh limit is lifted in the future.

Calling The Power BI REST API From Microsoft Flow, Part 1: Creating A Flow Custom Connector

Playing around with Microsoft Flow recently, I was reminded of the following blog post from a few months ago by Konstantinos Ioannou about using Flow to call the Power BI REST API to refresh a dataset:

https://medium.com/@Konstantinos_Ioannou/refresh-powerbi-dataset-with-microsoft-flow-73836c727c33

I was impressed by this post when I read it, but I don’t think I understood quite how many exciting possibilities this technique opens up for Power BI users until I started to use it myself. The Power BI dev team are making a big investment in the API, yet most Power BI users, myself included, are not developers and can’t easily write code (or PowerShell scripts) to call the API. With Flow, however, you can use the API without writing any code at all and solve a whole series of common problems easily. In this series of blog posts I’m going to show a few examples of this.

Before that, though, there is something to add to what Konstantinos says about creating the custom connector. In his post he describes how you can add individual actions to the custom connector, but with Flow you can also create a custom connector from an OpenAPI definition, and it turns out that there is an OpenAPI definition file for the Power BI REST API available here. As a result you can create a Flow custom connector with (at the time of writing) a whopping 116 actions by following the instructions in Konstantinos’s post, but using the “Import an OpenAPI file” option instead of the “Create from blank” option and uploading the Power BI REST API OpenAPI definition file. You’ll find that the text for the summaries and descriptions of many of the actions needs cleaning up – in particular, Flow doesn’t allow actions with names that end with a dot, so you’ll have to go through each one and remove that character – but doing this is much faster and easier than adding the actions manually.


And one last thing to point out: as Konstantinos mentions in his post, if you get a 404 error while testing the connector, just wait a few minutes!

Testing Power BI Premium Features With Power BI Embedded

It is very difficult for an organisation to decide whether to buy Power BI Premium or to stick with Power BI Pro. Power BI Premium represents a big financial commitment, but how do you know whether the benefits will be worth the costs involved? This was a question that Melissa Coates and I dealt with in great detail in the second version of our white paper “Planning A Power BI Enterprise Deployment”, and I strongly suggest you read the relevant section (pages 37-42) if you are considering buying Premium.

However, it’s one thing to read a white paper and another to actually test Premium yourself – and right now there is no way of trying Premium before you buy it. The more features like SSRS paginated reports or linked entities in dataflows get added to Premium, the more likely it is that you are going to want to test them before buying. So what do you do? The answer is to use Power BI Embedded instead. It’s not exactly the same thing as Premium (the setup experience is different, for one thing), but there are three significant facts to point out:

  • I have been assured by several senior Microsoft employees that all new Premium features will be available in both the Premium (EM and P) SKUs and the Embedded (A) SKUs. So, for example, when the SSRS paginated report feature is released it will be available in both Premium and Embedded.
  • The resources available (in terms of the number of v-cores and memory) in the various Premium SKUs mirror those available in the Embedded SKUs. For example a P1 Premium SKU has the same resources available as an A4 Embedded SKU.
  • With Power BI Embedded, unlike Power BI Premium, you only need to pay for what you use: you can pause a Power BI Embedded capacity when you are not using it and pay nothing.

Therefore, to sum up, if you want to test Premium features before you buy, all you need to do is create a new Power BI Embedded capacity in the Azure portal and assign a Workspace to it – and you’ll get access to all the Premium features. When you’ve finished just pause the capacity. You’ll still need to pay while you’re testing but it will be a fraction of the cost of buying Premium.

Can You Use Power Query In Power BI Or Excel To Write Data To A Data Source?

The short answer: no.

The long answer:

It is, technically, possible to use Power Query (in either Power BI or Excel) to write data to a data source; for example, here’s an old post I wrote showing how to run an UPDATE statement to update data in a table in SQL Server. However, because the Power Query engine may evaluate a value or statement more than once when a query is executed, it is not safe or supported to use it to create, insert, update, delete or otherwise change data in a data source – it should only be used to read data. You may see some functions listed in the M function reference that suggest otherwise, but they aren’t supported either, and indeed may not work at all.
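
To make the point concrete, here’s a minimal sketch of the kind of thing that is technically possible but explicitly unsupported – the server, database, table and column names are all made up, and because the engine may evaluate the step more than once, the UPDATE could run more than once too.

let
    // UNSUPPORTED: shown only to illustrate why writing back from M is a bad idea
    Source = Sql.Database("myserver", "mydb"),
    RunUpdate = Value.NativeQuery(
        Source,
        "UPDATE dbo.Budget SET Amount = 100 WHERE Quarter = 'Q1'"
    )
in
    RunUpdate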

Thanks to Matt Masson of the dev team for providing the official position on this.
