Delaying Power BI Dataset Refresh Until The Source Data Is Ready

This week a customer came to me with the following problem: they had scheduled the refresh of their dataset but their source data wasn’t always ready in time, so the old data was being loaded by mistake. The best solution here is to use some kind of external service (for example Power Automate) to poll the data source regularly to see if it’s ready, and then to refresh the dataset via the Power BI REST API when it is. However, it got me thinking about a different way of tackling this: is it possible to write some M code that will do the same thing? It turns out that it is, but it’s quite complicated – so I don’t recommend you use the code below in the real world. Nevertheless I wanted to write up the solution I came up with because it’s interesting and you never know, it might be useful one day.

Here’s the scenario I implemented. Using a SQL Server data source, let’s say that you want to load the result of the following SQL query against the AdventureWorksDW2017 database into Power BI:

SELECT DISTINCT [EnglishDayNameOfWeek] FROM DimDate

However, let’s also say that the data in this table may not be ready at the time when dataset refresh is scheduled; when the data is ready, the sole value in the sole column of another table, called ContinueFlagTable, will be changed from False to True:

Therefore when dataset refresh starts we need to check the value from the ContinueFlagTable regularly; if it is False we need to wait for a given amount of time and then check again; if it is True the data can be loaded; if the flag hasn’t changed to True after checking a certain number of times then we need to raise an error.

Here’s the full M code of the Power Query query:

let
  //The SQL Server database
  Source = Sql.Database(
      "MyServerName", 
      "AdventureWorksDW2017"
    ),
  //The query to load into the dataset
  ResultQuery
    = Value.NativeQuery(
        Source, 
        "SELECT DISTINCT [EnglishDayNameOfWeek] FROM DimDate"
      ),
  //The number of times to check the Continue flag
  MaxIterations = 3,
  //The number of seconds to wait before checking the Continue flag
  DelaySeconds = 5,
  //Check the Continue flag
  Iterate = List.Generate(
      () => 1, 
      each (
      //Query the ContinueFlag table
      //after waiting the specified number of seconds
      //and keep trying until either it returns true
      //or we reach the max number of times to check
      (Function.InvokeAfter(
          () => 
            let
              dbo_ContinueFlag
                = Value.NativeQuery(
                    Source, 
                    "SELECT [ContinueFlagColumn] from ContinueFlagTable where -1<>@param", 
                    [
                      param
                        = _
                    ]
                  ),
              Continue
                = dbo_ContinueFlag{0}[ContinueFlagColumn]
            in
              Continue, 
          #duration(
              0, 
              0, 
              0, 
              DelaySeconds
            )
        )
        = false
      )
        and (_ < MaxIterations)
      ), 
      each _ + 1
    ),
  //Find how many times the ContinueFlag was checked
  NumberOfIterations
    = List.Max(Iterate),
  //Did we reach the max number of checks?
  ReturnError
    = NumberOfIterations
      = MaxIterations
      - 1,
  //Table type of the table to load into the dataset
  ReturnTableType
    = type table [
      EnglishDayNameOfWeek = Text.Type
    ],
  //Error message to return if the
  //max number of checks is reached
  ErrorMessage
    = error "After waiting "
      &amp; Text.From(
        MaxIterations
        * DelaySeconds
      )
      &amp; " seconds your data is still not ready to be loaded",
  //Handle Power BI Desktop's behaviour of asking for
  //the top 0 rows from the table
  //before loading the actual data
  OverrideZeroRowFilter = 
    if ReturnError then 
      ErrorMessage
    else 
      Table.View(
          null, 
          [
            GetType = () => 
              ReturnTableType, 
            GetRows = () => 
              ResultQuery, 
            OnTake
              = (
                count as number
              ) => 
                if count = 0 then 
                  #table(
                      ReturnTableType, 
                      {}
                    )
                else 
                  Table.FirstN(
                      ResultQuery, 
                      count
                    )
          ]
        )
in
  OverrideZeroRowFilter

There’s a lot to explain here:

  • Everything is in a single query, and this is deliberate: I found it was the only way I could make sure that the query to load the data (ie the query in the ResultQuery step) is run only once, after the check on the ContinueFlagTable has returned true. If I split the code to get the data off into a separate query, I found that it got queried before any checks on ContinueFlagTable; this seemed to be something to do with the formula firewall, but I’m not sure.
  • I’ve used List.Generate to do the polling of ContinueFlagTable. There’s a good example of how to use it to implement Do Loops here, and there’s a self-contained sketch of the polling pattern used in this query after this list.
  • I’ve used Function.InvokeAfter to add the delays in between checks on ContinueFlagTable.
  • I found that if I ran the same query to check ContinueFlagTable, after the first execution the result was cached and the query couldn’t tell if the flag changed after that. Therefore I wrote a SQL query with a WHERE clause that was different each time but which always returned true; I did this by passing the iteration number into the query via a parameter, and I passed the parameter in using the Value.NativeQuery function.
  • It’s possible to raise an error when needed using the error keyword, described here.
  • The OverrideZeroRowFilter step at the end uses the technique I blogged about here to deal with the fact that Power BI Desktop asks for the top 0 rows from a table to get the schema before it loads the data.
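
To make this pattern easier to see in isolation, here’s a self-contained sketch of the polling loop described in the bullets above, with the database calls taken out. The SimulatedCheck function is made up purely for illustration and always returns false, so the loop runs until MaxIterations is reached.

let
  //The number of times to check and the number of seconds to wait between checks
  MaxIterations = 3,
  DelaySeconds = 1,
  //Stand-in for the query against ContinueFlagTable; it takes the iteration
  //number (which the real query passes into SQL to defeat caching) and
  //always returns false here
  SimulatedCheck = (iteration as number) as logical => false,
  //List.Generate acts as a do loop: keep iterating while the check returns
  //false and the maximum number of iterations hasn't been reached, waiting
  //DelaySeconds before each check via Function.InvokeAfter
  Iterate = List.Generate(
      () => 1,
      each (
        Function.InvokeAfter(
            () => SimulatedCheck(_),
            #duration(0, 0, 0, DelaySeconds)
          ) = false
        ) and (_ < MaxIterations),
      each _ + 1
    )
in
  Iterate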

Refreshing the table in Power BI Desktop with the value True in ContinueFlagTable loads the data successfully; in Profiler you can see one query to ContinueFlagTable and one query to get the data to be loaded from the DimDate table:

Refreshing the table with the value False in ContinueFlagTable shows the specified number of queries to ContinueFlagTable and no query to DimDate:

Here’s the error message you get in Power BI Desktop when this happens:

Here’s the same error in the Power BI Service when you try to refresh the dataset:

Visualising Azure Analysis Services Processing Tasks With The Job Graph Events Sample

I’ll leave it to other people to get worked up about things like Power BI Premium gen2 – what really gets me excited is a new Profiler event type. And indeed there is a new one to play with – Job Graph – that you can use to visualise Azure Analysis Services processing jobs (it’s not enabled yet for the Analysis Services engine in Power BI).

More details about it, and how it can be used, are in the samples here:

https://github.com/microsoft/Analysis-Services/tree/master/ASJobGraphEvents

The data returned by the Job Graph event isn’t intelligible if you look at the text it returns in Profiler. However if you save a .trc file with Job Graph event data to XML you can use the Python scripts in the GitHub repo to generate DGML diagrams that can be viewed in Visual Studio, plus Gantt charts embedded in HTML. Of course to do this you’ll need to have Python installed; you’ll also need to have Visual Studio and its DGML editor installed (see here for details).

Here’s the DGML diagram for the refresh of a very simple AAS database with just one table in it, called People:

The eight nodes in a semi-circle around the top of the diagram represent the eight jobs that process the attribute hierarchies in the table. These eight jobs must be completed before the single partition in the table is processed (this is the node in the centre of the diagram) and that partition must be processed before the table itself is processed (this is the node at the bottom of the diagram).

Since this is the annotated output you can hover over a node in the diagram and see more details about it, including its duration and how long it was blocked for:

Notice how three nodes have dark backgrounds? These nodes represent the critical path. From the sample documentation:

When the job graph executes, there is always a job that finishes last before the engine can commit the change. This job that finishes last is the “critical dependency” for the commit; the entire commit needs to wait for this one job to finish before executing. This last job depends on other jobs, one of which finished after all the others. This is the next critical dependency. Tracing this path of critical dependencies forms the critical path, which helps engineers and customers identify why processing takes so long.

To diagnose slow refresh times, look for the critical path and start at the top. Look at “Blocked duration”, “Waiting duration”, and “Running duration”. If a job has a long blocked duration, it spent a long time waiting on other jobs. If a job has a long waiting duration, it was waiting for an available thread, so increasing maxParallelism could help. If a job has a long running duration, then the model might have to be changed to speed up the job.

Here’s another diagram, generated from a slightly more complex database with four tables, a calculated column and a calculated table:

There’s also a script that takes this data and generates a Gantt chart from it, but at the time of writing it seems to be a bit buggy; I can make it work if I comment out some code but I suspect that means the output isn’t accurate. Hopefully it will get fixed soon, but in the meantime here’s an example of what it produces:

I think this is going to be really useful when I’m troubleshooting processing performance problems. It would also be really cool to have this in DAX Studio (hint, hint)…

Handling Multi-select In Power BI Dynamic M Parameters

Even though the documentation for dynamic M parameters does mention how to handle multi-select in the M code for your Power Query queries, I thought it would be useful to provide a detailed example of how to do this and explain what happens behind the scenes when you use multi-select.

Once again I’m going to use Azure Data Explorer as my DirectQuery data source in this post. Consider the following simple KQL query, which returns a list of counties in the UK and the average price paid for real estate in them:

pricepaid
| where county in ('DEVON', 'CORNWALL', 'KENT')
| summarize avg(price) by county

The third line of this query uses a combination of the KQL where and in operators to filter the list of counties in a way that’s very similar to other query languages such as SQL. The challenge is to write an M expression that will generate the comma-delimited list of county names in parentheses at the end of this line.
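
Before looking at the full query, here’s a minimal sketch of just that transformation, with the county names hard-coded for illustration:

let
  //Stand-in for the list of values a multi-select slicer sends to the parameter
  SelectedCounties = {"DEVON", "CORNWALL", "KENT"},
  //Wrap each value in single quotes
  AddSingleQuotes = List.Transform(SelectedCounties, each "'" & _ & "'"),
  //Join them with commas, giving 'DEVON','CORNWALL','KENT'
  DelimitedList = Text.Combine(AddSingleQuotes, ",")
in
  DelimitedList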

Taking the query above and generating a non-dynamic DirectQuery table in Power BI is straightforward, and let’s assume that you have already created an M parameter called SelectedCounty:

… a table of county names:

…and bound the County column of this table to the SelectedCounty M parameter and turned on the Multi-select option in Model View in the main Power BI Desktop window:

Now the confusing thing for me, when I first started to look at this problem, was that the M parameter called SelectedCounty I created was of type Text but the example code in the documentation was written for an M parameter of type list – when in fact it isn’t possible to create an M parameter of type list in the Power Query Editor (at least not at the time of writing). It turns out that when the Multi-select option is turned on Power BI is able to send a value of type list to the parameter regardless of what type you have defined for it.
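
You can see this behaviour with a trivial test: the same Type.Is check returns false for a single text value and true for a list (the values below are just for illustration):

let
  SingleValue = "DEVON",
  MultipleValues = {"DEVON", "CORNWALL"},
  //false: a single text value is not a list
  IsSingleValueAList = Type.Is(Value.Type(SingleValue), List.Type),
  //true: this is what Power BI sends when the Multi-select option is turned on
  AreMultipleValuesAList = Type.Is(Value.Type(MultipleValues), List.Type)
in
  [
    IsSingleValueAList = IsSingleValueAList, 
    AreMultipleValuesAList = AreMultipleValuesAList
  ]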

With that knowledge here’s an example of an M query to generate the query using the parameter:

let
  CountyList = 
    if 
    //check to see if the parameter is a list
      Type.Is(
        Value.Type(SelectedCounty), 
        List.Type
      ) then 
        //if it is a list
        let
          //add single quotes around each value in the list
          AddSingleQuotes = List.Transform(
              SelectedCounty, 
              each "'" & _ & "'"
            ),
          //then turn it into a comma-delimited list
          DelimitedList = Text.Combine(
              AddSingleQuotes, 
              ","
            )
        in
          DelimitedList
    else 
      //if the parameter isn't a list
      //just add single quotes around the parameter value
      "'" & SelectedCounty & "'",
  //generate and run the KQL query
  Source = AzureDataExplorer.Contents(
      "https://mycluster.northeurope.kusto.windows.net", 
      "pricepaid", 
      "pricepaid#(lf)| where county in ("
        & CountyList &
        ")#(lf)| summarize avg(price) by county", 
      [
        MaxRows                 = null, 
        MaxSize                 = null, 
        NoTruncate              = null, 
        AdditionalSetStatements = null
      ]
    )
in
  Source

Note that the CountyList step has to check the data type of the parameter using an if statement, because in the Query Editor it will always be Text whereas in the report it will be a list if multi-select is turned on and Text if not.
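
As an alternative sketch (the function name fnQuoteList is made up), you could normalise the parameter to a list first, so that a single code path adds the quotes and commas:

let
  fnQuoteList = (SelectedCounty) =>
    let
      //If the parameter is already a list use it as-is,
      //otherwise wrap the single text value in a list
      AsList = 
        if Type.Is(Value.Type(SelectedCounty), List.Type) 
          then SelectedCounty 
          else {SelectedCounty},
      //Add single quotes around each value and join with commas
      Quoted = List.Transform(AsList, each "'" & _ & "'"),
      Result = Text.Combine(Quoted, ",")
    in
      Result
in
  fnQuoteList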

Here’s the final report showing a multiselect slicer passing values into this query:

[Thanks to Ravi Kiran Vemulapalli and Sujata Narayana for their help with this post]

Geospatial Analysis With Azure Data Explorer, Power BI And Dynamic M Parameters

Since last week’s blog post about dynamic M parameters generated so much interest, this week I thought I’d give you another example of something cool you can do with them when you’re using Azure Data Explorer (ADX) as a DirectQuery source in Power BI: geospatial analysis.

Let’s say you work for a chain of supermarkets and want to use Power BI to see what other competing stores are close to one of your stores. First of all you need some data, and in this case my data source was this list of British supermarkets and their locations, published by the nice people at Geolytix, and called Retail Points. Once again I loaded the csv file into ADX and was able to write a KQL query using the geo_point_in_circle() function to filter the list of supermarkets down to those within a specified distance of a given supermarket. For example, here’s a simple KQL query that finds all the supermarkets within a 5000m radius of the location of my local Marks and Spencer store in Amersham:

let centrestore =  
retailpoints
| where store_name == "M&S Amersham SF"
| project long_wgs, lat_wgs;
let centrelat = toscalar( centrestore | summarize max(lat_wgs));
let centrelong = toscalar ( centrestore | summarize max(long_wgs));
retailpoints
| where geo_point_in_circle(long_wgs, lat_wgs, centrelong, centrelat, 5000)

[Apologies if this isn’t the greatest KQL query – I’m still learning]

I was then able to create two M parameters called selectedstore and selectedradius and bind them to two other tables in my dataset using dynamic M parameters so that the end user could use slicers to select:

  • The store they want to use as the central point of their analysis
  • The size in metres of the radius of the circle around the central point within which other stores should be displayed

These M parameters could then be injected into the KQL query of a DirectQuery table using the following M query:

let
  Source = AzureDataExplorer.Contents(
      "https://mycluster.northeurope.kusto.windows.net", 
      "retailpoints", 
      Text.Format(
          "declare query_parameters(filterstore:string = ""#[storeMparametername]"", centreradius:int = #[radiusMparametername]);#(lf)let centrestore =  #(lf)retailpoints#(lf)| where store_name == filterstore#(lf)| project long_wgs, lat_wgs;#(lf)let centrelat = toscalar( centrestore | summarize max(lat_wgs));#(lf)let centrelong = toscalar ( centrestore | summarize max(long_wgs));#(lf)retailpoints#(lf)| where geo_point_in_circle(long_wgs, lat_wgs, centrelong, centrelat, centreradius)", 
          [
            storeMparametername  = selectedstore, 
            radiusMparametername = selectedradius
          ]
        ), 
      [
        MaxRows                 = null, 
        MaxSize                 = null, 
        NoTruncate              = null, 
        AdditionalSetStatements = null
      ]
    )
in
  Source

There are two important things to notice here:

  • I declared KQL query parameters to hold the selected store name and selected radius at the start of the KQL query. As the documentation notes, this is important to guard against KQL injection attacks.
  • I used the M Text.Format function to inject the M parameter values into the KQL query parameter declarations, which was easier and cleaner than generating the KQL query string by concatenating text.
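
Here’s a minimal sketch of that Text.Format pattern on its own: each #[name] placeholder in the format string is replaced by the corresponding field of the record passed as the second argument (the store name and radius below are just example values):

Text.Format(
    "declare query_parameters(filterstore:string = ""#[storeMparametername]"", centreradius:int = #[radiusMparametername]);", 
    [
      storeMparametername  = "M&S Amersham SF", 
      radiusMparametername = 5000
    ]
  )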

And here’s the report I built, for example showing how you can select an Aldi store in Aberdeen and display all stores within a 1km, 3km and 7km radius of that store:

Not the most sophisticated geospatial report in the world, I admit, and I know very few people are using Azure Data Explorer as a data source for Power BI right now, but I do think it is a fun example of the kind of analysis that dynamic M parameters now make possible.

Why I’m Excited About Dynamic M Parameters In Power BI

My favourite feature in the October 2020 release of Power BI is undoubtedly dynamic M parameters. However, before I explain why, I’m going to put my hands up and admit that we screwed up the communication around its launch and didn’t explain its limitations properly – which led to some understandable disappointment and confusion.

Although the blog post and the docs have now been updated to make it clear that this feature doesn’t work with many popular data sources like SQL Server, Synapse, Oracle, Teradata and SAP (see here for the full details), I thought I’d explain why. It’s because there are two sets of connectors that Power BI uses for DirectQuery: an older set that was built some time ago for the data sources originally supported by Analysis Services Tabular, and a newer set that is built in Power Query’s M language. Dynamic M parameters only work for this newer set of connectors; if you want to see this feature extended so it works for the older connectors too then please vote on ideas.powerbi.com, talk to your Microsoft account team, call Satya or whatever you normally do.

OK, so why am I excited? Well, because dynamic M parameters let you do a whole bunch of cool things that weren’t possible before. My original plan for this blog post was to show how they can be used to change the dimension column used in a visual dynamically but Mim beat me to it with this blog post. Instead in this post I’m going to show you how you can use them to create something like dynamic calculated columns and I also want to make a more general point that will Blow Your Mind (hopefully).

For my example I’m going to use Azure Data Explorer (ADX) as my data source. There are a few reasons why: it’s something I’ve been interested to learn more about for a while, and the ADX team recently announced a bunch of new features which make me think it will be increasingly useful in BI scenarios; it will be important to know it if you’re using the new Azure Monitor integration for Power BI log data; and, crucially, it’s one of the few M-based connectors where you can write your own queries in DirectQuery mode (although this is coming to Snowflake and Redshift next year too).

My source data is from the UK Land Registry Price Paid dataset, and for my demo report I loaded a table containing around 790,000 real estate transactions into ADX. Each real estate transaction has the price paid for the property:

One obvious way to analyse this data is to build a histogram and show the number of transactions that occur in price bins such as £0-£100000, £100000-£200000, £200000-£300000 and so on. This is quite easy to do in Power BI, for example using calculated columns in Import mode, unless you want to allow your users to change the bin sizes themselves. Dynamic M parameters give you an easy way to solve this problem though.
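
Before moving on, here’s a sketch of what that static approach might look like as a Power Query custom column (rather than a DAX calculated column); the tiny inline table is made up, and the bin size is fixed at design time, which is exactly the limitation dynamic M parameters remove:

let
  //A few example price paid values
  Source = #table(type table [price = number], {{95000}, {180000}, {260000}}),
  //Assign each price to the bottom of its £25000-wide bin
  AddPriceBin = Table.AddColumn(
      Source, 
      "PriceBin", 
      each Number.RoundDown([price] / 25000) * 25000, 
      type number
    )
in
  AddPriceBin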

KQL, the query language for ADX, has a really nice way of generating bins for histograms using the summarize() operator and bin() function. For example, the following KQL query does something like a SQL group by and shows the number of real estate transactions for each county and bins based on price in £25000 increments:

pricepaid 
| summarize Transactions=count() 
by county, bin(price, 25000)

In Power BI I created a parameter called binsizes:

…and then created a parameterised version of the KQL query above to populate a DirectQuery table. Here’s the M code for this table:

let
    Source = 
     AzureDataExplorer.Contents(
      "https://mycluster.northeurope.kusto.windows.net", 
      "pricepaid", 
      "pricepaid | summarize Transactions=count() by county, bin(price, " 
      &amp; Number.ToText(binsizes) &amp; 
      ")", 
      [MaxRows=null, MaxSize=null, NoTruncate=null, AdditionalSetStatements=null]
     )
in
    Source

Next I created a new table with the possible bin sizes that the user can choose using a DAX calculated table:

And then finally linked the column in this new table with the M parameter:

Here’s the final report, where you can see the number of transactions in each price bin and use slicers to change the price bins and change the county you’re looking at:

Apart from being a new solution to an old problem, what does this mean? Well, with a data source like ADX where M parameters are supported and you can write your own queries in DirectQuery mode, you can do whatever you want in the query so long as it always returns the same columns. As well as creating parameterised queries, in ADX you could also create a user-defined function (something like table-valued functions in SQL Server), pass parameters into it using M parameters, and have a totally dynamically generated fact table in your dataset. Either way, using dynamic M parameters in this way allows you to use KQL’s specialised functions for basket analysis or geospatial analysis to do things that are otherwise impossible in Power BI. I’m sure similar things are possible with other data sources too. And that, my friends, is why I’m so excited about dynamic M parameters.

Build Scalable BI Solutions Using Power BI and Snowflake

On the Power BI team we want our customers to be successful whichever data source they’re using – not just the Microsoft ones. Recently I had the pleasure of recording a webinar on the subject of best practices for using Power BI with Snowflake, along with Craig Collier from Snowflake and Chris Holliday from Visual BI (a partner that specialises in Power BI/Snowflake solutions). You can watch it here:

https://info.microsoft.com/ww-landing-build-scalable-BI-solutions-using-power-BI-and-snowflake.html

There’s a summary of what we talk about on the Snowflake blog:

https://www.snowflake.com/blog/maximizing-power-bi-with-snowflake/

We’re continuously improving the performance and functionality of all our connectors, and Snowflake is no exception: we recently announced that in early 2021 we’ll have support for Snowflake roles and the ability to use your own SQL queries in DirectQuery mode. Power BI support for Azure Service Tags (on the roadmap here) will also be very useful for Snowflake customers.

A Function To Visualise Parallelism In Power Query Diagnostics Data

Most of the time I’ve spent looking at Power Query Query Diagnostics data, I’ve been looking at data for a single query. Over the past few days though I’ve spent some time investigating what this data shows for all the queries that are executed for a single dataset refresh. To help me do this I wrote the following M function:

(InputTable as table) => 
  let
    Source = InputTable,
    EarliestStart = List.Min(Source[Start Time]),
    AddRelativeStart = Table.AddColumn(
        Source, 
        "Relative Start", 
        each [Start Time] - EarliestStart
      ),
    AddRelativeEnd = Table.AddColumn(
        AddRelativeStart, 
        "Relative End", 
        each [End Time] - EarliestStart
      ),
    SetToDurations = Table.TransformColumnTypes(
        AddRelativeEnd, 
        {
          {"Relative Start", type duration}, 
          {"Relative End", type duration}
        }
      ),
    CalculatedTotalSeconds
      = Table.TransformColumns(
          SetToDurations, 
          {
            {
              "Relative Start", 
              Duration.TotalSeconds, 
              type number
            }, 
            {
              "Relative End", 
              Duration.TotalSeconds, 
              type number
            }
          }
        ),
    GroupedRows = Table.Group(
        CalculatedTotalSeconds, 
        {"Id", "Query"}, 
        {
          {
            "Relative Start", 
            each List.Min([Relative Start]), 
            type number
          }, 
          {
            "Relative End", 
            each List.Max([Relative End]), 
            type number
          }
        }
      ),
    MergeColumns = Table.AddColumn(
        GroupedRows, 
        "Id Query", 
        each Text.Combine({[Id], [Query]}, " "), 
        type text
      ),
    IdToNumber = Table.TransformColumnTypes(
        MergeColumns, 
        {{"Id", type number}}
      )
  in
    IdToNumber

If you invoke this function and pass in the “Detailed” query diagnostics data query:

…you’ll end up with a query that gives you a table that looks something like this:

This table has one row for each Power Query query that got executed while the diagnostics trace was running, an Id column to uniquely identify each execution, the name of the query executed, an Id Query column that concatenates the previous two columns, and Relative Start and Relative End columns that give you the number of seconds from the start time of the first query executed up to the start time and end time of the query on the current row.
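
To use it, save the function as its own query and invoke it with the detailed diagnostics table; the query names below (fnVisualiseParallelism and Diagnostics_Detailed) are just placeholders for whatever yours are called:

let
  //Pass the "Detailed" query diagnostics table into the function above
  Output = fnVisualiseParallelism(Diagnostics_Detailed)
in
  Output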

Loading the data into Power BI allows you to build a stacked bar chart with Id Query on the axis and Relative Start and Relative End together in the values:

If you then set the Sort By Column property of Id Query to the Id column (which is numeric, so this ensures that the values in Id Query are sorted correctly), set the Data Color property of the Relative Start values to white (or whatever the background of your chart is) so that it’s invisible, and finally sort the axis of your stacked bar chart by Id Query in ascending order, you get a visual something like this:

This is a kind of Gantt chart where the x axis shows the number of seconds since the start of the first query execution and each bar shows the start and end times of each query, making it easy to see which queries are executing in parallel.

“What can this show us?” I hear you say. Well, that’s something for a future blog post. All that I can say right now is that it’s COMPLICATED and I don’t properly understand it myself yet. But it’s undoubtedly interesting.

View Native Query Now Works For Analysis Services Data Sources

If you’re familiar with the topic of query folding in Power Query, you’ll know that the View Native Query right-click option in the Applied Steps pane of the Power Query Editor can be used to show the native query that is run against the data source. You may also know that there are some data sources where query folding does take place but where View Native Query remains greyed out. One of those used to be Analysis Services, but the good news is that that is no longer the case: you can use View Native Query when importing data from Analysis Services! Look:

I’m told it also now works for SAP BW, but I haven’t tested it.

Exporting Data From Websites For Use In Power BI Using Power Automate UI Flows

As a Power BI developer I know how Power Query makes it easy to get data from web sources, but I also know there are limits to what it can do. I can get data from tables on web pages, I can get data from web services, but when confronted with a website where you have to click a button to download a file there’s a problem. Take https://data.police.uk/data/ for example:

This site allows you to download crime data for England and Wales, but to do so you need to check some boxes and then click the “Generate file” button at the bottom (yes I know there’s also an API, but this is only an example). You can’t click a button in Power Query.

Enter Power Automate UI Flows. I’ve been a fan of Power Automate for a while now but it was only the release of the new Power Automate Desktop preview last week that I decided to check out its Robotic Process Automation (RPA) capabilities. Basically, if you’re still doing lots of manual downloading, clicking, moving files etc before you can load your data into Power BI then Power Automate UI Flows can do all this for you automatically by recording and replaying everything you do.

Now I’m not a UI Flows expert by any stretch of the imagination, but within an hour I was able to create a UI Flow that:

  • Opened a browser and went to https://data.police.uk/data/ (quick tip: if you want to save a file from a browser you need to use the special Automation Browser, not Chrome, Edge or Firefox)
  • Checked the boxes to download crime, outcome and stop and search data for Thames Valley Police
  • Clicked the Generate file button
  • Clicked the Download Now button on the next screen when it appeared
  • Downloaded the resulting zip file to a particular location
  • Unzipped the file to a subfolder
  • Deleted the zip file

Here’s what it looked like in Power Automate Desktop:

Now this is my first ever UI Flow so it’s probably not as robust as it could be, but it seemed to work ok. I was then able to call it from Power Automate and refresh a dataset once the UI Flow had finished:

I guess the point of this post is not to tell you how to use Power Automate UI Flows to extract data from web pages for use with Power BI, just that it’s there for you to use if you need it. It’s important to be aware of the other tools that are available in the Power Platform and understand how they can be used alongside Power BI, rather than focusing on trying to solve your problems with Power BI alone. Not only are there a lot of cool things happening with Power Automate that are directly relevant to Power BI (such as the new Export actions and the upcoming ability to kick off a Flow when a Power BI dataflow finishes refreshing – see the demo at 33:00 here) but there are other features in Power Automate like UI Flows that can be used for BI purposes, such as the ability to run an Office Script from a Flow which can be useful for preparing Excel data for Power BI.

If you want to learn more about Power Automate Desktop there’s a good multi-part tutorial on YouTube starting here; there’s also a three-part series on extracting data from web pages too (part 1, part 2, part 3) although it uses an example that Power Query handles much more elegantly 😉

Power BI FastTrack Recognized Solution Architect 2021 Nominations Open

Are you really, really good at designing Power BI solutions? If so, you should nominate yourself for Microsoft’s Power BI FastTrack Recognized Solution Architect (FTRSA) distinction! Here’s the official description of what this is:

FastTrack Recognized Solution Architect is a recognition conferred by the Power Platform product engineering team upon a practicing solution architect for consistently exhibiting deep architecture expertise and creating high quality solutions during customer engagements. A FastTrack Recognized Solution Architect typically works for a systems integrator partner.

You can see some of the people who received this distinction at the Microsoft Business Applications Summit 2020 here:

https://powerplatform.microsoft.com/en-us/fasttrack/recognized-solution-architects/

I’ll be honest, the bar is set extremely high and only the very best will be successful. For an architect to be eligible for the FTRSA distinction, he/she must meet the following requirements:

  • Must have a minimum of 2 years of experience with Power BI and a minimum of 5 years of experience with Enterprise BI solutions.
  • Must have a minimum of 2 years of experience as an Enterprise BI architect.
  • Must be working for a partner with Gold certification in Data Analytics MPN Competency.
  • Must have been lead architect for at least 2 Power BI in-production implementations with at least 200 active users.

Why nominate yourself?

  • FTRSA profiles, and the partners that they work for, will be listed on a public-facing page on the Power Platform product websites in recognition of their contributions towards driving customer success.
  • Partners can indicate the number of FTRSA(s) they have in their promotional materials. They may also point prospective customers to the public-facing profile listing page mentioned above.
  • The architects will get an e-badge that can be shared on LinkedIn and other social media platforms.
  • The architects will receive a special mention at the annual Microsoft Business Applications Summit.

If you’re interested and meet all the requirements you can nominate yourself here:

http://aka.ms/FTRSANomination

Nominations close on November 30th 2020.
