
Power BI Review, Part 2.5: Q&A (Now I’ve Used It With My Data)

A few months ago I posted a review of Q&A, the natural language query functionality in Power BI, based on the sample data sets that were then available. Last week, finally, we got the news that we could enable Q&A on our own Power Pivot models, and having played with this new release I thought it was a good idea to post an update to my original thoughts.

The first thing to point out is that even if you have a Power BI Preview tenant, you will need the latest version of Power Pivot for Excel to get the best out of Q&A. This release adds the ability to define ‘Synonyms’ in the model: as a model creator, you can tell Power BI about the other names that end users might use when querying your model. For example, a Geography dimension might have a column called State, but if you are a multinational company that column might contain the names of states in the USA, cantons in Switzerland, counties in the UK, departments in France and so on. You will therefore want Power BI to know that if a user asks for sales by county in the UK, it should actually look in the State column. Devin Knight has already written a good post showing how synonyms work with Q&A, which you can see here.

Another complication is that, at the time of writing, the Synonym functionality is only available to users who have installed the streamed version of Office 2013 from Office 365. I have an Office 365 subscription but had previously installed Office from an msi, so I had to uninstall Office and reinstall the streamed version before I could see Synonyms. I assume that support for Synonyms in the non-streamed version of Excel will come at some point, but in general I would expect new Power BI functionality to appear in the streamed version of Office first, so if you’re serious about BI you should change over to it as soon as you can. Melissa Coates has a lot more detail on this issue here.

But enough about setup, what about Q&A? The data that I tested it on was a model I’ve been using for user group and conference demos for about six months now, which contains data from the UK’s Land Registry and details all residential property transactions in England and Wales in 2013. It’s fairly simple – two tables, a date table and a transactions table containing around 0.5 million rows – so probably a lot simpler than the average Power Pivot model, but nonetheless real data that had been polished for demo purposes. The Excel file holding it is around 25MB, so I was well within the Power BI file size limits.

My initial impression after I had added my existing model (with no synonyms etc) to Q&A was that while it worked reasonably well, it worked nowhere near as well as the demo models I had seen. I then set about making changes to the model and re-uploading it, and these changes made all the difference. Some examples of the things I did are:

  • Changing table and column names. In my model I had already taken the trouble to make them human-readable, but that did not necessarily make them suitable for Q&A. For example, my main fact table was called ‘Land Registry’, so at first Q&A kept suggesting questions like “How many land registries were there in June…”, which clearly makes no sense. Renaming the fact table to ‘Sales’ fixed this.
  • Setting synonyms. Unsurprisingly, this had a big impact on usability in the same way that changing the table and column names did. I found that I had to go through several iterations of uploading the data, writing questions, seeing what worked and what didn’t, and adding more synonyms before I had a set that I was happy with; I can imagine that in the real world you’d need to round up several end users and lock them in a room to see how they phrased their questions so as to get a really good list of synonyms for them.
  • Setting Power View-related properties. This included setting the Default Field Set on a table, so I only saw a few important fields in a meaningful order when Q&A returned a table result; and also Summarize By so that Q&A didn’t try to aggregate year values. All of this makes sense given how closely-related Q&A and Power View are, but even though I had a reasonably ‘finished’ model to start off with I still hadn’t set all of these properties because I knew I was never going to try to sum up a year column.
  • Adding new columns. There were a number of cases where I realised that I, as a human user, was able to make assumptions about the data that Q&A could not. For example, the source data records sales of four different types of residential property: terraced, detached, semi-detached and flat. The first three are types of house, but the source data doesn’t actually state anywhere that they are houses, so in order to see the total number of sales of houses I had to add another column that explicitly defines which property types are houses.
  • Disambiguation. Probably the most irritating thing about the Bing geocoding service that Power View and Q&A use is the way it always chooses a US location when you give it an ambiguous place name. Therefore when looking at sales by town I would see the town name “Bristol” show up on the map as Bristol, Tennessee (population 24,821) rather than Bristol in England (population 416,400). Creating a new column with town name and country concatenated stopped this happening.
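The last two fixes can both be sketched as DAX calculated columns. The table and column names below (Sales, Property Type, Town, Country) match my model’s terminology, but treat the exact formulas as illustrative rather than the precise ones I used:

```dax
-- Hypothetical calculated column on the Sales table: the source data never
-- states that terraced, detached and semi-detached properties are houses,
-- so spell it out explicitly for Q&A
Property Category =
SWITCH (
    Sales[Property Type],
    "Terraced", "House",
    "Detached", "House",
    "Semi-Detached", "House",
    "Flat", "Flat"
)

-- Hypothetical calculated column: concatenate town and country so that
-- Bing geocodes "Bristol, England" rather than Bristol, Tennessee
Town And Country = Sales[Town] & ", " & Sales[Country]
```

With columns like these in place, a question such as “number of house sales by town and country” has something concrete to bind to instead of relying on assumptions the engine cannot make.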

The Microsoft blog post announcing Q&A that I referenced above promises that a more detailed guide to configuring models for Q&A will be published soon, which is good news. The important point to take away from this, though, is that even the most polished Power Pivot models will need additional tweaks and improvements in order to get the best out of Q&A.

The big question remains, though, whether Q&A will be something that end users actually get some value from. As a not-very-scientific test of this I handed my laptop over to my wife (who has no experience of BI tools but who has a healthy interest in property prices) to see how easy it was for her to use, and straight away she was able to write queries and find the information she was looking for, more or less. There were still a few cases where Q&A and/or my model failed, such as when she searched for “average house price in Amersham” – the model has a measure for “average price”, it knows about the property type “house” and the town “Amersham”, but “average house price” confused it and the query had to be rewritten as “average price of a house in Amersham”. Overall, though, I was pleasantly surprised and as a result I’m rather less sceptical than I was about Q&A’s usefulness, even if I’m still not 100% convinced yet.

6 thoughts on “Power BI Review, Part 2.5: Q&A (Now I’ve Used It With My Data)”

  1. Hi Chris,

    As a long-time developer of the “traditional” MS BI stack of SQL, SSRS, SSAS, SSIS and the Kimball model, I am still struggling to see how Power BI can become a truly effective self-serve solution without some serious BI training.

    Don’t get me wrong, I think the new product(s) look great, and they do provide the ability to create a totally self contained BI solution. The issue for me is, how does the average analyst develop the skills required to create an effective model for reporting if they don’t have database/SQL knowledge of their source systems?

    Take this scenario: we have a fictional company using a standard SQL Server OLTP system developed in house. This is designed to the relevant normal forms and is well built, in Codd land.

    Now, said analyst comes along with their Power BI stack and wants to build an MI solution on top of this system. This person will then need to explore the system’s back end, determine relationships, join tables where appropriate (if there are foreign keys this is handled nicely by Power Pivot), and then work out which fields in the back end map to which screens in the front end, so that what the consumers of the new MI system see matches what they remember from the front-end application.

    For me as a developer this is the norm and is what I’m expected to do/know. But in reality, how do we expect the average analyst to be able to do this work themselves?

    I don’t intend to dismiss the abilities of the various MI analysts out there, but from my own experience the ones I’ve met are undoubted domain experts and very gifted in Excel. But modelling a BI solution in Power Pivot to give true and accurate measures, attributes etc.? Isn’t this asking a lot?

    I’m trying to sell the virtues of Power BI as I do see its potential, but at the moment I still lean towards building a Tabular cube atop a Kimball-style DW and then letting the analysts point Excel, Power View etc. at this product.

    What are your thoughts on this? I’m sure I’m missing a trick with what is undoubtedly an excellent tool.

    1. Chris Webb says:

      Hi Mike,

      To answer your main question, I don’t think Power BI or any self-service BI tool can be truly effective without training, and I think only a small minority of analysts have the required skills to use it to its full potential. That said, when you consider the potential size of the Excel market, a ‘small minority’ is a gigantic number of users and therefore worth MS pursuing. Anyone who works with Access today can build something useful with Power Pivot.

      Also I think the whole idea that self-service tools are used only by end users is a marketing trick: from what I can see, most self-service BI tools are used by technical or semi-technical people. The big difference between Power Pivot and SSAS, though, is that the barrier to entry is much lower in terms of skill level and license costs. You can build a basic BI solution in Power Pivot with much less money spent on licenses and crucially using much less expensive IT resources than you do for SSAS – it’s the difference between needing a highly-paid MS BI consultant and a much lower-paid report designer, albeit one still with basic SQL skills. Similarly, you can argue that it’s now much easier for someone who is highly skilled like a DBA or a .NET developer to do a bit of BI work on the side, without them needing to spend too much time learning a new technology and without the need to hire a separate BI developer; again, this makes things much cheaper overall.

      Finally (and this should come as no surprise given my background), I agree that in many (most?) cases giving analysts an SSAS Tabular or Multidimensional solution is a better idea than letting them loose with Power Pivot. This is still what most people think of as self-service BI; it’s just that in this case it’s self-service reporting and querying without the data modelling (which is the difficult bit). I wish MS would promote this a bit more… but alas, it’s not fashionable at the moment.

