Reporting Services

SSRS and SSAS-Sourced Parameter Dataset Performance Issues

Ahhh, Reporting Services and Analysis Services integration – a never-ending source of pain and frustration! Although I shouldn’t complain, because it’s been quite lucrative for me in terms of consultancy work…

Anyway, recently I was tuning an SSRS report based on an SSAS cube for a customer. Looking at Profiler I could see a number of slow queries being executed, which was strange given that it was a fairly simple report. It turned out that during the design phase this report had had a number of parameters created in the SSAS query designer that were later deleted; however, when you delete a parameter in the SSAS query designer, BIDS does not delete the hidden dataset bound to it. What’s worse, when an SSRS report is executed all dataset queries are executed too, even if the datasets aren’t used anywhere in the report, so you get the overhead of running extra queries. It’s an easy mistake to make, and in this case the execution of unused datasets was adding several seconds to the execution time of the report.

You can reproduce this problem very easily by creating a simple report based on the Adventure Works cube. Once you’ve created the data source, open the query designer, drag the Internet Sales Amount measure onto the grid, drag the Customer hierarchy from the Customer dimension onto the filter pane and check the Parameters box:


Now close the query designer and reopen it, then remove the Customer hierarchy from the filter pane, close the query designer again and delete the report parameter. When you Preview the report you’ll see the following in Profiler:

The highlighted row shows the hidden dataset being executed. To fix this, right-click on your data source and check the Show Hidden Datasets option:

You’ll then see the offending, automatically generated hidden dataset and you can delete it:

Luckily, BIDS Helper has functionality to find unused datasets in your reports for you:
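If you don’t have BIDS Helper to hand, you can do a rough version of the same check yourself: an RDL file is just XML, so you can scan it for datasets that are defined but never referenced anywhere else in the report. The sketch below is only my illustration of the idea, not BIDS Helper’s actual logic, and the sample report snippet is invented:

```python
import xml.etree.ElementTree as ET

def unused_datasets(rdl_xml):
    """Return names of datasets defined in <DataSets> but never
    referenced via a <DataSetName> element elsewhere in the report."""
    root = ET.fromstring(rdl_xml)
    # RDL files are namespaced; match on local tag names to stay generic.
    local = lambda el: el.tag.split('}')[-1]
    defined, referenced = set(), set()
    for el in root.iter():
        if local(el) == 'DataSet' and 'Name' in el.attrib:
            defined.add(el.attrib['Name'])
        elif local(el) == 'DataSetName' and el.text:
            referenced.add(el.text.strip())
    return sorted(defined - referenced)

# Invented miniature report: one dataset is bound to a tablix,
# the other is an orphaned (hidden) parameter dataset.
sample_rdl = """<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">
  <DataSets>
    <DataSet Name="Main"><Query/></DataSet>
    <DataSet Name="CustomerCustomer"><Query/></DataSet>
  </DataSets>
  <ReportSections><Tablix><DataSetName>Main</DataSetName></Tablix></ReportSections>
</Report>"""

print(unused_datasets(sample_rdl))  # ['CustomerCustomer']
```

Note that a dataset feeding a report parameter’s available values is also referenced through a DataSetName element, so it counts as used here; only genuinely orphaned datasets, like the leftover hidden parameter datasets described above, should show up.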

And there’s more! What I found really interesting about this parameter dataset query was how long it was taking to execute: 2.5 seconds in this example, which seems like a very long time even on a warm cache, and even allowing for the large number of members on the Customer hierarchy. Once the report is deployed that goes down to a consistent 2.1 seconds, and when I run the same query through SQL Management Studio it goes down to 1.5 seconds. Why the difference in execution times? I’m not sure, but I suspect it’s a combination of the connection string properties used and the use of a flattened rowset. In any case, 1.5 seconds is still slow, and it’s certainly not good if you actually do want to use a query like this in a dataset bound to a parameter.
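One way to quantify this kind of overhead is to total up query durations from the Profiler trace. Here is a hedged sketch, assuming you have saved the trace as a CSV with EventClass, TextData and Duration columns (the column names and the use of milliseconds are assumptions about the export; real Profiler traces may record durations in microseconds):

```python
import csv
import io
from collections import defaultdict

def slowest_queries(trace_csv, top=3):
    """Sum Query End durations per query text from a Profiler trace
    exported as CSV, and return the `top` slowest queries."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(trace_csv)):
        if row['EventClass'] == 'Query End':
            totals[row['TextData']] += int(row['Duration'])
    return sorted(totals.items(), key=lambda kv: -kv[1])[:top]

# Invented sample rows mirroring the timings discussed above.
sample = """EventClass,TextData,Duration
Query End,WITH MEMBER ... parameter query,2500
Query End,SELECT ... main query,800
Query Begin,SELECT ... main query,0
"""

for text, duration in slowest_queries(sample):
    print(duration, text)
```

Sorting the aggregated totals makes it obvious when a supposedly trivial parameter dataset is dominating the report’s data-retrieval time.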

Luckily, if our parameter datasets are causing performance problems, we can usually rewrite the queries involved to make them faster. Here’s the original query from the parameter in the example:

WITH
MEMBER [Measures].[ParameterCaption] AS [Customer].[Customer].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue] AS [Customer].[Customer].CURRENTMEMBER.UNIQUENAME
MEMBER [Measures].[ParameterLevel] AS [Customer].[Customer].CURRENTMEMBER.LEVEL.ORDINAL
SELECT {[Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel]} ON COLUMNS,
[Customer].[Customer].ALLMEMBERS ON ROWS
FROM [Adventure Works]

If we decide that we can make do without the All Member and the level-based indenting that goes on in the parameter dataset (this is an attribute hierarchy, after all, so there’s just one level), we can use the following query in the dataset instead:

WITH MEMBER MEASURES.DUMMY AS NULL
SELECT {MEASURES.DUMMY} ON COLUMNS,
[Customer].[Customer].[Customer].MEMBERS ON ROWS
FROM [Adventure Works]

Once the first query above has been replaced with the second, and the report parameter has been hooked up to use the new fields, Profiler shows that the time taken to execute the parameter dataset has gone down to around 0.7 seconds:

That is, of course, almost 2 seconds faster than the original query in Preview mode and almost 1.5 seconds faster than the original query in the deployed report. Not a lot on its own but certainly noticeable, and if you have more than one large parameter the cumulative gain could be quite significant. If you create a separate OLEDB connection to the cube and use the second query in a dataset, the execution time is even faster, going down to around 0.45 seconds:

Incidentally, some of the more astute readers may be asking why I need to include MEASURES.DUMMY in the query above when I could use an empty set on the columns axis instead. Two reasons: first, if you use an empty set on columns with the OLEDB connection you get no rows returned; second, I noticed that when the query was being executed in SSRS a Query Subcube event was raised, suggesting measure data was being requested from the cube – this didn’t happen when I ran the query in SQL Management Studio. I suspect both problems are something to do with SSRS using a flattened rowset, so I’ll investigate and post back here when I get an answer.

27 thoughts on “SSRS and SSAS-Sourced Parameter Dataset Performance Issues”

  1. Your suggestion to delete unused datasets applies equally to SQL datasets. It appears that RS executes every dataset, whether it is used or not.

  2. Great stuff Chris. I work with SSRS and SSAS on most of my consultancy jobs and it is indeed astounding to see how badly SSRS and SSAS work together. Good for us, but bad for customers who are report builders. It should be a great match, but we have to hack it at every turn! The rumour is that this will improve much with SQL 11, which I hope 🙂

  3. They’ve been saying that the integration will improve ever since I saw the first beta of SSRS back in, ohhhh, 2002 I think. I’ll believe it when I see it!

  4. I second that one Chris. Slowly waiting for this integration to improve as well. Nice catch on the parameter dataset. Crazy. Thanks for sharing your tips.

  5. Can you provide any info on what SSRS does in the background to process a report from an OLAP cube that uses query parameters? Our report performance is very poor (compared with Excel 2007 pivot tables). For example, we have a report that has 5 query parameters and no unused datasets. When we profile the report, the SQL trace shows that each query parameter dataset is executed twice in a row; then, once they’ve all been executed, it executes each of them one more time before it begins the main dataset query. Each of these query parameters does not take much time, but the main dataset for this particular report takes nearly 22 seconds. We’ve tried comparing the times in Profiler to the SSRS execution log, but we cannot get them to tie out, and the execution log shows that the greater amount of time appears to be in TimeDataRetrieval (vs. TimeProcessing and TimeRendering). Yet we cannot find any information on how SSRS processes the data, datasets and reports themselves to figure out what is causing the poor performance. Just wondering if you can provide any insight on what it is doing? Thanks

  6. Which version of SSRS are you using, N? I seem to remember seeing SSRS trying to validate queries before it ran reports – and because MDX doesn’t support any kind of query validation, this involved SSRS running the queries to validate them and then running them again. If your main dataset always takes 22 seconds to run, then it sounds like the query needs some tuning. Do you have lots of calculations on the cube or in the WITH clause of the query? My guess is that for some reason the calculations are not being cached – which could be due to this issue: !7B84B0F2C239489A!3057.entry

  7. We are using SSRS 2005 Standard Edition (SP3). Yes, we’re thinking the main dataset needs some tuning, and this is where our issue lies: how do we determine where the problem is within the dataset? This is where we are ‘stuck’ and have been searching for answers. I write the reports (and am new to it as well, not really familiar with MDX), not the cube, and the MDX query for the SSRS report is generated by SSRS. I do not believe I have any calculations in the query (if there are, they were generated by SSRS). Our thought here is that all these calculations are in the cube, so we should essentially just be able to ‘drag & drop’ the dataset query onto the SSRS report for display. I’d copy the SSRS-generated code here, but I don’t know if that’s appropriate. I will forward the link you included to those here who build the cube to see if they can determine whether the caching would help. Thanks, and any other insight would be greatly appreciated.

  8. Hello Chris, is there a way to avoid this auto-generation of the hidden datasets? I have several datasets that should all use the same parameter but go against different cubes, and I want one specific cube to be the source for the hidden dataset, not the last one I put in the parameter. Greetings from Zurich 😉

  9. Can’t believe how badly SSRS and SSAS interact. Do the dev teams not like each other?

    I have gone down the route of trying to report from cubes and am having all sorts of problems.

    Excellent article btw. I am a noob to this and the steps above really helped me troubleshoot a problem I had.

    1. Chris Webb – My name is Chris Webb, and I work on the Fabric CAT team at Microsoft. I blog about Power BI, Power Query, SQL Server Analysis Services, Azure Analysis Services and Excel.
      Chris Webb says:

      Which query are you talking about?

  10. Never mind. Solved it.

    I’ve another question. Indeed the cube query time goes down drastically, but I noticed that the Serialize Results event (see Profiler) in the cube was taking a long time. With the solution you suggest here the cube query time is very low, but now I have problems in the report: the report (PreviewProcessingService.exe) is taking an extremely long time. I think it has something to do with the calculated field…

    do you have any suggestion?

      1. Chris Webb says:

        Interesting – all the more reason to push as much work as possible back to the query, I guess

  11. Yeah well, but the problem is how to get “All Customers” into the report parameter – otherwise you have to select every customer record and pass that to the cube… which doesn’t seem a desirable option?

    1. Chris Webb says:

      No, that wouldn’t be good. I can’t test this, but if you don’t use a calculated field the query should just return a blank name for the All Member when SSRS runs it as a flattened rowset. Could you then accept the blank name instead of seeing “All Customers”?

      1. Working on this issue and it’s driving me nuts. The problem is that I have to pass the set to a dataset and to a subreport. First I had problems passing the parameter to the subreport because of an “Invalid URI: The URI string is too long” error. So I thought I could change your suggested parameter query by adding the Code field:

        ON COLUMNS
        ,EXISTS( ([Customer].[Comp-Cust].[Customer Code].ALLMEMBERS * [Customer].[Customer Name].[Customer Name].ALLMEMBERS), [Customer].[Company Code].&[US])
        ON ROWS
        FROM [Cube]

        Now I also have the Customer Code and I thought I could pass this to the subreport. But now I get the same error even when passing the parameter to the dataset in the main report. Brrrr. Frustrating. Don’t know why I have this error now in the main report.

        So, Getting a bit frustrated.

        I’m working with visual studio 2013 and SQL Server 2008 R2.

        There seems to be a limitation on the number of characters that can be passed, 65554 or something like that. I think that may be the cause. I also read something about SQL Server 2012 and this problem in forum posts.

        Running out of options.

      2. I think I found another limitation of the OLEDB MDX query: you can’t pass a parameter to it. I thought I could limit the number of customers by checking whether there are sales in a particular week (also a parameter), but SSRS complains that this (using a parameter) is not supported by the data extension. So cascading parameters are also not possible with this solution.

        AS stored procedures? That seems a bit drastic for such a simple problem, in my eyes. Isn’t it a bit strange that you can’t have a normal parameter that performs (okay, it’s quite a big one)? We can solve that with the approach you suggest in this blog post. The next problem is the performance of the All Customers calculated field, so I have left that out of the report. And then, because we have such a large dataset in the parameter list, you can’t pass it to the subreport. Hmmmmmm

        I was thinking of passing the selected list not as MDX but as comma-separated values, using Join and Split, because quite a lot of overhead is transferred to the subreport – [Customer].[Customername] every time. Success not guaranteed.

        pfff.. Building just a simple parameter in a report…pfff

        Thnx for your help…appreciate this…


  12. Not sure if anyone has experienced an issue with the Designer itself? Over a WAN the designer slows to a crawl when dragging a field to the Filters or selecting the drop down to select a member to filter by, even if the attribute only has a couple of members.
