Deprecated And Discontinued Functionality In SSAS 2017

In the past I’ve blogged about deprecated and discontinued functionality in SSAS 2014 and SSAS 2016; I forgot to check what’s deprecated and discontinued in SSAS 2017 until last week but it turns out that there are a few things that are worth knowing.

Here’s the link to the official documentation:

https://docs.microsoft.com/en-us/sql/analysis-services/analysis-services-backward-compatibility-sql2017?view=sql-analysis-services-2017

…and here are the definitions of ‘deprecated’ and ‘discontinued’:

A deprecated feature will be discontinued from the product in a future release, but is still supported and included in the current release to maintain backward compatibility. It’s recommended you discontinue using deprecated features in new and existing projects to maintain compatibility with future releases.

A discontinued feature was deprecated in an earlier release. It may continue to be included in the current release, but is no longer supported. Discontinued features may be removed entirely in a future release or update.

As far as discontinued features go it’s straightforward: everything that was deprecated in SSAS 2016 is now discontinued. For SSAS MD that means remote partitions, remote linked measure groups, dimension writeback and linked dimensions are now discontinued; I don’t think these features were ever used by more than a small number of people. Profiler is discontinued too and that’s more of a problem, given that the UI for Extended Events in SSMS remains awful and unusable for the kind of query performance tuning tasks I use Profiler for (I blogged about this issue here). The state of tooling for SSAS is already pretty bad and if Profiler stops working in the future the situation will be even worse; is it right that we have to rely on community-developed tools like DAX Studio and Analysis Services Query Analyzer, however good they are, for tasks like performance tuning?

UPDATE 30th April 2018: it turns out that Profiler was put on the ‘discontinued’ list by accident, and in fact is still only deprecated. The documentation has now been updated appropriately.

There are two important deprecated features:

  • SSAS Multidimensional data mining. Given that it has not had any new features for a long, long time now (even longer than the rest of SSAS MD) and was never very popular in the first place, I’m not surprised. However the example of Microsoft’s first, failed attempt at bringing data mining to a wider audience is interesting in the light of the company’s attempts to do the same thing with Azure Machine Learning and other services. As far as I understand it the technology was never the problem, and it was about as easy to use as it could be, so why did it fail? I’m not the right person to answer this question but I suspect the reasons include the following: Microsoft BI customers were not ready for data mining back when it was first launched; customers who did want data mining didn’t want to buy a product from Microsoft; very few Microsoft partners had the skills or experience to sell it; and finally, is it even possible to do proper data science in a user-friendly GUI with no coding?
  • SSAS Tabular models at the 1100 and 1103 compatibility levels (for SSAS 2012 and SSAS 2012 SP1). Anyone who is still running Tabular models at these compatibility levels really needs to upgrade, because they’re missing out on the great new features that have appeared in SSAS 2016 and 2017; the DMV query below is a quick way to check which databases are affected.
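
This is just a suggestion of mine rather than something from the official documentation – and it assumes the COMPATIBILITY_LEVEL column is available on the DBSCHEMA_CATALOGS rowset on your instance – but you can run a DMV query like the following from an MDX query window in SSMS to see which databases are at which compatibility level:

//Returns one row per database with its compatibility level
//(1100 and 1103 are the SSAS 2012/2012 SP1 Tabular levels mentioned above)
SELECT [CATALOG_NAME], [COMPATIBILITY_LEVEL]
FROM $SYSTEM.DBSCHEMA_CATALOGS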

Analysis Services Query Analyzer

Last week a new, free tool for analysing the performance of MDX queries on SSAS Multidimensional was released: Analysis Services Query Analyzer. You can get all the details and download it here:

https://ssasqueryanalyzer.github.io/

…and here’s a post on LinkedIn by one of the authors, Francesco De Chirico, explaining why he decided to build it:

https://www.linkedin.com/pulse/asqa-10-released-francesco-de-chirico/

I’ve played around with it a bit and I’m very impressed – it’s a really sophisticated and powerful tool, and one I’m going to spend some time learning because I’m sure it will be very useful to me. It’s an add-in for SQL Server Management Studio that captures Profiler and Perfmon data for an MDX query when you run it, on both a cold and a warm cache, and then displays the results in a variety of graphs and charts. It even has an MDX formatter built in!

[Screenshots: Analysis Services Query Analyzer graphs, charts and MDX formatter]

You can find a full list of features here, and when you download the tool it comes with very detailed documentation. Definitely a must-have for all serious SSAS Multidimensional developers.

Using the Resource Usage Profiler Event For SSAS Multidimensional Query Performance Tuning, Part 3: Aggregations And Indexes

Building aggregations in your SSAS Multidimensional cube will make your queries faster, right? While that’s true, they will only make a noticeable difference to performance if your query has Storage Engine-related problems rather than Formula Engine-related problems. What’s more, even when you do have Storage Engine-related problems there are some cases where you may find that aggregations don’t give you the kind of performance boost that you expect. In this post I’ll explain why this can happen, how you can use the Resource Usage Profiler event (as described in parts 1 and 2 of this series) to find out when this is happening, and how you can deal with the problem.

Aggregations make queries go faster for two reasons:

  1. Most importantly, they contain pre-aggregated data. For example, your fact table might contain data at the Day granularity but an aggregation might contain fact data aggregated up to the Year granularity. This means that when a query needs data at the Year granularity, SSAS does not need to read the fact table-level data stored in the partition; it can read the data it needs directly from the aggregation, without having to aggregate any data at query time.
  2. Secondly, because the size of an aggregation is usually a lot smaller than the size of the fact table-level data stored in a partition, it is much faster to read data from an aggregation.

However, regarding this second point, there’s a catch: you’ll know if you’ve read the previous posts in this series that SSAS builds indexes on fact data so it can scan that data very quickly, but most builds of SSAS do not build indexes on aggregations. I say ‘most’ because there were a few builds of SSAS that did build indexes on aggregations, but as this article explains this feature was turned off soon after it was introduced because it was found that the time spent building those indexes was not usually worth the resulting gain in query performance.

The Resource Usage Profiler event can be used to monitor the number of rows read and rows scanned when SSAS reads data from an aggregation. Taking the original test query and cube from the previous posts in this series:

SELECT 
{[Measures].[My Measure]}
ON 0
FROM
[MyCube]
WHERE([ID].[ID].&[1])

…the Resource Usage event shows that when run on a cold cache and no aggregations are built on the cube, SSAS scans 256 out of 5000 fact rows of data – a single page – and returns one row:

[Screenshot: Resource Usage event showing 256 rows scanned and one row returned]

However, if this cube has an aggregation built on the ID attribute of the ID dimension, at the same granularity as the fact table:

[Screenshot: aggregation built on the ID attribute of the ID dimension]

…when the query runs and SSAS reads data from this aggregation rather than the fact data, the Resource Usage event returns the following:

[Screenshot: Resource Usage event output when the query reads from the aggregation]

Notice that the ROWS_SCANNED value is now 5000. This is because the aggregation has no indexes built on it so SSAS has to scan all the rows in the aggregation. Reading data from an aggregation at the same granularity as the fact data is therefore a lot less efficient than reading data from the original fact data.

Of course, since aggregations are usually much smaller than the original fact data, the lack of indexes is not that important because scanning all the data is going to be very quick anyway. However, on very large cubes you may need to build some very large aggregations, and you may find that even when your queries hit these aggregations performance is still bad because of this lack of indexes. If you see this happening, and can see the ROWS_SCANNED value in the Resource Usage event reporting very high values, then it might be a good idea to enable the building of indexes on aggregations.

You can do this by changing the values of the AggIndexBuildEnabled and AggIndexBuildThreshold server properties in the msmdsrv.ini file. Setting AggIndexBuildEnabled to 1 allows SSAS to build indexes on aggregations. It’s not necessary to build indexes on all aggregations though: you can specify that only aggregations larger than a certain number of rows have indexes built on them using the AggIndexBuildThreshold property; a rough sketch of what these entries look like is shown below. The only public documentation for these properties is given in two articles here and here, and I strongly recommend you read them so that you understand the implications of doing this for your processing times. You should only consider changing these properties if you are a very experienced SSAS developer and you monitor the effects carefully – I’m not even sure whether changing them is supported by Microsoft.
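
To be clear, this is only an illustration: the property names are the ones discussed above, but the exact position of these elements inside msmdsrv.ini and the threshold value shown here are assumptions on my part – check the two articles linked above (and back up the file) before changing anything:

<OLAP>
  <Process>
    <!-- Assumed location in the file: 1 allows SSAS to build indexes on aggregations -->
    <AggIndexBuildEnabled>1</AggIndexBuildEnabled>
    <!-- Only build indexes on aggregations with more rows than this (illustrative value) -->
    <AggIndexBuildThreshold>4096</AggIndexBuildThreshold>
  </Process>
</OLAP>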

Using the Resource Usage Profiler Event For SSAS Multidimensional Query Performance Tuning, Part 2: Many-To-Many Relationships And Non-Materialised Referenced Relationships

You probably know that using many-to-many relationships or non-materialised referenced relationships can be bad for Analysis Services Multidimensional query performance. How can you measure their impact, though? In the first post in this series I showed how the Resource Usage Profiler event could be used to monitor Storage Engine activity; in this post I’ll use it to show the effect of these features on the amount of Storage Engine activity that takes place during query execution.

Many-to-many relationships

In my previous post I built a very simple SSAS cube with one measure group called Fact from a table with 5000 rows, then built a dimension called ID from that same table with 5000 members on its only hierarchy. For this post I added a second measure group called Fact Bridge based on the same table, added a new role-playing copy of the existing dimension called M2M ID, and then created a many-to-many relationship from this new dimension to the original measure group via the new measure group.

[Screenshot: cube design with the Fact and Fact Bridge measure groups and the M2M ID dimension]

Because all the dimensions and measure groups are built from the same table, one member on the M2M ID dimension is linked to just one member on the ID dimension, so selecting a member on the M2M ID dimension will give the same result as selecting the member with the same key on the ID dimension. Even though SSAS sees a many-to-many relationship, in the data it’s a one-to-one relationship.

As I showed in my last post, selecting one member from the ID dimension in a query like the following:

select 
{[Measures].[My Measure]} 
on 0,
{[ID].[ID].&[1]}
on 1
from
[M2M Test]

[Screenshot: query results]

…results in a single page of 256 rows being read and a single row being returned from the cube, as shown by the Resource Usage event in a Profiler trace:

[Screenshot: Resource Usage event in the Profiler trace]

If the query is changed to use the M2M ID dimension instead:

select 
{[Measures].[My Measure]} 
on 0,
{[M2M ID].[ID].&[1]}
on 1
from
[M2M Test]

[Screenshot: query results]

…the same result is returned, because the member [M2M ID].[ID].&[1] is associated with one member from the [ID].[ID] hierarchy, the member [ID].[ID].&[1], via the intermediate measure group. However the Resource Usage event shows something very different:

[Screenshot: Resource Usage event for the query on the M2M ID dimension]

The ROWS_SCANNED value has gone from 256 to 5256, and ROWS_RETURNED has gone from 1 to 5003! Why? Part of the explanation is that we now have two measure groups that must be scanned, and the Resource Usage statistics are totals for all Storage Engine activity across all measure groups. In this query the Fact Bridge measure group is scanned first to resolve the many-to-many relationship between the M2M ID and ID dimensions, and then the Fact measure group is scanned to get the value for the measure My Measure. The Fact Bridge measure group only accounts for 256 rows scanned and 1 row returned, though; the remaining 5000 rows scanned are from the main Fact measure group. The problem here is that SSAS does not translate the filter on the M2M ID dimension into a filter on the ID dimension (this is a limitation of the way SSAS handles many-to-many relationships), so all the rows in the main Fact measure group get scanned by this query.

This explains something that I have blogged about before here, namely that if you partition your measure group by a dimension that is used in a many-to-many relationship you’ll see that all partitions are scanned and not just the partitions you expect to be scanned. The Resource Usage event shows that even when you don’t see unexpected partition scans happening, using a many-to-many relationship in a query can result in a lot of extra Storage Engine activity and therefore potentially worse query performance.

Non-materialised referenced relationships

Something similar happens when you use non-materialised referenced relationships (although materialised referenced relationships are OK). To test this I created another variation on my original cube, with just one measure group and the ID dimension as before, but now with a new, role-playing instance of the ID dimension called Ref ID that joins to the measure group through the ID dimension using a non-materialised referenced relationship.

[Screenshots: cube design showing the non-materialised referenced relationship]

The following query returns the same result as the two queries above:

select {[Measures].[My Measure]} on 0,
{[Ref ID].[ID].&[1]}
on 1
from
[RefDimTest]

[Screenshot: query results]

…but again, the Resource Usage event shows the entire measure group is being scanned when this query runs:

[Screenshot: Resource Usage event showing the whole measure group being scanned]

I’m not much of a fan of referenced relationships anyway – you can usually get rid of them by redesigning your SSAS dimensions or your underlying dimensional model – so this is one more reason not to use them.

Using the Resource Usage Profiler Event For SSAS Multidimensional Query Performance Tuning, Part 1: Rows Scanned And Rows Returned

If you’re performance tuning SSAS Multidimensional Storage Engine issues, the Resource Usage Profiler event can provide a lot of useful information about what’s going on behind the scenes when you run a query. This is something I have blogged about in the past (and it will be useful to read this post before carrying on) but recently I’ve done some more research into this area and found out a lot more about what this event tells you.

For my testing I created a very simple cube from a single fact table. The table contained 5000 rows and two columns: a dimension key column containing the values 1 to 5000, and a measure column that always contained the value 1 (a sketch of a script that generates this kind of table is shown below). From this I built a single measure group with a single measure called My Measure, and a single dimension called ID built from the dimension key column, with 5000 members on it.
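
Something along these lines would do it – note that the table and column names (FactTest, ID, MeasureValue) are my own and aren’t taken from the original cube:

CREATE TABLE dbo.FactTest (
ID INT NOT NULL,            -- dimension key, values 1 to 5000
MeasureValue INT NOT NULL   -- always contains the value 1
);

INSERT INTO dbo.FactTest (ID, MeasureValue)
SELECT Numbers.n, 1
FROM (
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
) AS Numbers
WHERE Numbers.n <= 5000;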

Consider the following MDX query:

SELECT 
{[Measures].[My Measure]}
ON 0
FROM
[MyCube]
WHERE([ID].[ID].&[1])

Here’s the output:

[Screenshot: query results]

In this query the WHERE clause filters the output to one member from the ID dimension, which in turn returns data from one row in the underlying fact table, and so My Measure returns 1. If the query is run on a cold cache, the Resource Usage Profiler event returns the following:

[Screenshot: Resource Usage event showing ROWS_RETURNED 1 and ROWS_SCANNED 256]

The ROWS_RETURNED value here is 1, which is what you might expect – the query results show data from one row in the underlying fact table. The ROWS_SCANNED value is 256 though. Why? Chapter 20 of the book “Microsoft SQL Server 2008 Analysis Services Unleashed” has a lot of detail about how SSAS MD stores data on disk, but the important point here is that the data in a partition is stored on disk in segments, with each segment made up of pages, and when a query is run SSAS will scan all the pages that it thinks contain data. The ever-reliable Akshai Mirchandani of the SSAS dev team helped me with the remaining information I needed:

  • There are 65536 rows of data per segment
  • There are 256 pages per segment
  • There are therefore, at most, 256 rows per page

So in this case ROWS_SCANNED shows 256 because one complete page was scanned.

Modifying the query to slice on two members from ID like so:

SELECT 
{[Measures].[My Measure]}
ON 0
FROM
[MyCube]
WHERE([ID].[ID].&[1]:[ID].[ID].&[2])

[Screenshot: query results]

…results in a ROWS_RETURNED of 2 and a ROWS_SCANNED that is still 256, because the two rows must be stored in the same page:

[Screenshot: Resource Usage event showing ROWS_RETURNED 2 and ROWS_SCANNED 256]

…while asking for 257 members from ID in the WHERE clause like so:

SELECT 
{[Measures].[My Measure]}
ON 0
FROM
[MyCube]
WHERE([ID].[ID].&[1]:[ID].[ID].&[257])

…results in a ROWS_RETURNED of 257 and a ROWS_SCANNED of 512 – clearly two pages are now being scanned to get the data needed for the query.

[Screenshot: Resource Usage event output]

Finally, the query:

SELECT 
{[Measures].[My Measure]}
ON 0
FROM
[MyCube]
WHERE([ID].[ID].&[5000])

…returns a ROWS_SCANNED of 136:

[Screenshot: Resource Usage event showing ROWS_SCANNED 136]

This must be because the final page, which doesn’t contain the full 256 rows, is scanned. 5000-136=4864, and 4864/256=19, so there must be 20 pages of data: 19 pages of 256 rows and one final page of 136 rows.

I don’t think it’s worth getting too hung up on the exact values that ROWS_SCANNED and ROWS_RETURNED return, especially given that they are totals for all Storage Engine activity for all measure groups for the whole query, but knowing that they tell you roughly how much work is being done by the Storage Engine means that you can use them to watch for warning signs that something isn’t working properly when you’re performance tuning queries. In subsequent parts of this series I’ll show some practical examples of this.

Drillthrough On Calculated Members In SSAS MD 2017 Using DAX Expressions, Part 2

If you read part 1 of this series you probably already have a good idea of how I’m going to use DAX queries in actions to implement drillthrough on calculated members. However, there is one last major problem to solve – and no 100% fool-proof way of solving it.

That problem is that there is a maximum length of (as far as I can tell) 32768 characters for the DAX query that is generated by the action. If you exceed that limit you’ll see an error in Excel when you drill through:

[Screenshot: error message shown in Excel]

This is the result of the generated DAX query being truncated before Excel tries to run it. As you can imagine, if you have large dimensions and you’re generating a DAX query with thousands of key values in an IN expression, it’s very easy to hit the maximum query length. What can you do to make this less likely?
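
To give you an idea of why the limit is so easy to hit, here’s a sketch of what the generated DAX query ends up looking like. The table and column names match the Adventure Works-based example later in this post, but the key values in the IN list are purely illustrative – in real life there could be thousands of them, one per selected member:

EVALUATE
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order',
"Sales Order Number", 'Sales Order'[Sales Order Number],
"Sales Order Line Number", 'Sales Order'[Sales Order Line Number],
"Sales Amount", [Sales Amount]),
-- every selected key on the Date dimension ends up listed here
'Date'[Date.Key0] IN {20170101, 20170102, 20170103}),
[Sales Amount]>0)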

Probably the most effective method is to check whether the IN clause will contain all of the possible key values on the dimension (most likely because no attributes from the dimension are used in the query), and if so don’t bother including that dimension in the DAX query. Here’s an example of how this can be done:

IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
",'Date'[Date.Key0] IN {
" +
GENERATE(EXISTING [Date].[Date].[Date].MEMBERS , 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
")

Another option is to use NonEmpty() to remove any members from the drillthrough that have no data, like so:

IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
",'Date'[Date.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS, [Measures].[Sales Amount]) 
, [Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
")

However, if these techniques (and there may be others) still don’t reduce the query to an acceptable length, you will want to fail as gracefully as possible and show a more helpful error message than the one above. You can do this with the DAX Error() function, like so:

IIF(
COUNT(EXISTING [Customer].[Customer].[Customer].MEMBERS) = 
COUNT([Customer].[Customer].[Customer].MEMBERS),
"",
",'Customer'[Customer.Key0] IN {
" +
IIF(COUNT(
{EXISTING 
NONEMPTY(
[Customer].[Customer].[Customer].MEMBERS, 
[Measures].[Sales Amount])} 
AS CustomerList)>7000,
"ERROR(
""Please reduce your selection on the Customer dimension
 or remove it from your PivotTable"")",
GENERATE(
CustomerList
, [Customer].[Customer].CURRENTMEMBER.PROPERTIES("KEY"), ",")
)
+ "}
")

In this example, if the resulting DAX IN expression for the Customer dimension will have more than 7000 keys in it, the entire DAX query returns a custom error message instead:

[Screenshot: custom error message shown in Excel]

Once again this is a far from perfect solution – I would have liked to test the total number of characters in the query, but if you do that you have to write the expression twice, once in the first parameter of IIF() and once in one of the results, and that would be horrible. My gut feeling is that you should only use this technique on dimensions with a large number of members on the key attribute.

Putting this all together, for a simple cube based on data from Adventure Works with three dimensions (Date, Product and Customer) here’s what a complete Action Expression for a regular measure might look like:

"EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
""Sales Order Number"",
'Sales Order'[Sales Order Number],
""Sales Order Line Number"",
'Sales Order'[Sales Order Line Number],
""Sales Amount"",
[Sales Amount])
" +
IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
",'Date'[Date.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS, 
[Measures].[Sales Amount]) 
, [Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
") +
IIF(
COUNT(EXISTING [Customer].[Customer].[Customer].MEMBERS) = 
COUNT([Customer].[Customer].[Customer].MEMBERS),
"",
",'Customer'[Customer.Key0] IN {
" +
IIF(
COUNT(
{EXISTING NONEMPTY([Customer].[Customer].[Customer].MEMBERS, 
[Measures].[Sales Amount])} AS CustomerList)>7000,
"ERROR(""Please reduce your selection on the Customer dimension 
or remove it from your PivotTable"")",
GENERATE(
CustomerList
, [Customer].[Customer].CURRENTMEMBER.PROPERTIES("KEY"), ",")
)
+ "}
") +
IIF(
COUNT(EXISTING [Product].[Product].[Product].MEMBERS) = 
COUNT([Product].[Product].[Product].MEMBERS),
"",
",'Product'[Product.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Product].[Product].[Product].MEMBERS, 
[Measures].[Sales Amount]) 
, [Product].[Product].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
") +
"),
[Sales Amount]>0)"


What about a calculated measure though? Assuming that you have a time utility/date tool/shell dimension, you can use a Condition expression on the action above to make sure it only gets executed for the member on the time utility dimension that contains the non-calculated values, in this case called [Actual Value]:

([Measures].CURRENTMEMBER IS [Measures].[Sales Amount]) AND
([Date Calculations].[Date Calculations].CURRENTMEMBER IS 
[Date Calculations].[Date Calculations].&[Actual Value])

[Screenshot]

Now, let’s say there is a calculated member on the time utility dimension that contains a year-to-date calculation with the following definition:

CREATE MEMBER 
CURRENTCUBE.[Date Calculations].[Date Calculations].[Year-To-Date] 
AS
AGGREGATE(
PERIODSTODATE(
[Date].[Calendar].[Year],
[Date].[Calendar].CURRENTMEMBER),
[Date Calculations].[Date Calculations].&[Actual Value]);

You can create a new action that has a Condition expression as follows that restricts it to the year-to-date calculation:

([Measures].CURRENTMEMBER IS [Measures].[Sales Amount]) AND
([Date Calculations].[Date Calculations].CURRENTMEMBER IS 
[Date Calculations].[Date Calculations].[Year-To-Date])

Now, the final problem to solve is to generate a DAX query that returns all of the Sales Orders from the beginning of the current year up to and including the selected date – the Sales Orders that a user would expect to see when they drilled through on the year-to-date calculated member above.

Here’s a modified MDX expression for the Date dimension that returns a DAX expression that finds all of the dates associated with the current selection on the Date dimension, finds the maximum date, then filters the drillthrough DAX query by all dates from the beginning of the current calendar year up to and including the maximum date:

IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
", VAR FilteredDates = FILTER('Date', 'Date'[Date.Key0] IN {
" +
GENERATE(
EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS,[Measures].[Sales Amount]) , 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "})
VAR MaxDate = MAXX(FilteredDates, 'Date'[Date.Key0])
VAR MaxYear = MAXX(FilteredDates, 'Date'[Year])
RETURN 
FILTER('Date', 'Date'[Date.Key0]<=MaxDate && 'Date'[Year]=MaxYear)
")

Here’s the complete MDX expression for the year-to-date drillthrough:

"EVALUATE 
FILTER(
CALCULATETABLE(
SELECTCOLUMNS(
'Sales Order', 
""Sales Order Number"",
'Sales Order'[Sales Order Number],
""Sales Order Line Number"",
'Sales Order'[Sales Order Line Number],
""Sales Amount"",
[Sales Amount])
" +
IIF(
COUNT(EXISTING [Date].[Date].[Date].MEMBERS) = 
COUNT([Date].[Date].[Date].MEMBERS),
"",
", VAR FilteredDates = FILTER('Date', 'Date'[Date.Key0] IN {
" +
GENERATE(
EXISTING 
NONEMPTY([Date].[Date].[Date].MEMBERS,[Measures].[Sales Amount]) , 
[Date].[Date].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "})
VAR MaxDate = MAXX(FilteredDates, 'Date'[Date.Key0])
VAR MaxYear = MAXX(FilteredDates, 'Date'[Year])
RETURN 
FILTER('Date', 'Date'[Date.Key0]<=MaxDate && 'Date'[Year]=MaxYear)
") +
IIF(
COUNT(EXISTING [Customer].[Customer].[Customer].MEMBERS) = 
COUNT([Customer].[Customer].[Customer].MEMBERS),
"",
",'Customer'[Customer.Key0] IN {
" +
IIF(COUNT({
EXISTING 
NONEMPTY([Customer].[Customer].[Customer].MEMBERS, 
[Measures].[Sales Amount])} AS CustomerList)>7000,
"ERROR(""Please reduce your selection on the Customer dimension or 
remove it from your PivotTable"")",
GENERATE(
CustomerList
, [Customer].[Customer].CURRENTMEMBER.PROPERTIES("KEY"), ",")
)
+ "}
") +
IIF(
COUNT(EXISTING [Product].[Product].[Product].MEMBERS) = 
COUNT([Product].[Product].[Product].MEMBERS),
"",
",'Product'[Product.Key0] IN {
" +
GENERATE(EXISTING 
NONEMPTY([Product].[Product].[Product].MEMBERS, 
[Measures].[Sales Amount]) 
, [Product].[Product].CURRENTMEMBER.PROPERTIES("KEY"), ",")
+ "}
") +
"),
[Sales Amount]>0)"

I warned you it was going to be complicated, didn’t I? You can download an SSAS MD 2017 .abf backup file containing the sample database and the two actions here.

SSAS Multidimensional HOLAP Storage: Even More Useless Than You Might Think

In almost 20 years of using Analysis Services I have never, ever used HOLAP storage. However the other week I was with a customer that was using ROLAP storage on top of Exasol and while query performance was generally good, I wondered whether building an aggregation or two in SSAS might help query performance in some cases. Surely HOLAP could be useful here? Sadly not.

What I hadn’t realised was that when you use HOLAP storage and process a partition, SSAS generates exactly the same fact table-level SQL query that is used to process a MOLAP partition. It then uses this data to build any aggregations you have defined, and after that throws away the data it has read, leaving only the aggregations stored in MOLAP mode. Therefore you get exactly the same slow processing performance as pure MOLAP storage, and worse query performance! It even executes the SQL query when there are no aggregations defined. I had assumed SSAS would generate one SQL query for each aggregation and get just the summarised data needed by that aggregation, but I was wrong. This means that for the kind of scenarios where ROLAP on a fast relational database is becoming more popular (for example when working with large data volumes and/or real-time data), HOLAP is not a viable option.
