Building an Ad-Hoc SSAS Cube Browser in SSRS

The post I did on implementing SSAS drilldown in SSRS back in February has been one of the most popular I’ve written this year. Its success immediately made me think of taking the idea one step further and building a full, ad-hoc cube browser in SSRS – something I’ve had a crack at doing several times in the past, but which has proved to be pretty much impossible with SSRS out of the box. However, I knew that with the extra flexibility that Intelligencia Query gives (full disclosure – I have a financial stake in this product; see here for some background on my involvement with it) it was going to be feasible… the next problem was that I didn’t have the time to do the work.

Luckily, my friend Andrew Wiles (who owns the company that develops and sells Intelligencia Query) did have the time to do this and blogged about it here:
http://andrewwiles.spaces.live.com/blog/cns!43141EE7B38A8A7A!566.entry

He also uploaded a video demo of it to YouTube:

It’s now available as a sample app for the SQL 2008 version of IQ (it relies on DMVs, so it won’t work with 2005), and he’s made several improvements to it since. I thought I’d mention it here because it’s not only a fun demo, it also shows just how flexible the combination of SSRS and IQ is: we traditionally think of SSRS as being suitable only for static or semi-static reports, but here it is working as a basic ad-hoc query tool. OK, it’s not exactly elegant that you have to rerun the report every time you click something, and of course the only form of interaction possible is clicking on links, but hopefully you get the point!

Proclarity Migration Roadmap (or lack thereof)

For those of you who commented on my recent post asking what the future held for existing Proclarity users, some interesting news. My fellow SQL BI MVP Thomas Ivarsson asked whether there were any plans for helping Proclarity users migrate to PerformancePoint and got this reply from Alyson Powell Erwin:

http://social.technet.microsoft.com/Forums/en-US/ppsmonitoringandanalytics/thread/b4e9bd35-62ce-4ca5-bd1f-05133b30bcc9

Here’s the text:

There will not yet be a migration from ProClarity 6.3 to PerformancePoint Services for SharePoint 2010.  Customers can continue to use ProClarity throughout its current supported lifecycle date of July 2012 for mainstream and July 2017 for extended.  We are still working on the roadmap for ProClarity but it is likely that you will not see a migration path until the O15 timeframe. 

So, in effect, three and a half years after Microsoft first announced they were buying Proclarity, they still have no roadmap for migrating existing Proclarity customers onto a new platform. I’m sorry, but this is just not good enough; I don’t think they could have come up with a strategy more damaging to Microsoft BI if they had called up Larry Ellison and asked him to contribute some ideas. Development on Proclarity finished almost three years ago, and they’re saying that there probably won’t be a migration story until Office 15 – which is likely to be three or four years in the future! That’s effectively telling some of the most serious, committed Microsoft BI customers to bin their existing solutions and start again from scratch, and I can’t tell you how angry that makes me feel. It seems to me that Microsoft don’t have a BI strategy any more, they have a sell-more-Office (and especially MOSS) strategy. That’s fair enough, Microsoft have to make money somehow, but there’s no point expecting SQL Server BI to drive sales of Office in the future if they’re busily driving away the existing customer and partner base. It’s a classic case of killing the goose that laid the golden egg.

Here’s what Microsoft should do:

  • Round up whatever members of the Proclarity dev team are still at Microsoft and get them to work on a new stopgap release of Proclarity. It doesn’t need to add much new functionality, but it does need to update the UI and make it look a bit less like a VB6 app circa 1998.
  • Either stop pretending that Excel will meet the needs of power users and let the Proclarity fat client live on for a few years longer, or add functionality to Excel that will bring it up to the required standard. Richard Lees has just published a good list of what needs to be done here (I can think of a few more myself, such as support for ragged hierarchies that use HideMemberIf), and while some of these issues are addressed in Excel 2010, not all are. Excel 2010 is just bringing Excel up to the levels of functionality that most third-party SSAS clients had in 2005. And again, customers can’t be expected to wait until Office 15.
  • Publish – and commit to – a clear roadmap showing how existing Proclarity customers can be migrated to the new Office BI platform. At the moment most Proclarity customers feel completely abandoned and have no idea what to do (as the comments in my recent blog post demonstrate).

In the meantime, if I were one of the remaining third-party SSAS tool vendors I would be wondering whether it was possible to create a wizard that would migrate existing Proclarity briefing books onto my own platform. I would imagine it might generate a few leads…

SQL Server Day Belgium

I’m pleased to announce I’ll be speaking at the Belgian SQL Server User Group’s one-day conference, SQL Server Day 2009, which will be taking place in Mechelen on December 3rd 2009. You can find out more about the agenda and register here:

http://sqlserverday.be/

It looks like it’s going to be a great day, so if you’re in Belgium (or nearby) why not come along?

Using Dimensions as Parameters to Calculations and Sets

One of the advantages of using SSRS as a client for SSAS is the control you have over your MDX, and one of the advantages of control over your MDX is the ability to parameterise not only your queries but also any calculations defined in those queries. It would be great if we could parameterise calculations defined in the MDX Script (I have a Connect item open on this, in fact – please vote!) but until we can, we can do something almost as good: we can use a dimension hierarchy to store a pre-defined range of parameter values and then use the CurrentMember on this hierarchy to pass one of these values to a calculation or dynamic named set. Here’s how.

First of all, we need to create a dimension to hold these parameter values, something like a Time Utility or Shell dimension. We can create the source data for this easily in the DSV using a named query and a SQL SELECT statement as follows:

SELECT 1 AS ParamID, '5%' AS ParamDesc, 1.05 AS ParamValue
UNION ALL
SELECT 2 AS ParamID, '10%' AS ParamDesc, 1.1 AS ParamValue
UNION ALL
SELECT 3 AS ParamID, '15%' AS ParamDesc, 1.15 AS ParamValue
UNION ALL
SELECT 4 AS ParamID, '20%' AS ParamDesc, 1.20 AS ParamValue

We can then build an SSAS dimension – I called it [Parameter Values] – with a single attribute, its KeyColumn property using the ParamID column above, its NameColumn property using the ParamDesc column, and its ValueColumn property using the ParamValue column. This gives us a simple attribute hierarchy with four members on it below the All Member. We then add the new dimension to the Adventure Works cube; it doesn’t need to have any relationship with any measure group.
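As a quick sanity check after processing, you can query the MemberValue function directly to confirm that the ValueColumn property came through as expected. This is just a sketch – it assumes the dimension and attribute are both named [Parameter Values] as above, and the measure name [Param Value] is my own invention:

```mdx
WITH MEMBER [Measures].[Param Value] AS
    [Parameter Values].[Parameter Values].CURRENTMEMBER.MEMBERVALUE
SELECT {[Measures].[Param Value]} ON 0,
[Parameter Values].[Parameter Values].ALLMEMBERS ON 1
FROM [Adventure Works]
```

Each of the four percentage members should return the corresponding numeric value from the ParamValue column.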

Now let’s use it. Imagine we want to see what the value of Internet Sales Amount would be if it grew by 5%, 10%, 15% or 20% – how would we do it? Well, we could say that when the All Member on the Parameter Values hierarchy is selected we see the real value of Internet Sales Amount, but when one of the other members is selected we increase the value of Internet Sales Amount by the percentage associated with the selected member. The MDX required looks like this:

SCOPE([Measures].[Internet Sales Amount]);
    SCOPE([Parameter Values].[Parameter Values].[Parameter Values].MEMBERS);
        THIS =
            ([Measures].[Internet Sales Amount], [Parameter Values].[Parameter Values].[All])
            *
            [Parameter Values].[Parameter Values].CURRENTMEMBER.MEMBERVALUE;
        FORMAT_STRING(THIS) = 'CURRENCY';
    END SCOPE;
END SCOPE;

What I’m doing here is scoping on Internet Sales Amount and all of the members except the All Member on my new dimension, so that my calculation only happens when a selection is made on the Parameter Values hierarchy. In this scope I’m then multiplying the value of Internet Sales Amount at the All Member by the value returned by the MemberValue function for the CurrentMember on [Parameter Values].[Parameter Values] – which is the value from the column I used in the ValueColumn property of the attribute. Even though I’m scoping on a real measure, the calculation doesn’t aggregate up to the All Member because this dimension has no relationship with the measure group that Internet Sales Amount is from (or indeed any other measure group).

Here’s a query that shows the results:

SELECT [Measures].[Internet Sales Amount] ON 0,
[Parameter Values].[Parameter Values].MEMBERS ON 1
FROM [Adventure Works]

And here’s the output, showing Internet Sales Amount and underneath it the value increased by 5%, 10%, 15% and 20%:

[image: query results showing Internet Sales Amount for each Parameter Values member]

So we’ve got a (sort of) parameterised calculation. We can use a similar approach with dynamic named sets too – here’s an example dynamic named set definition that uses the same dimension to control the percentage passed into the TOPPERCENT function:

CREATE DYNAMIC SET MYSET AS
    IIF([Parameter Values].[Parameter Values].CURRENTMEMBER IS
        [Parameter Values].[Parameter Values].[All],
            [Customer].[Customer].[Customer].MEMBERS,
            TOPPERCENT(
                [Customer].[Customer].[Customer].MEMBERS
                ,([Parameter Values].[Parameter Values].CURRENTMEMBER.MEMBERVALUE - 1) * 100
            , [Measures].[Internet Sales Amount])
        );

The set returns all Customers if no selection is made on Parameter Values. However if a selection is made on Parameter Values then the selection drives the number of Customers that the set returns. So the query:

SELECT [Measures].[Internet Sales Amount] ON 0,
MYSET ON 1
FROM [Adventure Works]

…returns all 18485 Customers, whereas:

SELECT [Measures].[Internet Sales Amount] ON 0,
MYSET ON 1
FROM [Adventure Works]
WHERE([Parameter Values].[Parameter Values].&[1])

…returns the top 5% of Customers by Internet Sales Amount (I’m taking the MemberValue, subtracting 1 and multiplying by 100 so that TOPPERCENT receives the value 5 rather than the raw MemberValue of 1.05), which results in the top 164 Customers being returned. Slicing by the other members on Parameter Values will give me the top 10%, 15% and 20% of Customers by Internet Sales Amount.

Farewell to the Excel 2003 addin and the BI Accelerator

Reading the SQL Server technical rollup mail I get sent as an MVP (the same information’s also available at http://blogs.technet.com/trm/archive/2009/10/01/october-2009-technology-rollup-mail-sql-server.aspx) I noticed that two old products have just been retired: the Excel 2003 Analysis Services addin, and the BI Accelerator. A little more information on this is available on the download pages here:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&familyid=dae82128-9f21-475d-88a4-4b6e6c069ff0
http://www.microsoft.com/downloads/details.aspx?displaylang=en&familyid=a370fbc9-98b1-4f5c-b09b-6f1bf08e9292

I quote from the Excel addin page:
“The Excel Add-in for SQL Server Analysis Services has been removed to avoid customer confusion about support for this component. As noted in the details that accompanied the release of this product, Microsoft does not provide any support for this add-in and has no plans to release future versions. Newer versions of Excel include most of the functionality that is provided by this add-in; these newer versions are supported according to the Microsoft Product Lifecycle.”

To be honest I’ve not even looked at either of these products for years, but at least in the case of the Excel addin I wonder how many people are still using it? If you have no choice but to use Excel 2003 (and I’m sure a fair proportion of Excel users still are) then it was an invaluable upgrade for Excel 2003’s built-in SSAS support. More to the point, the BI Survey 8 (which collected data from mid 2008) had 21.8% of Analysis Services users claiming to use it, more than double the number that were using Panorama Novaview and only 5% less than were using Proclarity. At first that seemed an improbably high number to me, but on reflection I think it could be more or less accurate: as BI consultants and developers we tend only to see ‘new’ BI projects, but what about all those projects we delivered 4+ years ago and haven’t seen since? They’re chugging along happily, ‘just working’ with no obvious need to upgrade, and their users are the people who are likely to be using the Excel addin. They won’t stop using it because of this announcement, but it might start them thinking about what they should upgrade to – probably Excel 2007, but maybe something else.

And Proclarity users are in the same situation: they have an ageing tool that is no longer supported, and need to think about upgrading to something. But what? At least with the Excel addin there’s Excel 2007 but in the case of Proclarity there’s no obvious answer – it’s not just that PerformancePoint/Excel Services/SSRS don’t have the same functionality, but if you’ve got several hundred briefing books your users aren’t going to be happy about rebuilding them in some new tool. I don’t want to go off on yet another rant about Microsoft’s idiotic client tools strategy, but I’m worried that we’ll start to see a series of migrations away from the Microsoft BI platform as a result of this issue.

Building a Better Cache-Warmer, Part 1: the Storage Engine Cache

SSAS caching is an interesting subject: I’ve blogged numerous times about when caching can and can’t take place, and one of the most popular posts I’ve ever written was on how to build a cache-warmer in SSIS. However that last post was written a while ago, before the CREATE CACHE statement for warming the Storage Engine cache was reintroduced in SSAS 2005 SP2, and I’ve not been satisfied with it for a number of other reasons too. As a result I’ve started to do more research into the issue and, with the help of Mosha, Akshai Mirchandani and chapter 29 of the indispensable “Microsoft SQL Server 2008 Analysis Services Unleashed”, I’ve been collecting my thoughts on how to build a better cache-warmer with a view to blogging about it again. This first post will deal with the relatively easy bit – warming the SE cache; when I have time I’ll write about the more difficult but more useful task of warming the Formula Engine cache, and hopefully by then I’ll be in a position to create the definitive SSIS cache-warmer package and blog about that too.

Before I go on I need to mention that one other reason that has made me want to revisit this subject is that, at his MDX Deep Dive session at PASS last year, Mosha asserted that cache-warming was a Bad Thing. This intrigued me, since it seemed like such an obviously Good Thing to do, but of course it turns out he had good reason for saying what he did… SSAS caching is a lot more complex than it first appears and cache warming done badly can be counter-productive.

I’m also tempted to say that if you don’t need to warm the FE cache (ie you don’t have any complex calculations that take a long time to run) then there’s not much point in warming the SE cache. In my experience, if you have the right aggregations built then you can get excellent response times even on a cold cache. Of course to build aggregations you need to know the granularities of data that your users’ queries request, but you need the same information to warm the cache. The only difference is that you can make your cache-warmer much more dynamic than your aggregation design: if your users’ query patterns change regularly it would be difficult to keep your aggregation design in sync with them, whereas with a cache-warmer you can record the most recent queries that have been run and warm the cache accordingly.

There are a number of things to bear in mind when warming the SE cache:

  • We must not try to cache too much data. If we go over the LowMemoryLimit data will start to be evicted from the cache, which means we get no benefit from that point on; if we go over the TotalMemoryLimit it’s likely that just about the entire cache will be emptied. We also have to remember that the cache will grow as a result of normal usage during the day, so we should plan to use only a fraction of the available memory when warming the cache.
  • We want to avoid cache fragmentation. The SE cache uses a data structure called the data cache registry to store its data, and when the FE requests a (potentially filtered) subcube of data from the SE, that data subsequently gets stored in the SE cache as an entry in the data cache registry. As a result it is possible that instead of having one entry in the SE cache that contains a lot of data, the same data could be spread across multiple smaller entries. This is bad because:
    • It means that lookup in the cache is slower as there are more entries to search
    • The SE doesn’t bother to continue searching in the data cache registry after it has compared the current request with more than 1000 subcubes, and will then go to disk even if the required data is in the cache. If the cache is filled with lots of small entries then this will happen more often.
    • If the FE requests the slice {A,B} from a hierarchy in a subcube and the data for those two members A and B are held in different entries in the cache, then it will not be found. It will only be matched if a single entry in the cache contains data for both A and B.
  • Data in the SE cache can be aggregated to answer a request – but only if the data itself is aggregatable, and this is not the case when there are distinct count measures in the measure group or many-to-many relationships present. Assuming the cache is aggregatable, though, we should perform SE cache warming at lower rather than higher granularities – although if we go too far down we’ll end up caching data we don’t need, using up too much memory and finding that the cache warming itself takes too long.

Of course all of the above is subject to change in later versions, but I understand that this is the current situation with SSAS 2008 SP1.

How can we find out what data to use to warm the SE cache then? The MDX queries that have been run by users aren’t much help, but we can see what subcubes are being requested when a query is run by looking at the Query Subcube and Query Subcube Verbose events in Profiler; we could therefore run a Profiler trace to capture this data, but in fact the easiest option is to just use the Usage Based Optimization wizard’s Query Log since it contains the same data as Query Subcube. The following blog entry from James Snape shows how to interpret this information:
http://www.jamessnape.me.uk/blog/2006/11/09/SubcubeQueries.aspx 
A truly sophisticated SE cache warmer would be able to take this data, along with data about attribute relationships, and work out what the most recent, frequent (but slowest) subcube requests were, then try to find some lowest (but not too low) common granularities to use for cache warming, perhaps also slicing so that only the most recent time periods were used as well.

Last of all, how do we actually go about loading data into the SE cache? We could run MDX queries but this is a bit of a risky approach since we may not know what calculations are present and so we can’t be sure that our query requests the right data; the best option is to use the CREATE CACHE statement that is described here:
http://blogs.msdn.com/sqlcat/archive/2007/04/26/how-to-warm-up-the-analysis-services-data-cache-using-create-cache-statement.aspx
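For reference, a CREATE CACHE statement looks something like the following. This is only a sketch: the granularity shown here (Internet Sales Amount by Calendar Year and Product Category) is purely illustrative, and in practice you’d choose whatever granularities your Query Log analysis suggests:

```mdx
CREATE CACHE FOR [Adventure Works] AS
'([Measures].[Internet Sales Amount],
  [Date].[Calendar Year].[Calendar Year].MEMBERS,
  [Product].[Category].[Category].MEMBERS)'
```

Running this forces the SE to read the specified subcube from disk (or from aggregations) and hold it in the data cache registry, so subsequent queries at or above that granularity can be answered from cache.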

Slow metadata

There’s an interesting thread on the Analysis Services MSDN Forum that I’ve been following for a while now (it’s over 40 posts long at the time of writing) that gives a good explanation of why metadata access on very large or complex cubes can be so painfully slow. Here’s the link:

http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/thread/679b510f-3d42-4eed-ba2c-df2a2bd39a00

In summary, it turns out that the more files there are on disk in your SSAS database folder, the slower metadata reads become, and when you reach very large numbers of files almost any kind of metadata access becomes incredibly slow. There are a lot of different factors determining how many files are created for your SSAS database but large numbers of cubes, attributes, partitions and aggregations (which end up being multiplied across each partition) are the main culprits; the problem is that the way the EstimatedSize property on metadata objects has been implemented results in SSAS having to iterate over all the files in every subdirectory in the database’s directory in the data folder to find their size.

One workaround is to set the Storage Location property of your partitions to somewhere outside the data directory, since this means they won’t get scanned – but doing this makes backup and restore difficult and means you get inaccurate size estimates. The only other option is to reduce the number of partitions and aggregations you have; given that the SQLCat team are now saying that partitions of 50-60 million rows each are feasible in 2008, and that you can use different aggregation designs for partitions containing old and new data, there’s probably a lot of scope for doing this in most cubes. Hopefully, though, this problem will be fixed in a future release…

adCenter and SSAS 2008 White Paper

There’s a new white paper on how adCenter uses Analysis Services 2008 available here:
http://technet.microsoft.com/en-us/library/ee410017.aspx

For the most part it’s a fairly run-of-the-mill list of reasons to upgrade to 2008 (and, to be honest, if you’re running 2005 I can’t think of a good reason not to upgrade to 2008 – the performance is so much better) but there are several good nuggets on performance tuning for large SSAS implementations; for example, table 3 has a list of ini file settings for optimising processing. Worth a look.

Top Ten Reasons to come to SQLBits

It seems like every conference nowadays has to come up with a ‘top ten’ of reasons to attend, doesn’t it? Well, we at SQLBits thought we shouldn’t be left out and I got the job of writing ours. So, even though the list has already been used in one of our promotional emails and Simon’s already put it on his blog, I thought I’d post it up here too:

10. We’re going West (Newport, to be exact)! Which is great news if you live in Wales or the South West and you get fed up with all the tech conferences being held in London or the Thames Valley. Even if you don’t live in the local area it’s only 1hr 45 by car or train from London, it’s close to Birmingham, and with Bristol and Cardiff airports nearby it’s easy to get to from anywhere in Europe.

9. SQLBits has grown to three days. We have a day of pre-conference seminars on Thursday November 19th; on Friday November 20th we have a special SQL 2008 and SQL 2008 R2 day; and on Saturday November 21st we have our flagship free community conference. That’s probably all the SQL Server education you’ll want or need this year.

8. Meet other SQL Server professionals. Are you the only DBA in the village? Do people think you’re talking about your private life when you say you’re a “BI consultant”?  Do you have to pretend to be an estate agent or a lawyer at parties to avoid the stigma of being seen as an IT geek? At SQLBits you’re among friends: no-one will mind you talking about clustered indexes or the finer points of MDX syntax.

7. Attend an advanced technical training course. Got some budget for a training course, but don’t want to spend it on yet another basic Microsoft Official Curriculum course where the instructor only read the course notes the night before? Our pre-conference seminars are run by world-class SQL Server experts and will give you the knowledge you need to do your job. Our big draw is Donald Farmer doing a whole day on Gemini and self-service BI, but I’ll be running an introductory MDX course too…

6. Win some swag, drink some beer and eat some pizza. It’s not all work, work, work at SQLBits: we have great giveaways, there are loads of opportunities for socialising and there’s beer, pizza and games at the end of the day. And if you like golf, you might be interested to know that the venue, Celtic Manor Resort, has three championship courses and is playing host to the 2010 Ryder Cup.

5. Check out the latest SQL Server-related products. Don’t have time to download and test the latest tools and utilities? Our sponsors will be happy to demo them for you and provide eval versions. They’ll probably also have loads of those cheap plastic promotional freebies that your kids love.

4. Find out what’s coming in the next version. Our new SQLBits Friday event will give you the lowdown on what’s new in SQL 2008 and what’s coming in SQL 2008 R2 (including Madison, Gemini, StreamInsight, Master Data Services and changes in Reporting Services), so you can plan for the future more effectively.

3. Bring the family. Want to come to SQLBits, but feel guilty about leaving the family at home? No need, they can come too! The venue has its own swimming pools, a spa, tennis courts and restaurants as well as the golf courses. Repeat after me: “It’s not a conference, darling, it’s a luxury weekend break…”

2. Deep technical content. We won’t make you sit through hours of marketing guff from some sales guy who doesn’t know a database from a spreadsheet – SQLBits is all about SQL Server professionals sharing their hard-won technical knowledge with the community. We’ve got big names like Donald Farmer (program manager on the Analysis Services/Gemini dev team) and Thomas Kejser (from the SQLCat team), and MVPs like Simon Sabin, Allan Mitchell and me speaking.

1. Did we mention that SQLBits Saturday is FREE? Yes, as always, the SQLBits community day on Saturday November 21st is free to attend. Top-class speakers, a massive selection of sessions and 100% SQL Server focused – and you pay nothing. That’s 100% cheaper than most tech conferences!

So, what are you waiting for? Register now at:
http://www.sqlbits.com/information/Registration.aspx

Oh, and if you are already registered you might be interested to know that voting has just opened, so you can pick the sessions you’d most like to see.

Last of all, would you like to win free admission to the training day of your choice, the Friday event, and free hotel accommodation at Celtic Manor for the Thursday and Friday nights? Then check out our mugshot competition:
http://www.sqlbits.com/Competition.aspx

Now() and the Formula Cache

You know, I have the nagging feeling I’ve already blogged about this before… but I can’t find the post (I have been at this for over four years now) so it’s worth mentioning again…

Anyway, a common question asked on the MSDN Forums is how to get the current date and then use it in an MDX calculation – see this thread for example. Usually this is because users want calculations that show the current day’s sales, or something similar. One answer is of course to use the Now() function, but what people don’t often realise is how this can impact the ability of Analysis Services to cache the values returned by calculated members, and therefore reduce overall query performance.

To understand why, let’s look at some examples in Adventure Works. First of all create a calculated measure on the AW cube as follows:

CREATE MEMBER CURRENTCUBE.[MEASURES].[NOWDEMO] AS NOW();

Then run the following query a few times:

SELECT {[Measures].[NOWDEMO]} ON 0
FROM [Adventure Works]

As you’d expect, every time you run this query you see the current date and time – and every time you run it, you see a different value. But, you may be thinking, doesn’t Analysis Services cache the results returned by calculated members? Well, yes it does in most cases, but for non-deterministic functions (functions that could return a different result every time they’re called) like Now() no caching takes place, because otherwise the value returned from the cache might be different from the one the function actually returns.

The next problem is that if you create any other calculated members that depend directly or indirectly on the value returned by a non-deterministic function, their values can’t be cached either. One problem I sometimes see in my consultancy work is poor query performance resulting from SSAS being unable to use the formula cache, because a large number of calculations have a dependency on a single calculation that uses the Now() function. Here’s a greatly simplified example of two calculated members, the first of which finds the current year and the second of which returns a sales value for the year eight years before the current year:

CREATE MEMBER CURRENTCUBE.[MEASURES].[Current Year]
AS YEAR(vba!format(now(),"MM/dd/yyyy")),
VISIBLE = 1;

CREATE MEMBER CURRENTCUBE.[MEASURES].[SHOW SALES FOR A YEAR]
AS ([Measures].[Internet Sales Amount],
STRTOMEMBER("[Date].[Calendar Year].&[" + CSTR([MEASURES].[Current Year] - 8) + "]", CONSTRAINED));

If you run a query that references the second calculated measure on a cold cache, such as:

SELECT [MEASURES].[SHOW SALES FOR A YEAR] ON 0
FROM [Adventure Works]

The first time you run it you’ll see SSAS going to disk as you’d expect; the second time you run it though you’ll see SSAS is able to use the Storage Engine cache but not the Formula Engine cache, as this Profiler trace shows:

[image: Profiler trace – Get Data From Cache event showing data retrieved from the measure group cache]

The highlighted Get Data From Cache event shows data being retrieved from the measure group cache. This is ok and can be beneficial for query performance, but if the calculation we’re doing is very expensive then it can still mean our query takes a long time to run. We’ll only get an instant response on a warm cache if we can work out how to use the formula cache somehow.

Luckily, in most cases where Now() is used we don’t actually want the system date and time, we just want the date. That means we only need to return a different value when the date changes, once every 24 hours. What we can do, therefore, is use a named set to store the value returned by Now(), as in this rewrite of the calculation above:

CREATE MEMBER CURRENTCUBE.[MEASURES].[Current Year]
AS YEAR(vba!format(now(),"MM/dd/yyyy")),
VISIBLE = 1;

CREATE SET CURRENTCUBE.MYYEAR AS
{STRTOMEMBER("[Date].[Calendar Year].&[" + CSTR([MEASURES].[Current Year] - 8) + "]", CONSTRAINED)};

CREATE MEMBER CURRENTCUBE.[MEASURES].[SHOW SALES FOR A YEAR – CACHEABLE]
AS ([Measures].[Internet Sales Amount],
MYYEAR.ITEM(0));

What we’re doing here is finding the year we want, then finding the member for the year 8 years ago and storing that in a named set. Because a static named set is only evaluated once, when we do our final calculation we can reference the single member stored in the named set and therefore make use of the formula cache as the following Profiler trace shows:

[image: Profiler trace – Get Data From Flat Cache event showing the calculation being answered from the formula cache]

We’re now getting data from the flat cache, which is one part of the formula cache (which isn’t ideal either as it indicates the calculation is being evaluated in cell-by-cell mode, I guess because we’re referencing a named set inside it) and so warm-cache performance will be better.

The next problem is that when the date does change, we need to clear the cache. This can be accomplished easily by running an XMLA ClearCache command, perhaps direct from SQL Server agent, every night at midnight or whenever necessary.
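For reference, a ClearCache command that empties the cache for an entire database looks something like this. The DatabaseID shown is the Adventure Works sample database and is just an assumption – substitute your own; you can also narrow the scope by adding a CubeID element inside Object:

```xml
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Object identifies what to clear; here, everything in one database -->
  <Object>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
  </Object>
</ClearCache>
```

Note that this clears both the Storage Engine and Formula Engine caches for the objects specified, so you may want to follow it with your cache-warming queries.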

To be honest, though, I’m not sure using the Now() function at all is a good idea – apart from the issues described here there are other risks involved, such as the time or date on your server being wrong, or confusion over time zones and date formats. I think a better approach to the problem is to have an extra attribute on your Time dimension which flags a date as ‘today’, and which changes every day. Of course this means you need to do some extra ETL and processing on your Time dimension as a result, but I think it’s a much cleaner solution than Now() and leads to much more efficient MDX calculations.