SQLBits Registration Open

Registrations have opened for the SQLBits conference in Birmingham (UK) on March 1st. If you want to come you need to sign up here:



At the time of writing, after just under 24 hours, we’ve got 256 registrations confirmed already, which I think is pretty impressive. If you don’t want to miss out you need to sign up quickly! For more information about the event see the main SQLBits website.



You can see the agenda here:



See you there, hopefully…

New MDX Book

Over the last few months I’ve been working in secret on a new MDX book, and the time has now come to announce its publication. I think you’ll find I’ve taken a new and very relevant slant on the subject:


Thanks to Denis Gobo for the link to the publisher…

UPDATE: Umm, you do all realise this is a joke, don’t you? Follow the link behind the jpg to the website where I made this cover, it’s guaranteed to keep you amused over lunch. Sorry for any confusion!

SQL Down Under features Richard Tkachuk

The latest edition of the SQL Down Under podcast features Richard Tkachuk, currently of the SQLCat team and previously of the AS dev team. You can download it here:
If you’re rushed for time, the last twenty minutes are the most interesting with a discussion of some of the new features of AS2008.

Dimension Security Tips, Tricks and Problems

I’ve been doing some work with complex dimension security recently and come across a number of issues which I’ve either known about but never blogged about, or only just become aware of. It’s a bit of a random list but hopefully it’ll help someone out there…

First of all, how do you use dimension security to deny access to an entire dimension? It was one of those things that I thought was possible out of the box even back in the days of AS2K, but it turned out my memory was playing tricks on me and it’s a lot harder to do than you might think.

Dimension security lets you deny access to individual members, so to deny access to the entire dimension you have to deny access to all the members on every single attribute hierarchy on the dimension – which is a right pain in the **** to do. From reading the relevant pages in the book "Microsoft Analysis Services 2005" (which is very good on the subject of security) I had thought it would be enough to set the Allowed Set on the key attribute to be an empty set, ie {}, and indeed this is what I’d been told would work. However this only has the effect of denying access to the key attribute itself; as I said, you need to set the Allowed Set to {} for every attribute.

Furthermore, once you’ve done that you’ll still see the dimension itself visible, even though all you’ll see on each attribute hierarchy is an All Member, and there isn’t really a good way of hiding it apart from creating a perspective and trying to get all users who are denied access to the dimension to connect using that perspective. Not very nice. I posted something on Connect here and even though it’s been closed as Won’t Fix, I had an email conversation with various people in Redmond a while ago and I’m hopeful there’ll be a good solution to this one day.
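As a sketch, the role configuration ends up looking something like this – the dimension and attribute names here are purely hypothetical, for illustration:

```mdx
// Hypothetical Product dimension: the empty Allowed Set has to be
// repeated for EVERY attribute hierarchy, not just the key attribute.

// Allowed Set on [Product].[Product] (key attribute):
{}
// Allowed Set on [Product].[Category]:
{}
// Allowed Set on [Product].[Subcategory]:
{}
// ...and so on for every remaining attribute on the dimension.
```

Even with all of these in place, the dimension itself remains visible (each hierarchy just shows its All Member), which is why a perspective is the only real option for hiding it.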

Then there’s the problem of calculations and security. If you read Mosha’s blog entry here on the subject of cube initialisation you’ll see that the MDX Script is executed after dimension security is applied – so if you reference a member in your MDX Script that has been secured, then it could result in an error. For example, imagine you have the following calculated measure defined on Adventure Works:

Create Member Currentcube.Measures.Test as Measures.[Internet Sales Amount];

If you create a role that denies access to [Internet Sales Amount] and then browse the cube, you’ll still see this calculated measure but it will return an error. The stock answer to this problem is to change the cube’s ScriptErrorHandlingMode property from IgnoreNone to IgnoreAll, so that when the MDX Script executes for someone connecting through the role, the calculated measure doesn’t get created because there’s an error in its definition. I’m not too keen on this solution, and nor is anyone else I’ve talked to who has faced this problem, because what happens when there’s a real bug in your MDX Script? You run the risk of calculations you actually want being ignored and no error being displayed, and in the worst-case scenario this could result in incorrect data being returned. Imagine you’ve got a script assignment like this on Adventure Works:

Scope({Measures.[Internet Sales Amount],Measures.[Internet Tax Amount]});
    This=Measures.Currentmember * 2;
End Scope;

If you connect to the cube through a role where [Internet Sales Amount] is denied and if ScriptErrorHandlingMode is set to IgnoreNone, you’ll see a "END SCOPE statement does not match the opening SCOPE statement" error. Set ScriptErrorHandlingMode to IgnoreAll though, and you don’t get an error because the Scope block is ignored – but the value of [Internet Tax Amount] will be half the value that is displayed when someone connects through a role which has access to [Internet Sales Amount].

The best solution to this problem is one I found in this thread on the MSDN forum, suggested by Deepak and Mosha. While it doesn’t seem to be possible to conditionally hide calculated members (again, using a perspective seems to be the best option, although still not ideal) it is possible to make sure they don’t return errors, and similarly it is possible to write scoped assignments that still work if one or more of the members they’re scoped on are secured. The key is to use named sets in scoped assignments, as follows:

Create Member Currentcube.Measures.Test as null;

Create Set myset1 As
iif(iserror(strtomember("Measures.[Internet Sales Amount]")), {},
{Measures.Test});

Scope(myset1);
    This=Measures.[Internet Sales Amount];
End Scope;

Create Set myset2 As
iif(iserror(strtomember("Measures.[Internet Sales Amount]")), {Measures.[Internet Tax Amount]},
{Measures.[Internet Sales Amount],Measures.[Internet Tax Amount]});

Scope(myset2);
    This=Measures.Currentmember * 2;
End Scope;

For both the calculated measure and the scoped assignment, a named set is created that uses the IsError function to check to see whether [Internet Sales Amount] is secured or not – if it is, then an attempt to use the StrToMember function with its unique name will return an error. The outcome of this then determines the contents of a named set; for the calculated member this named set is used with a scoped assignment that assigns a definition to the member, and in the second example the named set is used instead of the hard-coded set in the Scope statement. This approach is a bit fiddly but it will work even when ScriptErrorHandlingMode is set to IgnoreNone because it avoids an error being raised.

Another thing to note is that if you are using role-playing dimensions in your cube, you will need to set dimension security on the cube dimensions rather than on the underlying database dimension. For example, in Adventure Works, if you go to the Date dimension in the list of database dimensions (in the dimensions dropdown on the Dimension Data tab of the Roles editor, database dimensions are listed first, then cube dimensions second) and set dimension security, say, to allow access to one year in the Calendar Year attribute, you’ll get an error like "The ‘Calendar Year’ attribute on the ‘Ship Date’ dimension has a generated dimension security expression that is not valid". Setting the same permissions on each of the role-playing dimensions in the cube based on Date, ie Ship Date, Delivery Date and Date, does not cause a problem.
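So, for example, an Allowed Set along the following lines (the member’s unique name here is an assumption for illustration) works fine when applied to the Ship Date cube dimension, but triggers the error above when applied to the Date database dimension:

```mdx
// Allowed Set on the Calendar Year attribute of the Ship Date
// CUBE dimension (not the underlying Date database dimension):
{[Ship Date].[Calendar Year].&[2003]}
```

The same set then has to be repeated on the other role-playing cube dimensions based on Date if you want consistent security across all of them.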

Lastly, what I will say after spending a week or so playing around with dimension security is that even in SP2 it is way more buggy than it should be. I’ve had no end of problems setting up the roles I want: usually everything works for a while and then suddenly the roles stop accepting any changes I make to them, ie you’ll make a change, save it, then test it and find that the change you’ve made has been forgotten about. I suspect something is getting corrupted somewhere. I’ve not managed to get a clear repro so that I can open a case with Microsoft about this, unfortunately, but it’s something to do with large dimensions and/or membership of multiple roles and/or visual totals and/or securing measures, and it’s happening enough to force us to rethink our entire approach to security. Hohum.

2nd Microsoft BI Conference dates announced

Via Ben Tamblyn, the dates for the second Microsoft BI Conference have been announced. It’s going to take place in Seattle October 6th-8th:
It’s taking place just over a month before PASS 2008 (which is being held in the same venue), so for someone like me there’s a bit of a dilemma over which to attend. I went to the BI Conference last year and enjoyed myself but it was a bit light on technical content (will this be fixed this year?); PASS is likely to be a lot more techy but will probably be missing some of the BI-type content that I’m interested in, such as PerformancePoint stuff. I’m sure a lot of the content, especially from MS speakers, will be duplicated across the two events though. Hmm, decisions, decisions…

FAST acquisition includes interesting BI extras

Interesting article by Seth Grimes in Intelligent Enterprise here:
He points out that Microsoft’s recent acquisition of FAST includes some interesting BI-related products. Having a look on their website, I found this page:
Here are some details:
FAST AIW                                                       
FAST AIW (Adaptive Information Warehouse) is an information management solution that integrates your structured, unstructured, and multi-media data to create a virtual intelligence library where any insight is a few clicks away. FAST AIW incorporates both quantitative and qualitative analytics through mining of your numeric and text data.
The FAST Database Offloading Solution liberates eBusinesses from the artificial constraints of legacy structures by offloading data from the relational database to a search index. Now you can offer the same information, but in a more meaningful and intelligent context. The FAST Database Offloading Solution provides higher performance to eBusinesses at a dramatically lower TCO.
The FAST Data Cleansing Solution provides the ability to harvest meta-information from text and use linguistics to cleanse multiple structured data repositories into a clean master index. With the FAST Data Cleansing Solution structured data from multiple repositories can be merged to create a clean master index cost-effectively in a matter of weeks.

FAST Radar is a personalized Business Intelligence solution that empowers decision makers to explore and view information that is most relevant to them in an efficient, graphically intelligent fashion. It puts the creative process back in the hands of the business user by providing a simple and effective approach to Business Intelligence exploration and monitoring, reducing process times from weeks to real-time and aggregating information from data sources that may have been previously unavailable.

I wonder if/how/when all this will get integrated in the MS BI stack?

Dynamic named sets in AS2008 – not as fun as you might think

I got all excited when, last summer, Mosha blogged about dynamic named sets in Katmai: just when it looked like there weren’t going to be any cool new features to play with in AS2008, here was a juicy new MDX thing. However I didn’t really play with them properly until a bit later when I came to prepare my presentation for last autumn’s SQLBits conference, and that’s when I realised that they weren’t quite as cool or useful as I had thought they were going to be.

The key thing I hadn’t picked up on from my initial reading of Mosha’s blog entry was that they are evaluated once per query. While that means they are still useful in some scenarios, one of the key examples that Mosha describes in his blog entry is a bit misleading and that’s the ranking example. We all know, or at least we should all know, that the key to optimising rank calculations is to declare a named set which gets ordered just once and which is referenced inside the calculated member that returns the rank. Mosha accurately points out that the big drawback to this approach is that "It can only be used when the user can write his own MDX query" but then says that dynamic named sets are a solution – and my point here is that, in my opinion, they aren’t really.
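To recap that optimisation, the classic client-side pattern looks something like this – the measure used for the ordering here is just an illustrative assumption, not necessarily the one Mosha used:

```mdx
WITH
// The set is ordered once per query, not once per cell...
SET OrderedEmployees AS
Order(
    [Employee].[Employee].[Employee].MEMBERS,
    [Measures].[Internet Sales Amount],
    BDESC)
// ...and the rank calculation just looks up each member's position in it
MEMBER [Measures].[Employee Rank] AS
Rank([Employee].[Employee].CurrentMember, OrderedEmployees)
SELECT [Measures].[Employee Rank] ON 0,
[Employee].[Employee].[Employee].MEMBERS ON 1
FROM [Adventure Works]
```

The appeal of AS2008 dynamic named sets is that this same set can instead be declared once on the server in the MDX Script, so users who can’t write their own MDX still get the fast rank calculation.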

The problem can be seen if you change Mosha’s example query slightly by adding Ship Date Calendar Years to the Columns axis:

WITH
MEMBER [Measures].[Employee Rank] AS RANK([Employee].[Employee].CurrentMember, OrderedEmployees)
SELECT
[Ship Date].[Calendar].[Calendar Year].MEMBERS * [Measures].[Employee Rank] ON 0
,[Employee].[Employee].[Employee].MEMBERS ON 1
FROM [Adventure Works]

If you run this query, you’ll see that instead of different ranks for different years, you get the same ranks repeated across every year – which is what you’d expect, because remember, our dynamic named set is only evaluated once per query. I’m not saying this is a bug or something that should be fixed, however, because if the set was evaluated every time it was referenced rather than once per query, you’d be back where you started with poor performance; it’s just that dynamic sets aren’t very useful in this particular scenario. If the user can’t write their own MDX then it follows that both the dynamic set and the rank calculated member will be defined on the server and they’ll be querying with a tool like ProClarity or Excel – and you’d expect them to be able to generate whatever query they wanted and have it work as they would expect, but as you can see it isn’t going to.

Incidentally, if you’re playing around with this there is a bug in the November CTP that Mosha told me about: if you have a calculated member that references a dynamic named set then it should appear in the MDX Script before the dynamic named set definition. If the calculated member definition comes after the named set definition you seem to get some problems with caching and strange results are returned.
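So, as a sketch, the safe ordering in the MDX Script would look like this – the set and member names follow the example above, and the ordering measure is again just an assumption:

```mdx
// The calculated member comes FIRST, even though it references
// a set that hasn't been declared yet...
CREATE MEMBER CURRENTCUBE.[Measures].[Employee Rank] AS
Rank([Employee].[Employee].CurrentMember, OrderedEmployees);

// ...and the dynamic named set it references comes afterwards,
// to avoid the November CTP caching problem described above.
CREATE DYNAMIC SET CURRENTCUBE.OrderedEmployees AS
Order(
    [Employee].[Employee].[Employee].MEMBERS,
    [Measures].[Internet Sales Amount],
    BDESC);
```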

Book Review: Monitoring and Analyzing with Microsoft Office PerformancePoint Server 2007

One of my new year’s resolutions last year was to learn PerformancePoint, which I’ll be honest I’ve completely failed to do. I mean, I’ve played around with it, been to an airlift and seen more presentations on it than I can shake a stick at, but I’ve not done anything serious with it yet; perhaps that’s because only a few projects are actually using it at the moment and, in my line of work, I only get called in at the end of a project when things have gone wrong 😉

Anyway, to save my blushes the first time I need to work with it, Nick Barclay sent me a copy of one of the books he co-wrote with Adrian Downes on the subject, "The Rational Guide to Monitoring and Analyzing with Microsoft Office PerformancePoint Server 2007". I liked the book he and Adrian wrote on Business Scorecard Manager and a lot of the things that were good about that book are true of this one too: it’s clear, it’s concise (like all the Rational Guide series), it’s well-written and it tells you just about everything you need to know. I guess no-one can claim to be a complete PerformancePoint guru simply because it’s a new product and best practices only emerge after a year or so of use in a lot of different projects, but Nick and Adrian have clearly been using the betas a lot and already have some good practical tips to offer (such as the odd RTM bug). All in all, if you’re about to embark on your first PerformancePoint project you’ll probably want this book by your side; oh, and if you want a second opinion on it, Teo Lachev liked it too.

You can buy the book from Amazon UK here. There’s also a companion book coming on the Planning side of PerformancePoint, but I’m not sure when – Nick, Adrian, perhaps you can comment?
