2nd Microsoft BI Conference dates announced

Via Ben Tamblyn, the dates for the second Microsoft BI Conference have been announced. It’s going to take place in Seattle October 6th-8th:
It’s taking place just over a month before PASS 2008 (which is being held at the same venue), so for someone like me there’s a bit of a dilemma over which one to attend. I went to the BI Conference last year and enjoyed myself, but it was a bit light on technical content (will this be fixed this year?); PASS is likely to be a lot more techie, but will probably be missing some of the BI-type content that I’m interested in, such as PerformancePoint stuff. I’m sure a lot of the content, especially from MS speakers, will be duplicated across the two events though. Hmm, decisions, decisions… 

FAST acquisition includes interesting BI extras

Interesting article by Seth Grimes in Intelligent Enterprise here:
He points out that Microsoft’s recent acquisition of FAST includes some interesting BI-related products. Having a look on their website, I found this page:
Here are some details:
 
FAST AIW                                                       
FAST AIW (Adaptive Information Warehouse) is an information management solution that integrates your structured, unstructured, and multi-media data to create a virtual intelligence library where any insight is a few clicks away. FAST AIW incorporates both quantitative and qualitative analytics through mining of your numeric and text data.
 
The FAST Database Offloading Solution liberates eBusinesses from the artificial constraints of legacy structures by offloading data from the relational database to a search index. Now you can offer the same information, but in a more meaningful and intelligent context. The FAST Database Offloading Solution provides higher performance to eBusinesses at a dramatically lower TCO.
 
The FAST Data Cleansing Solution provides the ability to harvest meta-information from text and use linguistics to cleanse multiple structured data repositories into a clean master index. With the FAST Data Cleansing Solution structured data from multiple repositories can be merged to create a clean master index cost-effectively in a matter of weeks.
 

FAST Radar is a personalized Business Intelligence solution that empowers decision makers to explore and view information that is most relevant to them in an efficient, graphically intelligent fashion. It puts the creative process back in the hands of the business user by providing a simple and effective approach to Business Intelligence exploration and monitoring, reducing process times from weeks to real-time and aggregating information from data sources that may have been previously unavailable.

 
I wonder if/how/when all this will get integrated into the MS BI stack?

Dynamic named sets in AS2008 – not as fun as you might think

I got all excited when, last summer, Mosha blogged about dynamic named sets in Katmai: just when it looked like there weren’t going to be any cool new features to play with in AS2008, here was a juicy new MDX thing. However I didn’t really play with them properly until a bit later when I came to prepare my presentation for last autumn’s SQLBits conference, and that’s when I realised that they weren’t quite as cool or useful as I had thought they were going to be.

The key thing I hadn’t picked up on from my initial reading of Mosha’s blog entry was that they are evaluated once per query. While that means they are still useful in some scenarios, one of the key examples that Mosha describes in his blog entry, the ranking example, is a bit misleading. We all know, or at least we should all know, that the key to optimising rank calculations is to declare a named set which gets ordered just once, and which is then referenced inside the calculated member that returns the rank. Mosha accurately points out that the big drawback to this approach is that "It can only be used when the user can write his own MDX query", but then says that dynamic named sets are a solution; my point here is that, in my opinion, they aren’t really.
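To recap, the classic static named set pattern looks something like this (a sketch against Adventure Works; the measure used for ordering the set is my assumption, not Mosha’s exact example):

WITH
SET OrderedEmployees AS
ORDER([Employee].[Employee].[Employee].MEMBERS
, [Measures].[Reseller Sales Amount], BDESC)
MEMBER [Measures].[Employee Rank] AS
RANK([Employee].[Employee].CurrentMember, OrderedEmployees)
SELECT
[Measures].[Employee Rank] ON 0
,[Employee].[Employee].[Employee].MEMBERS ON 1
FROM [Adventure Works]

Because the set is declared in the WITH clause it gets ordered only once for the whole query, and the Rank calculation just looks up each member’s position in it; the drawback, as Mosha says, is that the user has to be able to write this query themselves.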

The problem can be seen if you change Mosha’s example query slightly by adding Ship Date Calendar Years to the Columns axis (OrderedEmployees here is the dynamic named set defined in the cube’s MDX Script):

WITH
MEMBER [Measures].[Employee Rank] AS RANK([Employee].[Employee].CurrentMember, OrderedEmployees)
SELECT
[Ship Date].[Calendar].[Calendar Year].MEMBERS
*
[Measures].[Employee Rank] ON 0
,[Employee].[Employee].[Employee].MEMBERS ON 1
FROM [Adventure Works]

If you run this query, you’ll see that instead of getting different ranks for different years, you get the same ranks repeated across every year. This is what you’d expect, because remember: our dynamic named set is only evaluated once per query. I’m not saying this is a bug or something that should be fixed, because if the set were evaluated every time it was called rather than once per query, you’d be back where you started with poor performance; it’s just that dynamic sets aren’t very useful in this particular scenario. If the user can’t write their own MDX then both the dynamic set and the rank calculated member have to be defined on the server, and the user will be querying with a tool like ProClarity or Excel. You’d expect them to be able to generate whatever query they wanted and have it work as they would expect but, as you can see, it isn’t going to.

Incidentally, if you’re playing around with this, there is a bug in the November CTP that Mosha told me about: if you have a calculated member that references a dynamic named set, then the calculated member should appear in the MDX Script before the dynamic named set definition. If the calculated member definition comes after the named set definition, you seem to get some problems with caching and strange results are returned.
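So, on the November CTP, the MDX Script needs to contain something like the following, with the calculated member declared before the dynamic set it references (again a sketch; the measure used for ordering the set is my assumption):

CREATE MEMBER CURRENTCUBE.[Measures].[Employee Rank] AS
RANK([Employee].[Employee].CurrentMember, OrderedEmployees);

CREATE DYNAMIC SET CURRENTCUBE.[OrderedEmployees] AS
ORDER([Employee].[Employee].[Employee].MEMBERS
, [Measures].[Reseller Sales Amount], BDESC);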

Book Review: Monitoring and Analyzing with Microsoft Office PerformancePoint Server 2007

One of my new year’s resolutions last year was to learn PerformancePoint, which, I’ll be honest, I’ve completely failed to do. I mean, I’ve played around with it, been to an airlift and seen more presentations on it than I can shake a stick at, but I’ve not done anything serious with it yet; perhaps that’s because only a few projects are actually using it at the moment and, in my line of work, I only get called in at the end of a project when things have gone wrong 😉

Anyway, to save my blushes the first time I need to work with it, Nick Barclay sent me a copy of one of the books he co-wrote with Adrian Downes on the subject, "The Rational Guide to Monitoring and Analyzing with Microsoft Office PerformancePoint Server 2007". I liked the book he and Adrian wrote on Business Scorecard Manager and a lot of the things that were good about that book can be repeated for this one too: it’s clear, it’s concise (like all the Rational Guide series), it’s well-written and it tells you just about everything you need to know. I guess no-one can claim to be a complete PerformancePoint guru simply because it’s a new product and best practices only emerge after a year or so of use in a lot of different projects, but Nick and Adrian have clearly been using the betas a lot and have already got some good practical tips to offer (such as the odd RTM bug). All in all, if you’re about to embark on your first PerformancePoint project you’ll probably want this book by your side; oh, and if you want a second opinion on it, Teo Lachev liked it too.

You can buy the book from Amazon UK here. There’s also a companion book coming on the planning side of PerformancePoint, but I’m not sure when: Nick, Adrian, perhaps you can comment?

“Get Ready for SQL Server 2008” Virtual Event

SQL Server magazine is hosting a free online event called "Get Ready for SQL Server 2008". No prizes for guessing what the content is…
 
You can register for it here:
No SSAS content, but sessions on SSRS and SSIS which look interesting.

SQLBits II Sessions

The list of 38 proposed sessions for SQLBits II in March has just been posted up here:
 
On the BI side there are some interesting SSIS sessions proposed and I’d really like to see Sutha’s session on Microsoft MDM because that’s something I’ve not had a chance to check out yet. If you’re planning to attend then please register on the SQLBits site and vote for the sessions you’d like to see.

Third Blog Birthday

It was this blog’s third birthday on Sunday (which I forgot; sorry blog, I do love you really) and, as in previous years, I thought I’d spend a few minutes looking back on the last year and forward to the next. This was the year when, according to Live Spaces’ own very dodgy counting, I reached a million page impressions; most of that traffic comes from RSS readers, but I estimate at least a few hundred people a day read this blog, which is something that never ceases to amaze me.
 
This year was a busy one for me work-wise, my first full year as a self-employed consultant, and I’ve been having a great time. Hopefully the credit crunch won’t lead to a recession next year because if it does, I know that consultants like me will be the first to feel the pinch and I’ll be heading back to the world of permie work. The release of SQL2008 should help demand as people start thinking about migration, although since the differences between AS2005 and AS2008 are negligible compared to the differences between AS2K and AS2005, migration will be a lot less painful this time. I really need to get some real project experience with AS2008; I have something lined up, but if you are on the TAP program and would like some extra help then drop me a line! Blog-wise, I’ve got a few interesting posts in the pipeline, including something I’ve been working on for the last few weeks which might even turn into a commercial product, so stay tuned for more news on that.
 
Happy New Year everyone…
 
 

ASSP 1.2

Darren Gosbell has just done a new release of the Analysis Services Stored Procedure Project; he’s blogged about it here:
http://geekswithblogs.net/darrengosbell/archive/2007/12/22/ssas-assp-1.2-released.aspx
And of course, you can download it from Codeplex here:
 
It includes the partition healthcheck code that I blogged about here, as well as various other cool stuff.

M2M Dimension Optimisation Techniques Paper

Another new paper from the SQLCat team, this time on techniques that can be used to optimise cubes that use many-to-many relationships:
 
I’ve only skimmed through it so far, but I thought I’d mention two other tricks to try with m2m dimensions. One is breaking up large dimensions if you don’t ever need to see them, which can be very powerful and which I blogged about in detail here: http://cwebbbi.spaces.live.com/Blog/cns!7B84B0F2C239489A!777.entry. The other is introducing other dimensions to the intermediate measure group. For example, imagine you have a measure group showing the relationship between Products and the Components that make them up: you could try adding the Time dimension to that measure group. Even though this will increase the size of the measure group, because you are repeating the Product/Component combinations for each Time period, it can be beneficial if all of your queries are sliced by a Time period and you know that the majority of your Products were only sold for a short space of time; if neither of these conditions is true, though, doing this can have a negative impact on performance.

SQLBits II – Last Chance to Submit a Session

It’s getting close to the deadline to submit a session for SQLBits II next year. I know what you’re thinking: you’d like to attend but don’t have the time to prepare a presentation. Aren’t you tired of hearing the same old people (e.g. me) present, though? Wouldn’t you like to share your hard-won knowledge with the community? Get your 45 minutes of fame and adoration? Of course you would. So why not go to

http://www.sqlbits.com/information/SessionSubmission.aspx

…and submit something right now? I’d especially like to see some more BI sessions.

BTW, if you’re on Facebook you can join the SQLBits group to stay up-to-date with all the latest news and meet up with other people who’ll be attending.