I’m back from my trip to Seattle for the PASS Summit and have just about recovered from the jet lag, so now’s a good time to blog about the last week. What did I get up to?
- Monday: Mosha’s MDX pre-conference seminar. This was the first time I’ve ever paid for any form of training out of my own pocket and I was not disappointed – it was everything I was expecting it to be, i.e. a great in-depth look at the inner workings of MDX. Over the subsequent days I was stopped by a few Microsoft people who asked me how it went, and I got the impression they thought it was going to be too detailed, but to be honest I actively wanted detailed information and so did the vast majority of the other people present (the likes of George Spofford, Deepak Puri and Tomislav Piasevoli were also there). Mosha may not be as slick as some of the more experienced speakers on the SQL conference circuit but he’s more than competent; the material was laid out in a logical order and his slides were clear. Most importantly of all, he is probably the only person in the world with the knowledge to give this kind of seminar. Some of the material was repeated from his blog but benefited greatly from being put into a wider context; some of it was completely new to me, and given that I’m someone who’s lived and breathed MDX for the last ten years or so, that’s quite something. I only wish he could have gone on for another day, since he clearly had enough material prepared to do so.
- Tuesday: I had the day off, so I headed over to Redmond for a few meetings. Heard some interesting things (as I did all week) but it was all NDA, unfortunately. One interesting point I can share, though: for some reason I had assumed that there were separate development teams for Gemini and Analysis Services, but that’s not true – Gemini is Analysis Services; it’s all the same team.
- Wednesday: Day 1 of the conference. The first session I attended was from a company called Meta Integration. At first I was a bit surprised, since it was clearly a ‘vendor presentation’ – they were plugging their software, which is definitely not the done thing at a conference like PASS. I didn’t mind that much since it was an interesting product, and as the presentation went on I came to the realisation that this was software that isn’t on sale to the general public anyway. What Meta Integration do is provide tools for metadata integration and lineage to the big BI software suppliers, and they make a big play of being independent from any one vendor. So, for example, you can track metadata from an ERwin model to an Informatica-based ETL process and so on through to Analysis Services cubes and Reporting Services reports; the cool thing is that if you make any changes upstream you can track which cubes and reports are going to be impacted, even if you’re using a variety of BI tools from different vendors, as most companies are. This is a massive missing piece in the current MS BI stack, and the fact that they already have it working today had many people in the audience salivating. What they want to do is license this software to Microsoft, and their presence at the conference was, as far as I can see, a clever bit of PR to try to twist Microsoft’s arm into reaching a deal by showing Microsoft’s customers a glimpse of what might be possible in the next version of SQL Server. So I’ll come right out and say it: Microsoft, please license this software and don’t try to build it yourself! We need it ASAP!
- Thursday: the day of my presentation. Compared to other conferences I’d had to do far more preparation than usual, because I’d decided to talk about a topic that needed an awful lot of research rather than one I already knew inside out. In fact I’d been thinking I should have submitted an abstract on cache warming for Analysis Services instead, which would have been much less work, but then on Monday Mosha made the typically gnomic remark that ‘cache warming was a bad thing’, so I’m glad I didn’t. Anyway, the presentation itself went OK and, despite being up against Kalen Delaney and Bob Ward, I had a good crowd in the room. I was also very lucky that the SQLCat team’s presentation on monitoring Analysis Services, which covered a lot of the same topics, was scheduled for the next day, so I wasn’t completely upstaged.
To be honest, the SQLCat team had the best content of all the speakers: where Carl Rabeler et al covered the same material as me, they did it better. The same goes for other presentations: there were rather a lot covering MDX query performance and cube tuning (possibly too many), and the SQLCat session on the subject with Richard Tkachuk and Thomas Kejser was by far the best I saw. I suppose they do have the advantage of inside information, and the fact that they do presentations like this for a living.
I also stopped by some stands in the vendor exhibition space. There was a disappointing lack of pure BI vendors, but I suppose they all blew their conference budget on the BI Conference last month. I did talk to the guys on the Hyperbac stand – they work in the backup and compression space, and were suggesting their latest product Hyperbac Online (which Simon Sabin blogged about recently) might come in handy for Analysis Services. Hmm, I don’t know, given that AS does its own compression, but I’d be interested to try it out.
- Friday: spent a lot of time in private MVP sessions, which of course are all NDA. I also saw a good session from Donald Farmer on integrating data mining into other apps, which made me think that there would be much greater uptake of data mining if it was as easy to use AS data mining with AS cubes as it is to use the data mining Excel add-in. Donald demoed something similar to what I blogged about here; why can’t this type of thing be built into BIDS for cubes? Who has ever used the MDX Predict function (see the sketch after this list)? What about this old idea (although the SQLCat team noted that the old rule of thumb for partitioning, whereby you should have no more than 20 million rows per partition, no longer applies – you can get good, if not better, performance with partitions three times that size)?
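Since I suspect hardly anyone reading this has ever seen Predict used, here’s a rough sketch of the kind of query I mean – all the object names are invented and this is written from memory, so check Books Online for the exact syntax before trusting it:

```
-- Hypothetical sketch only: Predict evaluates an MDX numeric expression
-- in the context of a mining model. All object names here are made up.
WITH MEMBER [Measures].[Predicted Sales] AS
  Predict("Sales Forecast Model", [Measures].[Sales Amount])
SELECT
  { [Measures].[Sales Amount], [Measures].[Predicted Sales] } ON COLUMNS,
  [Product].[Category].MEMBERS ON ROWS
FROM [Adventure Works]
```

Even in this simple form it’s hardly discoverable, which probably goes some way to explaining why nobody uses it.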
The last session I saw was by Carolyn Chau and Sean Boon of the Reporting Services team, showing off all the new features of RS2008. As I’m sure you know, there’s a lot of cool new stuff in there, but is it just me or is the tablix (which I discovered should be pronounced tay-blix rather than tab-lix, as I had been saying) a bit intimidating? Probably no more so than MDX is to the uninitiated. Anyway, apart from all the things they’ve done for 2008 and the massive list of things they’ve got planned for the next version, Carolyn mentioned the interesting point that SMDL (the language for building Report Builder 1.0 models) would not be coming back, but the idea of a semantic layer for RS would – and it would probably use the Entity Data Model in some way. Which makes me think: what would happen if Analysis Services went in that direction too? Replace the DSV with the Entity Framework and make Analysis Services an extra layer of metadata for aggregating entities? I don’t know enough about this subject to speculate further, but it’s an interesting area – who knows, it could lead to the resurrection of the idea of the Unified Dimensional Model, with a Gemini-powered AS as the super-fast caching layer for all your reporting/querying needs.
All in all I enjoyed myself and I think I made the right decision going to PASS rather than the BI Conference – there was more than enough BI content for me and the conference itself was well run and good fun. I’d really like it if PASS and the BI Conference were merged into one mega-conference so I didn’t have to choose.