Ah, October 6th at last – the date when I was promised All Would Be Revealed. I’d been hearing rumours of something very new and exciting in the world of Microsoft BI for a while but never had any details (they probably reasoned that telling an inveterate blogger like me something top secret would be asking for trouble, but honestly I can keep my mouth shut when I need to); Mosha and Marco both mentioned it recently but didn’t give anything away either.
Anyway, to coincide with the keynote at the BI Conference, more details have shown up on the web:
Here’s what I gather:
- Kilimanjaro is the code name for the next release of SQL Server, due 2010
- Project Madison is the code name for what’s being done with DATAllegro
- Project Gemini is the new, exciting thing: an in-memory storage mode for Analysis Services. To quote Tom Casey in the Intelligent Enterprise article:
"It’s essentially another storage mode for Microsoft SQL Server Analysis Services with access via MDX, so existing applications will be able to take advantage of the performance enhancements."
But it’s clearly more than that – from the Forrester blog entry above:
"Its Gemini tool (to be available for beta testing sometime in 2009 and general availability in 2010) will not only enable power users to build their own models and BI applications, but easily make them available to power users, almost completely taking IT out of the loop. In Gemini, the in-memory, on the fly modeling will be done via a familiar Excel interface. Once a new model and an application is built in Excel, a power user can then publish the application to Sharepoint, making it instantly available to casual users. Not only that, but the act of publishing the model to Sharepoint also creates a SQLServer Analysis Services cube, which can be instantaneously accessed by any other BI, even non Microsoft, tool"
So, self-service cube design and in-memory capabilities. It sounds very, very reminiscent of Qlikview and other similar tools; and given that Qlikview is by all accounts growing rapidly, it’s an obvious market for MS to get into. I guess what will happen is that end users will get a kind of turbo-charged version of the cube wizard: they choose some tables containing the data they want to work with, and it builds a cube that works in ROLAP-ish mode on top of this new in-memory data store. We’ll also get even better query performance too (from column-oriented processing? pointer-based data structures? who knows).
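If Gemini really is just another Analysis Services storage mode, then the queries hitting it should look no different from the MDX that existing client tools already generate. Something like the sketch below – all the cube, dimension and member names here are invented for illustration, since nothing concrete has been announced:

```mdx
-- A hypothetical query against a Gemini-published model; the model and
-- member names are made up. The point is that it's plain MDX, exactly
-- what any current AS client tool would issue against a regular cube.
SELECT
  { [Measures].[Sales Amount] } ON COLUMNS,
  NON EMPTY [Product].[Category].MEMBERS ON ROWS
FROM [GeminiSalesModel]
WHERE ( [Date].[Calendar Year].&[2008] )
```

If that holds, the in-memory engine would simply answer faster; no client-side changes needed, which is presumably what Tom Casey means by existing applications taking advantage of the performance enhancements.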
All in all, super-exciting, and despite all the hype about end-user empowerment I’m sure there’ll be even more opportunity for the likes of me to earn consultancy fees from this, doing MDX work, tuning and so on. But the point about end-user empowerment brings me back to Qlikview. I’ve never seen it myself, but it’s interesting because I’ve heard some very positive reports about it and some very negative ones too. From what I can make out it is very fast and easy to use, and has some great visualisation capabilities, but I’ve also heard it’s very limited in terms of the calculations you can do (at least compared to MDX). I’ve also heard that it’s marketed on the basis that you don’t need a data warehouse to use it – which perhaps explains some of its popularity, but also explains more of the negative comments it’s had, because of course if you don’t build a data warehouse you’re going to run into all kinds of data quality and data integration issues. Perhaps this last point explains why Qlikview does so appallingly in the BI Survey’s rankings of how well products perform in a competitive evaluation. So it’s something to be wary of if you’re giving tools to end users…
Anyway, if you’re at the BI Conference and have any more details or thoughts on this, please leave a comment!