Kilimanjaro, Project Gemini, Project Madison – even more new cool stuff

Ah, October 6th at last – the date when I was promised All Would Be Revealed. I’d been hearing rumours of something very new and exciting in the world of Microsoft BI for a while but never had any details (they probably reasoned that telling an inveterate blogger like me something top secret would be asking for trouble, but honestly I can keep my mouth shut when I need to); Mosha and Marco both mentioned it recently but didn’t give anything away either.

Anyway, to coincide with the keynote at the BI Conference, more details have shown up on the web, including a Forrester blog entry and an Intelligent Enterprise article.

Here’s what I gather:

  • Kilimanjaro is the code name for the next release of SQL Server, due 2010
  • Project Madison is the code name for what’s being done with DATAllegro
  • Project Gemini is the new, exciting thing: an in-memory storage mode for Analysis Services. To quote Tom Casey in the Intelligent Enterprise article:
    "It’s essentially another storage mode for Microsoft SQL Server Analysis Services with access via MDX, so existing applications will be able to take advantage of the performance enhancements."
    But it’s clearly more than that – from the Forrester blog entry above:
    "Its Gemini tool (to be available for beta testing sometime in 2009 and general availability in 2010) will not only enable power users to build their own models and BI applications, but easily make them available to power users, almost completely taking IT out of the loop. In Gemini, the in-memory, on the fly modeling will be done via a familiar Excel interface. Once a new model and an application is built in Excel, a power user can then publish the application to Sharepoint, making it instantly available to casual users. Not only that, but the act of publishing the model to Sharepoint also creates a SQLServer Analysis Services cube, which can be instantaneously accessed by any other BI, even non Microsoft, tool"

So, self-service cube design and in-memory capabilities. Sounds very, very reminiscent of Qlikview and other similar tools; and given that Qlikview is by all accounts growing rapidly, it’s an obvious market for MS to get into. I guess what will happen is that end users will get a kind of turbo-charged version of the cube wizard where they choose some tables containing the data they want to work with, and it builds a cube that works in ROLAP-ish mode on top of this new in-memory data store. We’ll also get even better query performance too (from COP? pointer-based? data structures).
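From what's been said so far, "in-memory" and "by columns" seem to be the key to the performance story. Here's a purely illustrative toy sketch in Python (my own guesswork at the general technique, nothing to do with Gemini's actual internals, which haven't been published) of why a column-oriented, in-memory layout suits the kind of aggregation queries cubes answer:

```python
# Toy comparison of row-oriented vs column-oriented storage for aggregates.
# All names and data here are made up for illustration only.

rows = [
    ("Bikes",   2008, 150.0),
    ("Bikes",   2009, 200.0),
    ("Helmets", 2008,  40.0),
]

def total_sales_rowstore(rows):
    # Row store: an aggregate has to walk every row and pick out one field.
    return sum(amount for (_product, _year, amount) in rows)

# Column store: each column lives in its own contiguous array, so an
# aggregate touches only the column it needs (and columns of repeated
# values compress very well, helping everything fit in memory).
columns = {
    "product": [r[0] for r in rows],
    "year":    [r[1] for r in rows],
    "amount":  [r[2] for r in rows],
}

def total_sales_columnstore(cols):
    return sum(cols["amount"])

print(total_sales_rowstore(rows))        # 390.0
print(total_sales_columnstore(columns))  # 390.0
```

Both give the same answer, of course; the point is that the column layout reads a fraction of the data per query, which is where the in-memory engines get their speed.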

All in all, super-exciting, and despite all the hype about end-user empowerment I'm sure there'll be even more opportunity for the likes of me to earn consultancy fees off this doing MDX work, tuning and so on. But the point about end-user empowerment brings me back to Qlikview. I've never seen it, but it's interesting because I've heard some very positive reports about it and some very negative ones too. From what I can make out it is very fast and easy to use, and has some great visualisation capabilities, but I've also heard it's very limited in terms of the calculations you can do (at least compared to MDX). I've also heard that it's marketed on the basis that you don't need a data warehouse to use it – which perhaps explains some of its popularity, but also explains more of the negative comments it's had, because of course if you don't build a data warehouse you're going to run into all kinds of data quality and data integration issues. Perhaps this last point also explains why Qlikview does so well in the BI Survey's rankings of how products perform in a competitive evaluation: it's easy to look good quickly if you skip the data warehouse. So, something to be wary of if you're giving tools to end users…

Anyway, if you’re at the BI Conference and have any more details or thoughts on this, please leave a comment!

3 responses

  1. > I guess what will happen is that end users will get a kind of turbo-charged version of the cube wizard where they choose some tables containing the data they want to work with, and it builds a cube that works in ROLAP-ish mode on top of this new in-memory data store.
    It was all fine until this point. But you got completely off here. Take a look at the demo – there is no cube wizard of any kind; the user just browses the data, and the model is completely inferred automatically. There is no ROLAP either: Gemini is another storage mode (you have the exact quote in your blog), just like MOLAP – only it is not MOLAP, it is by columns and in memory.
    > We’ll also get even better query performance too (from COP?
    COP is definitely a big one; there are more things, but I still cannot talk about everything yet…
    And for your QV comparisons, the project was named "Gemini" for a reason. Gemini means "constellation of twins" in my language, the twins being power users and IT. You really should see the demo of how it works together – there is definitely a need for a data warehouse controlled by IT.

  2. I suppose we can kiss goodbye to one of the key points of BI then: one version of the truth? OK, I'm sure there will be one true picture that people can use, but if everyone can make their own cube based on their own data and widely publish it, then eventually it will be lost.
    I'll also admit that to some degree this already happens, e.g. BI builds a cube, accounts doctor it in Excel and take that to meetings as the truth. But with everyone doing it, it could become chaos, and potentially impossible to answer a user's query as to why their figures don't match the person's sat next to them.
    If someone/anyone feels they can argue a case as to why this wouldn't happen I'm all ears, because Gemini does sound interesting.
