I suspect she’s got the wrong end of the stick when she says it’s a natural user interface project, but that’s beside the point. According to the website it is:
a distributed, scalable infrastructure that supports inference in large-scale probabilistic graphical models.
Basically, it’s a data mining app that can scale out across multiple servers; with a bit of imagination you can see how this could turn into data mining in the cloud, but we’re clearly a long way from that happening.
I’ve been keeping an eye on this project not only because it allows Java developers to query Analysis Services via XMLA, but also because I hope it will bring with it some good SSAS-compatible open source client tools. Of the ones Julian mentions in his post, Saiku looks the most promising as far as I can see, but when I have a spare moment I’d like to check them all out properly.
The only drawback to using olap4j with SSAS is that you need to configure HTTP access to get it to work, which is something most people don’t want to have to do. Hmm, wouldn’t it be nice if SSAS supported this natively? Maybe it’s something that will come when we get SSAS in the cloud?
I’m tired, I’ve got a huge backlog of work, I’ve got approximately 3000 SQLCat-branded, strawberry-flavoured rock sweets in my garage, and thankfully I have no 2-hour conf call planned for tomorrow night. Yes, SQLBits 8 has now been and gone, and if you weren’t there, you missed the best SQLBits yet!
Undoubtedly the highlight for me was getting up on stage to introduce Steve Wozniak at SQLBits Insight on Thursday. I still can’t really believe he came, but he did; he hung around and seemed to enjoy himself. His wife even came down to the registration desk to ask for a SQLBits goody bag for him to take home as a memento of his trip. The thought of the great man drinking his coffee from a SQLBits mug is beyond strange…
Apart from that brush with celebrity, this was definitely the most fun event we’ve done so far. The glorious weather and the location right by the beach in Brighton couldn’t have been a bigger contrast with York in the rain last autumn; and unlike a lot of conferences I’ve been to, where the parties feel a bit like school discos with everyone standing around at the side looking self-conscious, everyone at SQLBits seemed to be having a really good time (the copious free booze probably helped to break the ice). And don’t just take my word for it: read blog posts from other people who were there, like Ashley Burton, Richard Back, Per Hejndorf, Sascha Lorenz and many others. Jamie Thomson also did his roving reporter thing again and caught some great moments, including Allan Mitchell holding forth on coffee, plus some technical nuggets that do an excellent job of capturing the spirit of the event.
Anyway, I’d like to finish off by thanking my friends and colleagues on the committee and everyone who came for making it such a great three days. I hope to see you at the next one, whenever and wherever that might be!
Business and technical users, as well as vendors and consultants, are welcome to take part; you’ll get the chance to win one of ten $50 Amazon vouchers if you do.
UPDATE: the deadline has now been extended to June 18th.
Continuing the theme of the Formula Cache: you may remember a post from a while ago where I showed how using a subselect in a query forces query scope, so that SSAS is unable to cache the results of calculations for longer than the lifetime of a single query. This is very significant if you have calculations that take a long time to evaluate and you’re using Excel as a client tool, because Excel makes extensive use of subselects in the queries it generates.
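To recap the difference in its simplest form, here are two hand-written queries. They use the same Adventure Works names that appear in the Excel-generated MDX below, but they’re a sketch to illustrate the point rather than anything Excel itself produces:

//Subselect version: the year is applied through a subselect, which forces
//query scope, so the results of ExpensiveCalc can only be cached for the
//lifetime of this one query
SELECT {[Measures].[ExpensiveCalc]} ON COLUMNS
FROM (SELECT {[Date].[Calendar Year].&[2001]} ON COLUMNS FROM [Adventure Works])

//No-subselect version: the year is referenced directly on an axis, so the
//results of ExpensiveCalc can go into the global-scope Formula Engine cache
//and be reused by subsequent queries
SELECT {[Date].[Calendar Year].&[2001]} ON COLUMNS
FROM [Adventure Works]
WHERE ([Measures].[ExpensiveCalc])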
For example, if we take the calculation ‘ExpensiveCalc’ from that previous post, use it in an Excel pivot table and select just one Year on columns, we’ll find that every time we refresh the pivot table it’s painfully slow. This is because Excel has generated the following MDX query, which uses a subselect:
SELECT
NON EMPTY Hierarchize({DrilldownLevel({[Date].[Calendar Year].[All Periods]},,,INCLUDE_CALC_MEMBERS)})
DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME
ON COLUMNS
FROM (SELECT ({[Date].[Calendar Year].&[2001]}) ON COLUMNS FROM [Adventure Works])
WHERE ([Measures].[EXPENSIVECALC])
CELL PROPERTIES VALUE, FORMAT_STRING, LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS
Not good. Luckily, we can avoid this happening in Excel 2010 by using the new named set functionality. If you go to the PivotTable Tools/Options tab on the ribbon, select ‘Create Set Based On Column Items’ from the Fields, Items & Sets menu and create a new named set, you’ll find that the MDX generated by Excel changes and no longer contains a subselect:
SELECT
NON EMPTY {[Year 2001]}
DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME
ON COLUMNS
FROM [Adventure Works]
WHERE ([Measures].[EXPENSIVECALC])
CELL PROPERTIES VALUE, FORMAT_STRING, LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS
This means that although the pivot table will be slow to refresh when you first click OK, subsequent refreshes will be able to benefit from the Formula Engine cache and will be practically instant. This is a very useful trick if your users have a number of Excel pivot tables that they open on a regular basis; it won’t cure all performance problems, but it’ll cure some at least.
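Incidentally, the [Year 2001] set referenced in the query above is just the named set we created in the dialog. As far as I can tell, Excel defines it on the session behind the scenes; the statement below is a rough sketch of the kind of definition involved rather than the exact DDL Excel sends:

//a sketch of a session-scoped named set definition, not the exact
//statement Excel generates
CREATE SESSION SET [Adventure Works].[Year 2001] AS
{[Date].[Calendar Year].&[2001]}

Because the SELECT then simply references the set by name, there’s no subselect to force query scope, which is what allows the Formula Engine cache to be reused across refreshes.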
Rob Kerr has just released what looks like an extremely useful tool on Codeplex for load testing SSAS: AS Performance Workbench. Check it out here: http://asperfwb.codeplex.com/