SiSense Prism

A few months ago I announced I was going to do a major series of reviews of client tools here… well, that fell flat (probably because it takes a bit too much effort to install and test one), but at least here’s one more review: Prism, from SiSense. Here’s their website:
http://www.sisense.com/

Strictly speaking it’s not just an Analysis Services client tool because it can work with data from a number of different sources such as relational tables, Excel and even Google spreadsheets and Amazon S3. I don’t know too much about the internal architecture, but it seems to be based on storing the data retrieved from all these different sources in some kind of in-memory store, so I suppose in that way it’s similar to what will be coming in Excel with Gemini. They do take Analysis Services seriously as a data source, though – in fact one of the people behind the company is Elad Israeli, author of a tool called MDXBuilder that those of you with very long memories might recall – so for the rest of this review I’ll concentrate on the AS client tool side of things.

First impressions are very good: the UI is modern, uncluttered and easy to use. There are a few wrinkles: there’s no explicit support for AS2008 yet, and I had to jump through a few hoops to get it to connect on my laptop, which only had AS2008 installed; also, they don’t show hierarchies grouped into dimensions, just a flat list of hierarchies from all dimensions, which is a pain when you have a lot of dimensions and hierarchies – they really should support folders etc. Since I’ve already mentioned this to SiSense, hopefully it will be changed soon.

The tool itself is focused on creating dashboards and the starting point is a blank sheet on which you can drag ‘widgets’, which in turn can be hooked up to various data sources to display data. Examples of widgets are pivot controls, various different types of charts, images and textboxes, gauges, calendar controls, dropdown boxes and so on; it’s reminiscent of Reporting Services (though it concentrates more on application building than on pixel-perfect formatting) and PerformancePoint in this respect. I have to say I found the process of building a dashboard exceptionally easy and intuitive, and I was very impressed – I was able to put together something that worked very quickly, and it handled layout and formatting in such a way that even someone like me, who is generally rubbish at report design, could create a dashboard that looked professional. Here’s a screenshot of one I put together quite quickly:

[Screenshot: a Prism dashboard]

One other very cool feature is the way that complex selections can be generated using a visual workflow, called ‘Questions’ in the product. You can read more about it on their blog here:
http://community.sisense.com/blogs/siblog/archive/2008/11/19/250.aspx
…but the easiest way of thinking about it is as something like the SSIS dataflow for MDX sets (similar to something I blogged about a while ago). Here’s an example that returns the top 10 Dates by Internet Sales Amount unioned with the bottom 10 Dates by Internet Sales Amount where Internet Sales Amount is greater than $500:

[Screenshot: a ‘Questions’ visual workflow]
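
For comparison, here’s roughly what that selection looks like when written by hand in MDX – just a sketch assuming the Adventure Works sample cube, not necessarily what Prism generates internally:

    // Sketch only: Adventure Works names assumed
    SELECT {[Measures].[Internet Sales Amount]} ON COLUMNS,
    UNION(
      // the top 10 Dates by Internet Sales Amount...
      TOPCOUNT([Date].[Date].[Date].MEMBERS, 10,
               [Measures].[Internet Sales Amount]),
      // ...unioned with the bottom 10 Dates, considering only
      // Dates with more than $500 of sales
      BOTTOMCOUNT(
        FILTER([Date].[Date].[Date].MEMBERS,
               [Measures].[Internet Sales Amount] > 500),
        10, [Measures].[Internet Sales Amount])
    ) ON ROWS
    FROM [Adventure Works]

Building that up step by step in a visual workflow is clearly far more approachable than asking a user to write it.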

I think this is the best way I’ve seen of letting users set up complex filters, although it probably is still only something a power user could understand.

At the moment Prism is just a fat client, so with no web-based version (yet) sharing dashboards is a matter of emailing .psm files or putting them on a network share; this will be a deal-breaker for some people. In my opinion, though, SiSense have made the right decision in implementing the functionality they do have very well, rather than rushing off to tick every box on potential buyers’ checklists and doing it badly. Overall, if you’re in the market for a desktop BI tool that supports Analysis Services as well as other data sources, I can recommend taking a look at Prism.

OLAP PivotTable Extensions new release

For some reason I’ve not blogged about this before, but anyway the ever-industrious Greg Galloway has just released a new version of his OLAP PivotTable Extensions:
http://www.codeplex.com/OlapPivotTableExtend

It’s an Excel addin that gives you useful new functionality in Excel 2007 pivot tables connected to Analysis Services, such as the ability to add private calculations, view the MDX behind the pivot table, and (in the new release) search for members and other things. Definitely worth a look, and useful too if you’ve ever wondered how to work with the Excel pivot table in code.

I’m hosting a webinar for Panorama

As I’ve mentioned, my recent thoughts on client tools (see here and here) have prompted a lot of interest in the Analysis Services community. One result is that Panorama have asked me to host a webinar for them where I get to sound off about the state of the client tool market before they show you their latest stuff. Yes, I’ll be paid for it, but I’m not going to be promoting their products directly (I feel like I need to justify myself!), just repeating my standard line: if you want to do anything serious with Analysis Services you should at least check out the range of third-party client tools available rather than stick blindly with what MS gives you – and given that Panorama are the largest vendor of third-party client tools for Analysis Services, they deserve to be on the list of tools to check out. Here’s the link to sign up for the webinar:

http://www.panorama.com/webcasts/archives/2008/webinar-with-cwebb-oct-21.html

Softpro Cubeplayer

While I was at SQLBits I had the pleasure of meeting Tomislav Piasevoli (who had come all the way from Croatia especially), someone who has been very active on the Analysis Services MSDN Forum recently and who knows a lot about MDX. His company, Softpro, sells an Analysis Services client tool called Cubeplayer and he very kindly gave up his lunchtime to give me a detailed demo. As I said recently, the general feeling of frustration surrounding Microsoft’s client tool strategy has made me look again at the third-party client tool market and decide to review some of these tools here, and this look at Cubeplayer is the first in the series. Remember, if you’ve got a client tool you’d like me to look at, please drop me a line…

The first thing to say about Cubeplayer is that it’s a tool for power users and consultants, not the average user who might want to browse a cube. As such it’s going to appeal to the fans of the old Proclarity desktop client, which it vaguely resembles in that it’s a fat client with a lot of very advanced query and analysis functionality. It’s not part of a suite – there’s no web client etc – but it includes dashboarding functionality that’s only available through the tool itself, and also has the ability to publish queries up to Reporting Services.

What can it do? Well, the web site has a good section showing video demos of the main functionality, but here are some main points:

  • It can certainly do all the obvious stuff like drag/drop hierarchies to build your query, as well as more advanced selection operations such as the ability to isolate individual members in a query, or drill up and down on individual members or on all members displayed. It also has a number of innovative features like the ability to click on a cell and drill down on it, which means that you drill down on every member on every axis associated with that cell.
    [Screenshot: Cubeplayer]
  • It also supports all the more advanced types of filtering and topcounts that you’d expect from a tool like this; in fact, it seems to do pretty much anything you can do in MDX. This gives you an immense amount of power and flexibility, but sometimes at the expense of ease of use. Take the old nested topcount problem, solved in MDX using the Generate function (see the sketch after this list): any advanced client tool has to handle this scenario, and Cubeplayer certainly does in its Generate functionality, but would you really expect a power user to understand what’s going on here and remember to click some extra buttons to make it happen?
  • There’s a nice feature where you can choose to display the data in your grid as actual values, ranks, percentages or other types of calculation. This makes it really easy to make sense of large tables of data.
    [Screenshot: the ‘show as’ display options]
  • It has a whole load of built-in guided analyses such as ‘How Many?’, ‘Show Me’, ABC analysis (for segmentation) and Range analysis. This for me is a real selling point – I’ve been asked several times, for example, about doing ABC analysis with Analysis Services and I’m not sure I’ve seen another tool that does it.
    [Screenshot: ABC analysis]
  • Another cool thing I’ve not seen before is the ability to put two queries side-by-side and, if they have selections on the same hierarchy, do operations like unions, intersects and differences on the selections.
  • There’s an MDX editor (with intellisense and other useful stuff) where you can write your own queries. Again, not something most power users will want to do, but if you’re a consultant who knows MDX it’s a useful feature for those times when you know you can write the query but can’t get the query builder to do exactly what you want.
  • You can generate Reporting Services reports from a query view. You probably already know my opinions on the native support for Analysis Services within Reporting Services, and this certainly does make it much easier to create cube-based reports.
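
To show what the nested topcount problem above actually involves, here’s a hand-written MDX sketch – again assuming the Adventure Works sample cube and its names – that returns the top three Subcategories within each of the top two Categories:

    // Sketch only: Adventure Works names assumed
    SELECT {[Measures].[Internet Sales Amount]} ON COLUMNS,
    GENERATE(
      // the top 2 Categories by sales...
      TOPCOUNT([Product].[Product Categories].[Category].MEMBERS,
               2, [Measures].[Internet Sales Amount]),
      // ...and, for each one, its top 3 Subcategories
      TOPCOUNT([Product].[Product Categories].CURRENTMEMBER.CHILDREN,
               3, [Measures].[Internet Sales Amount])
    ) ON ROWS
    FROM [Adventure Works]

A single TopCount over Subcategories can’t express the per-Category grouping – you need Generate to iterate over the outer set – and that’s exactly the kind of thing you can’t expect a typical power user to work out for themselves.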

Overall, definitely worth checking out if you’re in the market for this type of tool. There are a few criticisms to be made: as I said, I’m not sure it’s as easy to use as it could be, although that’s partly the price you pay for the richness of the functionality; there are some strange lapses in UI design, such as the way all dialogs have ‘Accept’ and ‘Cancel’ buttons with icons on them instead of the usual plain ‘OK’ and ‘Cancel’; and charting is competent but not up to the standards of the best visualisation tools (I think many vendors would do well to look at their tools and ask themselves "What would Stephen Few think?" – the answer might not be very complimentary). In my opinion, though, it’s a very strong and mature tool and the positives far outweigh the negatives.

One final point: Tomislav mentioned he was looking for reseller partners outside Croatia. If you’re interested in this I can put you in touch with him.

Google, Panorama and the BI of the Future

The blog entry I posted a month or so ago about XLCubed, where I had a pop at Microsoft for their client tool strategy, certainly seemed to strike a chord with a lot of people (see the comments, and also Marco’s blog entry here). It also made me think it would be worth spending a few blog entries looking at some of the new third-party client tools out there… I’ve already lined up a few reviews, but if you’ve got an interesting, innovative client tool for Analysis Services that I could blog about, why not drop me an email?

So anyway, the big news last week was of course Google’s announcement of Chrome. As several of the more informed bloggers have pointed out (eg Nick Carr, Tim McCoy), the point of Chrome is to be not so much a browser as a platform for online applications, leading to a world where there is no obvious distinction between online and offline applications. Naturally, when I think about applications I think about BI applications, and thinking about online BI applications and Google I thought of Panorama – who, incidentally, this week released the latest version of their gadget for Google Docs:
http://www.panorama.com/newsletter/2008/sept/new-gadget.html

Now, I’ll be honest and say that I’ve had a play with it and it is very slow, and there are a few bugs still around. But it’s a beta, I’m told it’s running on a test server and performance will be better once it’s released, and anyway it’s only part of a wider client tool story (outlined and analysed nicely by Nigel Pendse here) which starts with the full Novaview client and includes the ability to publish views into Google Docs for a wider audience and for collaboration. I guess it’s a step towards the long-promised future where the desktop PC will have withered away into nothing more than a machine to run a browser on, and all our BI apps and all our data will be accessible over the web.

This all makes me wonder what BI will be like in that future… time for some wild, half-formed speculation:

  • Starting at the back, the first objection raised to a purely ‘BI in the cloud’ architecture is that you’ve got to upload your data to it somehow. Do you fancy trying to push what you load into your data warehouse every day up to some kind of web service? I thought not. So I think a ‘BI in the cloud’ architecture is only going to be feasible when most of your source data lives in the cloud already, possibly in something like SQL Server Data Services, Amazon Simple DB or Google BigTable, or possibly in a hosted app like Salesforce.com. This requirement puts us a long way into the future already, although for smaller data volumes and one-off analyses perhaps it’s less of an issue.
  • You also need your organisation to accept the idea of storing its most valuable data in someone else’s data centre. I’m not saying this as a "why don’t those luddites hurry up and accept this cool new thing"-type comment, because there are some very valid objections to cloud computing at the moment: can I guarantee good service levels? Will the vendor I choose go bust, or get bought, or otherwise disappear in a year or two? What are the legal implications of moving data to the cloud and possibly across borders? It will be a while before there are good answers to these questions, and even when there are, there’s going to be a lot of inertia to overcome.
    The analogy most commonly used to describe the brave new world of cloud computing is with the utility industry: you should be able to treat IT like electricity or water, a service you can plug into whenever you want and assume will be there when you need it (see, for example, "The Big Switch"). As far as data goes, though, I think a better analogy is with the development of the banking industry. At the moment we treat data in the same way that a medieval lord treated his money: everyone has their own equivalent of a big strong wooden box in the castle where the gold is kept, in the form of their own data centre. Nowadays the advantages of keeping money in the bank are clear – why worry about thieves breaking in and stealing your gold in the night, why go to the effort of moving all those heavy bags of gold around yourself, when it’s much safer and easier to manage and move money about when it’s in the bank? We may never physically see the money we possess, but we know where it is and we can get at it when we need it. I think the same attitude will be taken to data in the long run, but it needs a leap of faith to get there (how many people still keep money hidden in a jam jar in a kitchen cupboard?).
  • Once your data’s in the cloud, you’re going to want to load it into a hosted data warehouse of some kind, and I don’t think that’s too much to imagine given the cloud databases already mentioned. But how to load and transform it? Not so much of an issue if you’re doing ELT, but for ETL you’d need a whole bunch of new hosted ETL services to do this. I see Informatica has one in Informatica On Demand; I’m sure there are others.
  • You’re also going to want some kind of analytical engine on top – Analysis Services in the cloud, anyone? Maybe not quite yet, but companies like Vertica (http://www.vertica.com/company/news_and_events/20080513) and Kognitio (http://www.kognitio.com/services/businessintelligence/daas.php) are pushing into this area already; the architecture of this new generation of shared-nothing MPP databases surely lends itself well to the cloud model: if you need better performance you just reach for your credit card and buy a new node.
  • You then want to expose it to applications which can consume this data, and in my opinion the best way of doing this is of course through an OLAP/XMLA layer. In the case of Vertica you can already put Mondrian on top of it (http://www.vertica.com/company/news_and_events/20080212), so you can have this today if you want it, but I suspect you’d have to invest as much time and money in making the OLAP layer scale as you had invested in making the underlying database scale, otherwise it would end up being a bottleneck. What’s the use of a high-performance database if your OLAP tool can’t turn an MDX query, especially one with lots of calculations, into an efficient set of SQL queries and perform the calculations as fast as possible (there’s a rough sketch of the kind of translation I mean after this list)? Think of all the work that has gone into AS2008 to improve the performance of MDX calculations – the improvements compared to AS2005 are massive in some cases, and the AS team haven’t even tackled the problem of parallelism in the formula engine at all yet (and I’m not sure whether they even want to, or whether it’s a good idea). There’s also been a lot of buzz recently about the implementation of MapReduce by Aster and Greenplum to perform parallel processing within the data warehouse; although it aims to solve a slightly different set of problems, it nonetheless shows that the problem is being thought about.
  • Then it’s on to the client itself. Let’s not talk about great improvements in usability and functionality, because I’m sure badly designed software will be as common in the future as it is today. It’s going to be delivered over the web via whatever the browser has evolved into, and will certainly use whatever modish technologies are the equivalent of today’s Silverlight, Flash, AJAX etc. But will it be a stand-alone, specialised BI client tool, or will there just be BI features in online spreadsheets (or whatever online spreadsheets have evolved into)? Undoubtedly there will be good examples of both, but I think the latter will prevail. Even today users prefer to get their data into Excel, the place they eventually want to work with it; the trend would move even faster if MS pulled their finger out and put some serious BI features in Excel…
    In the short term this raises an interesting question, though: do you release a product which, like Panorama’s gadget, works with the current generation of clunky online apps in the hope that you can grow with them? Or do you, like Good Data and Birst (which I just heard about yesterday, and will be taking a closer look at soon), create your own complete, self-contained BI environment which gives a much better experience now but which could end up being an online dead end? It all depends on how quickly the likes of Google and Microsoft (which is supposedly going to be revealing more about its online services platform soon) can deliver usable online apps; they have the deep pockets to finance these apps for a few releases while they grow into something people want to use, but can smaller companies like Panorama survive long enough to reap the rewards? Panorama has a traditional BI business that could certainly keep it afloat, although one wonders whether they are angling to be acquired by Google.
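
As an aside on that MDX-to-SQL point from earlier in the list: here’s a purely hypothetical sketch of the kind of translation I mean, with invented names – not what Mondrian or any other engine actually emits:

    // A trivial MDX query against a ROLAP cube (names invented)...
    SELECT {[Measures].[Sales Amount]} ON COLUMNS,
           [Product].[Category].[Category].MEMBERS ON ROWS
    FROM [Sales]

    /* ...maps naturally onto a single SQL aggregate against the
       underlying star schema, something like:

         SELECT p.category, SUM(f.sales_amount) AS sales_amount
         FROM fact_sales f
         JOIN dim_product p ON f.product_key = p.product_key
         GROUP BY p.category

       Add calculated members and set functions, though, and the
       translation quickly stops being this simple – which is where
       the OLAP layer risks becoming the bottleneck. */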

So there we go, just a few thoughts I had. Anyone got any comments? I like a good discussion!

UPDATE: some more details on Panorama’s future direction can be found here:
http://www.panorama.com/blog/?p=118

In the months to come, Panorama plans to release more capabilities for its new Software as a Service (SaaS) offering and its solution for Google Apps.  Some of the new functionality will include RSS support, advanced exception and alerting, new visualization capabilities, support for data from Salesforce, SAP and Microsoft Dynamics, as well as new social capabilities.

XLCubed (and a rant about Microsoft’s client tool strategy)

The other week I stopped by in Maidenhead to see the guys at XLCubed and take a look at their latest stuff. XLCubed have been around a long time and their Excel addin AS client has always been one of the best out there, but the improved Analysis Services support in Excel 2007 (especially the introduction of ‘convert to formulas’) and the Proclarity acquisition have put a squeeze on the client tools sector. A lot of the third-party client tools out there, XLCubed included, are better in many ways than the equivalent Microsoft offerings, but it’s often hard to explain to someone who isn’t very experienced with Analysis Services what the advantages are and why they represent a good reason to buy a non-Microsoft product. So, to survive, you need a clear, unique selling point, and XLCubed now have one in the form of Microcharts after they bought Bonavista Systems last year (I blogged about the Microcharts product in its original form here). Microcharts gives you the ability to create sparklines, bullet graphs and other in-cell charts, which is not only impressive when used in conjunction with regular Excel and Reporting Services (with or without AS as a data source) but enters the realm of extreme coolness when you see how it’s been integrated with XLCubed.

Here’s just one example of the kind of dashboard you can build with XLCubed:

[Screenshot: a CIO dashboard built with XLCubed]

You can see a whole page of sample dashboards here:
http://www.xlcubed.com/en/Demo_Overview.html

Nice, eh? I should also mention they have an excellent data visualisation blog that’s well worth a read:
http://blog.xlcubed.com/

While on the subject of client tools, can I veer off on a tangent and criticise Microsoft’s strategy in this area? In my opinion (and just about everyone I’ve met agrees with me, not least disgruntled ex-Proclarity employees) what they’ve done has actually harmed the core Microsoft BI market over the last two years. Before the Proclarity acquisition the situation wasn’t ideal, for sure, since telling customers that they had to buy their client tools from a third party looked bad. But what Microsoft have done is buy the leading third-party client tool and effectively chuck it in the bin, saying people should use Excel and PerformancePoint instead. Excel 2007 is a good client tool but a) a lot of companies are still on Excel 2003 or earlier, and are not going to upgrade just for the sake of a BI project, b) it has nowhere near the kind of advanced functionality that the Proclarity desktop tool had, and never will, and c) it still has a few glaring problems (see here for example); PerformancePoint too is encouraging, but very much a version 1.0. Microsoft’s long release cycles for both mean we have to wait way too long for any upgrade in functionality, and in the meantime we’re left with a vacuum: the third-party client tool market has been weakened because customers now want to use Microsoft client tools as their first choice, but those tools are not yet up to scratch.

Why on earth didn’t they carry on developing the Proclarity product line for a few more years until a smoother transition could be made? Why the prejudice against standalone client tools? Once again I’m left with the feeling that senior people in Redmond have little idea what’s going on in the real world and, more importantly, are insulated from the impact their decisions have on the bottom line. On the positive side, though, Microsoft’s actions have given companies like XLCubed the breathing space they needed to innovate and survive.

nextanalytics

Hands up, who remembers OLAP@Work? If you do, you’ve been working with Analysis Services for a long time – since back when it was still OLAP Services… For those of you who don’t, it was one of about four options you had if you wanted a client tool circa 1999; it was an Excel addin and it was pretty good. Anyway, I’ve just seen this article on Intelligent Enterprise about what Ward Yaternick, the guy who founded OLAP@Work, has been up to since leaving Business Objects (which bought and eventually killed OLAP@Work):
http://www.intelligententerprise.com/blog/archives/2008/06/bi_innovation_f.html

He’s been working on something called nextanalytics:
http://www.nextanalytics.com/

Poking around on the site it looks quite interesting; certainly there are lots of mentions of MDX so I guess it supports Analysis Services as a data source (although it supports a lot of other data sources too). The key thing is that it allows you to create complex queries and calculations using a scripting language. Clearly this scripting language allows you to do the same kind of things you can do with MDX and indeed one particular entry on the nextanalytics blog caught my eye:
http://www.nextanalytics.com/component/option,com_myblog/Itemid,342/show,Can-a-business-intelligence-product-be-used-to-answer-analytic-questions-.html/

I was about to leave a comment when I saw that Mosha had beaten me to it. Mosha’s right that, contrary to what the original entry says, what Ward is describing is certainly possible in MDX, but Ward also has a point: it’s not something that someone with an average knowledge of MDX could accomplish. Can nextanalytics prove itself easier to use than MDX? Time will tell. I’ll have to download the open source version (available here: http://www.codeplex.com/nextanalyticsOS) and try it out – when I have a spare moment, of course, which at the current rate is going to be some time next year.
