Radius90 from 90 Degree Software

Continuing the theme of free stuff I’ve received, the nice people at 90 Degree Software have been chatting with me on the phone, inviting me to webcasts and sending me eval versions of their Reporting Services report generation tool Radius90 for, oohhh, over a year and a half now. So why haven’t I blogged about the tool yet (ok, I did mention it once but only briefly)? Because it didn’t support Analysis Services as a data source. But now with the release of Radius90 Version 2 it does at last, so it’s review time.

Radius90 is one of those tools that has appeared to meet the need for an end-user friendly means of creating Reporting Services reports. BIDS is all very well for developers but you wouldn’t give it even to a power user; Report Builder, meanwhile, is a bit simplistic in terms of the reports it can generate, its UI is confusing, and its support for Analysis Services is very poor (see this white paper for a complete list of the limitations of Report Builder on an AS data source). Radius90 gives you a nice Office 2007-style interface that combines the power of BIDS with the kind of ease-of-use that Report Builder was aiming at. I like it a lot, and to be honest it’s hard to see how you could come up with a better tool for the job. It’s so good, in fact, that anyone who’s seen what Microsoft have got planned for Report Builder in RS2008 (see this entry on Brian Welcker’s blog or this webcast if you haven’t) will notice some very strong similarities with Radius90. I guess, in the long term, this could cause problems for 90 Degree, but they’ve got enough time before Katmai arrives and people start migrating to put some more distinguishing features in place; as it is, they already have some nice collaboration functionality, whereby you can reuse pieces of your own and other people’s reports via a peer-to-peer network, plus a good extensibility story.

What about their Analysis Services support? I found the following video which demos this and also acts as a good basic introduction to the report design process:

As you’ll see if you watch the video, MDX queries are generated with a wizard. While it’s easy to use, I’d have preferred to see a drag-and-drop interface where you can see the results your query returns as you build it – they’ve taken a much more relational database-type approach rather than an OLAPy one, but I guess it’s down to what your users are comfortable with and not a major issue anyway. Once the wizard has completed you don’t seem to be able to change the query except through editing the MDX (or at least I haven’t worked out how to, I could be wrong), which is a bit of a pain, although it’s pretty quick to delete a query and create a new one, and binding a query and the fields in it to a table is very easy to do. It doesn’t enforce the whole ‘only put measures on columns’ nonsense when you supply your own MDX, and I got all excited when I used a query which had a non-measures dimension on columns and it not only worked but gave me comprehensible column names, but as soon as I tried to crossjoin more than one dimension on columns it soon broke down… hohum. Of course the users that this tool is aimed at aren’t going to be writing their own MDX anyway, so again I’m not too bothered about this. I also noticed that query slicing has been implemented using subcubes rather than the WHERE clause, so any calculated members that rely on looking at the currentmember on a hierarchy that you’re slicing by won’t work properly – it’s a fairly common mistake and one that Mosha mentions here, but I’ve reported it to them and I’m sure it’ll get fixed pretty soon. It’s the only real showstopper I found, though, and in general the MDX it produces is clean and efficient. UPDATE: it turns out I was using a pre-release version and this problem has now been fixed – sorry…
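To illustrate the subcube-versus-WHERE point: a subselect doesn’t set the current member on the hierarchy you slice by, while a WHERE clause does, which is why currentmember-based calculations only behave with the latter. A minimal sketch, assuming standard Adventure Works member names:

```mdx
-- Subselect slicing: inside any calculation,
-- [Date].[Calendar Year].CURRENTMEMBER still returns the All member
SELECT [Measures].[Internet Sales Amount] ON 0
FROM (SELECT {[Date].[Calendar Year].&[2003]} ON 0
      FROM [Adventure Works])

-- WHERE clause slicing: the current member on Calendar Year really is
-- CY 2003, so currentmember-based calculations work as expected
SELECT [Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]
WHERE ([Date].[Calendar Year].&[2003])
```

Both queries return the same raw cell value here; the difference only shows up once a calculated member looks at the current member on the sliced hierarchy.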

Overall, then, Radius90 is definitely recommended. Its AS support is still a bit immature but even in its present state it’s still adequate for most tasks; perhaps going forward they can make sophisticated support for AS data sources one of their selling points over Report Builder?

Reporting Services, MDX and Aggregated Values

Undoubtedly the best time to find out about an undocumented change in functionality in a product is halfway through a demo of said functionality to a large group of people. This happened to me last week: I was teaching my MDX course (written on the last CTP of SP2) and had just finished my speech on all the rubbish aspects of Reporting Services/Analysis Services integration and was trying to show how Reporting Services automatically filtered out all but the lowest level of granularity of data from an MDX query (see Teo Lachev’s post here: http://prologika.com/CS/blogs/blog/archive/2006/02/08/853.aspx and Reed Jacobsen’s posts here: http://sqljunkies.com/WebLog/hitachiconsulting/archive/2006/08/07/22359.aspx and here: http://sqljunkies.com/WebLog/hitachiconsulting/archive/2006/08/04/22346.aspx for details) when I found that it wasn’t doing it any more. Later on I emailed Teo to ask if he knew anything about this change – he didn’t but he asked the RS dev team and they gave him some details (he then blogged about it here: http://prologika.com/CS/blogs/blog/archive/2007/06/10/aggregate-rows-and-sp2.aspx). So, a small victory for the people – Reporting Services now no longer tries to force its own aggregation functionality on you and you always see the full results of your query, unless you’re already using the RS Aggregate function in your reports. This is apparently in response to customer demand. Just be sure to check any existing RS/AS reports you’ve got in production to make sure they’re not displaying extra rows now!

I’m now hopeful that the message will get through about the other stupid restrictions that RS places on AS data sources, such as only being able to put the measures dimension on columns. I’ve not kept up with the changes in RS2008 as much as I should (Teo again has a good overview of what’s coming here: http://prologika.com/CS/blogs/blog/archive/2007/06/09/teched-2007-us-memoirs.aspx) but I’ve not heard that these restrictions will be lifted. The problem is of course that RS expects to do all the aggregation of data itself, but the RS dev team don’t seem to understand that if I’m using AS then I will have designed all my aggregation business logic into my cubes and dimensions and I couldn’t care less about what RS can do in this area (for an example of their mindset, see the somewhat patronising comments on this posting on Connect from last year: http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=125414).

While we’re talking about Katmai I might as well mention why I’ve not blogged about the new AS features in it – it’s because there aren’t all that many. You’ve probably already read Mosha’s post about the attribute relationship designer (http://sqlblog.com/blogs/mosha/archive/2007/06/07/katmai-june-ctp-attribute-relationships-tab.aspx); Vidas Matelis has covered the other cube design wizard changes here: http://www.ssas-info.com/VidasMatelisBlog/?p=22#more-22. More features will come in later CTPS but overall it’s really going to be all about performance improvements and manageability, and while I know a bit about some of the planned features it’s too early to blog about them because it’s too early to say what’s going to actually get included. Suffice to say that for the serious AS developer there are going to be some very welcome improvements but there won’t be anything that excites the marketing people.

Officewriter *could* be licensed for inclusion in Katmai

OK, finally a chance to do some proper blogging now that my session is over and I’ve got hold of a wireless connection. The conference is going well – I can’t believe they’ve got 2600 people to come for a Microsoft BI conference! It just goes to show that the avalanche is starting.
And now, some news. Yesterday Jeff Raikes announced in his keynote speech that Microsoft was acquiring Softartisans, which of course got everyone excited until they realised that what he meant to say was that they hadn’t bought the company, just licensed the technology for possible inclusion in Katmai Reporting Services. I blogged about Softartisans’ product, Officewriter, a while ago and have actually just written a white paper for them on using their components inside SSIS to create a batch reporting solution (similar to what you can do with SSRS and data-driven subscriptions) which should be available on their site any day now. Here’s the press release on it all:

Report Models White Paper

There’s a new white paper by Jordi Rambla (of Solid Quality Mentors, who I also do a lot of work with) on “Creating, Editing, and Managing Report Models for Reporting Services” available here:
I think Report Builder’s support for Analysis Services is even worse than the rest of Reporting Services’ support for Analysis Services – which means it’s pretty bad – but at least I now have a list of all of its quirks.

Reporting Services and Essbase White Paper

When I first heard that this was going to be possible I wondered whether there was any kind of hidden agenda here, but I made some enquiries and was assured that there wasn’t – apparently Hyperion customers had been asking for it. Pity Reporting Services is such a pain to use with multidimensional data sources…

Improving Performance of Analysis Services-Sourced Reporting Services Reports

This is something I picked up on the other week, when I was doing a job tuning some Reporting Services reports which were running off Analysis Services (there were other, more interesting findings but I’ll leave them for a later date). When you create MDX queries using the RS query builder, the queries look something like this:

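For illustration, here’s a sketch of the kind of query the RS query builder generates (Adventure Works names assumed, not the customer’s actual cube):

```mdx
SELECT NON EMPTY { [Measures].[Internet Sales Amount] } ON COLUMNS,
NON EMPTY { [Date].[Calendar Year].[Calendar Year].ALLMEMBERS }
  DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM [Adventure Works]
CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE,
  FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS
```

Note the long CELL PROPERTIES clause at the end – that’s the part in question here.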

My customer was creating some very large reports that were up to several hundred pages long, and in some cases they took over a minute to render. What I realised was that the above query contains a number of cell properties that aren’t actually needed, such as BACK_COLOR, FORE_COLOR, FORMAT_STRING, FONT_NAME, FONT_SIZE and FONT_FLAGS. They can safely be removed from the query, as follows:



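With the unneeded cell properties stripped out, an illustrative query (again using Adventure Works names) would read:

```mdx
SELECT NON EMPTY { [Measures].[Internet Sales Amount] } ON COLUMNS,
NON EMPTY { [Date].[Calendar Year].[Calendar Year].ALLMEMBERS }
  DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM [Adventure Works]
CELL PROPERTIES VALUE, FORMATTED_VALUE
```

VALUE and FORMATTED_VALUE are kept because reports typically bind to one or both of them; everything else can go.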
There’s no change to the results returned or to how most reports will behave (assuming you’re not somehow referencing these properties somewhere), and in my case it made a noticeable difference to the amount of time taken to render the reports – reports that previously took 60 seconds to run now took 50 seconds. I would guess that this is because, for very large queries, there’s a significant overhead involved in including all these unnecessary property values in the resultset.

Reporting Services and Server Aggregates

Recently I was contacted by Peter Koller from Norway, asking me about some bizarre behaviour he’d seen with calculated members disappearing from query resultsets in Reporting Services. I had a suspicion about why it was happening and came up with a workaround, but asked him to post it as a bug which he duly did:
Now much as I’m tempted, I’m not going to go off on another rant about the fundamental flaws in the way support for Analysis Services is implemented in Reporting Services. I’m going to seize on the glimmer of hope contained in the following sentence:
For a future release and maybe service pack, we are considering adding an explicit switch that allows treating server aggregate rows as "detail rows".
What, let Reporting Services actually display the results of your MDX query without adulteration? Sounds like a dangerously sane idea! I’d like to propose some community action (I’m currently in France so I must have become infected with Gallic militancy): can anyone who agrees with me that this feature should be in the next service pack leave a comment at the above link? Hopefully if a few comments get posted then it’ll help persuade the RS team to do something.


I had a good time at the PASS European conference last week – had a few drinks, met a lot of people, and learnt a lot too. One of the sponsors of the event was a new company who have a product for Reporting Services that I hadn’t heard of:
What their product does is offer a user-friendly way of creating Reporting Services reports. At this point you’re probably saying, hold on – isn’t that what Report Builder does? Well, yes, and even though in their marketing materials they go to great lengths to say that they aren’t competing with Report Builder (a great example of the ‘big lie’ theory applied) anyone with half a brain can see that they are. Which, of course, might be a big problem if Report Builder didn’t suck… well, maybe that’s harsh, but I’ve yet to meet anyone who actually likes it. Its big problem is its confusing UI, which is in turn linked to RB’s ability to create queries which span multiple tables – it’s not easy to convey table relationships, which fields it makes sense to display next to each other, and so on. Bob’s SQL Reporting Services blog has some good detail on this:
Having seen some demos of it (although I’ve not actually used it), I can say RSInteract has a slick AJAX-y UI, quite smart-looking and clear, so I’d put that down in its favour. It achieves this by not being anywhere near as ambitious as Report Builder – you can only write reports which are based on one table, view or stored procedure, as I understand it. On one hand this is quite limiting, and it does push work back to the IT guys to make sure all the data you want is available in that table/view/sp, but if you genuinely do want a tool that non-technical users can use then I think that’s a necessary sacrifice because it removes a lot of complexity.
One downer, at least from my point of view, is that I understand that their Analysis Services support isn’t ready yet, although it’s coming soon. Overall though, definitely worth checking out.

Brian Welcker on Analysis Services/Reporting Services integration

I’ve been quite vocal in my disapproval of the way that Analysis Services and Reporting Services integration has been handled over the last few months, so it’s only fair to publicise a posting on Brian Welcker’s blog which is I guess aimed at answering people like me:
I don’t accept some of the points he makes, though, and hopefully by the time you read this the comment I submitted will have gone through moderation and be visible.
However, I do think the fact that the problem has been addressed in this way highlights the openness of Microsoft’s development teams and the positive effect that their willingness to blog, post on newsgroups and so on has on customer satisfaction. Even though I’m still not satisfied with the functionality in question after reading Brian’s post, the feeling that I’m able to express my grievances and have them heard makes me happier than if I thought no-one at all was listening.

Parameterising by Measures in Reporting Services 2005

If you’ve been reading this blog for a while, you probably know that I’m not the greatest fan of the way that support for MDX has been implemented in RS2005. Anyway, here’s a little tip that might soften the pain…
In RS2005 you are always forced to put the Measures dimension on the columns axis in your queries if you’re using a data source of type ‘Microsoft SQL Server Analysis Services’ (you can still get the glorious AS/RS2K experience if you use an OLEDB connection instead along with the AS OLEDB Provider). While this is irritating at the best of times, it does have one serious consequence: how can you parameterise a query by Measures? It doesn’t look like the RS dev team thought this would be a valid scenario, but in fact in my experience there are plenty of times where you do want to do it. It is possible though, and here’s a solution:
First of all, you need to create a query that will give you a list of measures on rows. The following AdventureWorks example shows how to do this:

WITH
SET RowSet AS HEAD([Date].[Date].[Date].MEMBERS, COUNT(MEASURES.ALLMEMBERS))
MEMBER MEASURES.MeasureUniqueName AS
MEASURES.ALLMEMBERS.ITEM(RANK([Date].[Date].CURRENTMEMBER, RowSet) - 1).ITEM(0).UNIQUE_NAME
MEMBER MEASURES.MeasureDisplayName AS
MEASURES.ALLMEMBERS.ITEM(RANK([Date].[Date].CURRENTMEMBER, RowSet) - 1).ITEM(0).NAME
SELECT {MEASURES.MeasureUniqueName, MEASURES.MeasureDisplayName} ON 0,
RowSet ON 1
FROM [Adventure Works]

It relies on there being a level on a hierarchy somewhere in your cube that has more members on it than there are measures, so it’s a bit of a hack, but this isn’t a problem most of the time and is the easiest way of solving the problem in MDX. You just create a set with the same number of members in it as you have measures, put that on rows in your query, and then, using calculated measures, return the name/unique name of the measure which has the same rank in the set Measures.AllMembers as the currentmember on rows (Date.Date in this case) has in that set.
After you’ve created a new report parameter tied to this resultset you can create the query you want to parameterise. Once again the need to have measures on columns needs to be worked around – this time you need to create a calculated measure in your WITH clause, put that on columns, and then parameterise the definition of that calculated measure. Here’s an example:
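A sketch of what the parameterised query might look like – the parameter name @MyMeasure matches the one declared below, and STRTOMEMBER is used to turn the unique name passed in by the report parameter back into the real measure:

```mdx
WITH MEMBER MEASURES.SelectedMeasure AS
STRTOMEMBER(@MyMeasure)
SELECT {MEASURES.SelectedMeasure} ON 0,
[Date].[Calendar Year].[Calendar Year].MEMBERS ON 1
FROM [Adventure Works]
```

Because SelectedMeasure is just a calculated measure on columns, it keeps RS happy while the measure it actually returns is chosen at run time.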
Before this will work though, you need to declare the query parameter used by this query. To do this, click on the ‘query parameters’ button in the toolbar above where your MDX appears (it looks like an @ symbol overlaid on a table) and then fill in the following values:
  • Parameter is the name of your parameter, without the leading @ – so in our example it would be MyMeasure
  • Dimension needs to be left blank – the Measures dimension doesn’t appear on the dropdown list
  • Hierarchy needs to be blank too
  • Multiple Values needs to be left unchecked
  • Default must be entered, but you can use an expression such as MEASURES.DEFAULTMEMBER

Thanks go to my colleague Tony for working this last bit out.

You can then bind this query to a table in your report and select different measures to slice by. Hope this helps!