Speaking at the PASS Summit 2009

The agenda for the PASS Summit 2009 has been announced, and I’m pleased to say that I’ve been accepted as a speaker. A full listing of who’s speaking can be found here:
http://summit2009.sqlpass.org/Agenda/ProgramSessions.aspx
http://summit2009.sqlpass.org/Agenda/SpotlightSessions.aspx

I’ll be doing a session on ‘Designing Effective Aggregations in Analysis Services 2008’. Hope to see some of you there!

Error messages in MDX SELECT statements and what they mean

Anyone who has tried to learn MDX will know that, when you make a mistake somewhere in your code, the error messages that Analysis Services gives you are pretty unhelpful. It was suggested to me recently while I was teaching an MDX course that I should blog about common error messages and what they actually mean; so here’s a list of a few example queries using Adventure Works that return confusing errors, the error messages themselves, and details on how to solve the problems. I’ve deliberately concentrated on query-related errors rather than calculation-related errors (that can be a future blog post); if you can think of any more errors that I should cover please leave a comment.

1) Query causing error:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS 
[Date].[
Calendar Year].MEMBERS ON ROWS
FROM [Adventure Works]

Error message: Query (3, 1) Parser: The syntax for ‘[Date]’ is incorrect.

The first step to solving this fairly simple syntax error is understanding the values in brackets in the error message. (3,1) indicates that the error is at character 1 on the third line of the query, where we have the expression [Date].[Calendar Year].MEMBERS; we should also see a red squiggly underneath this text in SQL Management Studio. There’s nothing wrong with this expression though, apart from the fact that it’s in the wrong place: what has happened is that we’ve forgotten to include a comma after COLUMNS immediately beforehand. If we put one in, the query runs.

Solution:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS, 
[Date].[
Calendar Year].MEMBERS ON ROWS
FROM [Adventure Works]

 

2) Query causing error:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS,
[Date].[Calendar].[Calendar Year].MEMBERS.CHILDREN ON ROWS
FROM [Adventure Works]

Error message: Query (3, 1) The CHILDREN function expects a member expression for the 1 argument. A tuple set expression was used.

This is a very common error that people encounter while learning MDX, and it all comes down to understanding the difference between sets, tuples and members. In a lot of situations Analysis Services is very forgiving: for example, if it expects a set and you give it a single member, it will cast that member into a set containing one item. It can’t do this for you all the time, though, and you do need to understand what kind of object each function returns and/or expects as a parameter. In this case, the problem is that the .CHILDREN function needs to be passed a member and the .MEMBERS function returns a set (strictly speaking, as the error says, it’s a set of tuples); therefore we can’t use the two functions together. If we want to find all of the children of all years, we can use the DESCENDANTS function instead, which can accept a set as its first parameter.

Solution:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS,
DESCENDANTS(
[Date].[Calendar].[Calendar Year].MEMBERS
, [Date].[Calendar].[Calendar Semester])
ON ROWS
FROM [Adventure Works]

 

3) Query causing error:

SELECT
[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount] ON COLUMNS,
[Date].[Calendar].[Calendar Year].MEMBERS
ON ROWS
FROM [Adventure Works]

Error message: Parser: The statement dialect could not be resolved due to ambiguity.

Analysis Services supports no fewer than three query languages: MDX, DMX and a very limited subset of SQL. As a result, when you run a query it needs to work out which query language you’re using, and it can easily get confused if you make a mistake. In the query above we’ve given a list of the two measures we want to see on the columns axis, but we’ve forgotten to surround this list with braces to turn it into a set – and it’s a set that is required for the axis definition. This is an error commonly made by people with a background in SQL, and indeed the problem here is that the mistake has made the query look a bit too much like SQL or DMX. Putting in braces where they’re needed fixes the problem and removes the ambiguity.

Solution:

SELECT
{[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount]} ON COLUMNS,
[Date].[Calendar].[Calendar Year].MEMBERS
ON ROWS
FROM [Adventure Works]

4) Query causing error:

SELECT
{[Measures].[Internet Sales Amount1], [Measures].[Internet Tax Amount]} ON COLUMNS,
[Date].[Calendar].[Calendar Year].MEMBERS
ON ROWS
FROM [Adventure Works]

Error message: Query (2, 2) The member ‘[Internet Sales Amount1]’ was not found in the cube when the string, [Measures].[Internet Sales Amount1], was parsed.

A fairly straightforward error this: we’ve tried to reference a member that doesn’t exist in our query – it’s the extra 1 on the end of the name that’s the problem. The way to avoid this is to always let Analysis Services generate unique names for you, and you can do this by dragging the member (or any other object) from the metadata pane in SQL Management Studio into the MDX query pane when you’re writing queries. Here, using the correct member unique name solves the problem.

Solution:

SELECT
{[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount]} ON COLUMNS,
[Date].[Calendar].[Calendar Year].MEMBERS
ON ROWS
FROM [Adventure Works]

Note that for dimensions other than the Measures dimension, what happens in this scenario depends on how you’ve set the MDXMissingMemberMode property. By default if you write something that looks like it could be an MDX unique name, but which isn’t actually the unique name of a member on a hierarchy, Analysis Services will simply ignore it. So the following query returns nothing on rows because the year 2909 doesn’t exist in our Calendar hierarchy:

SELECT
{[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount]} ON COLUMNS,
{[Date].[Calendar].[Calendar Year].&[2909]}
ON ROWS
FROM [Adventure Works]

And worse, in this query a genuine syntax error is completely ignored too:

SELECT
{[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount]} ON COLUMNS,
[Date].[Calendar].[Calendar Year].MAMBERS
ON ROWS
FROM [Adventure Works]
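
Incidentally, if you’d rather Analysis Services raised an error in these situations instead of silently returning nothing, my understanding is that you can change the MDXMissingMemberMode property of the dimension in BI Development Studio:

MDXMissingMemberMode = Error

With this set, I believe both of the queries above would then fail with a missing-member error rather than run and return an empty result.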

Microsoft .NET Provider for Oracle dead

Products die, come back to life, die again… Anyway, those of you building cubes from Oracle data sources may be interested to know that the Microsoft .NET provider for Oracle has been killed off. Here’s the official announcement-disguised-as-a-blog-entry:
http://blogs.msdn.com/adonet/archive/2009/06/15/system-data-oracleclient-update.aspx

Yet again, it’s those persuasive “customers, partners, and MVPs” who are asking for the product to be canned. I just hope that there aren’t any “customers, partners, and MVPs” who are asking for Steve Ballmer to sell Microsoft to Google for $1, because then we’ll all be in trouble… Anyway, needless to say, not everyone is pleased:
http://www.theregister.co.uk/2009/06/18/microsoft_kills_oracle_connector/

One blog entry I’ve had half-written for a long time is a discussion of the problems faced when building cubes from Oracle data sources (thanks for your help on this David; anyone else with any hints, please leave a comment). There are a lot of bugs and data type issues that need to be dealt with, and from what I’ve gathered using the Microsoft .NET provider was part of how they could be worked around. 

PerformancePoint Planning – Back from the Dead!

Here’s the official announcement from http://www.microsoft.com/bi/partners/default.aspx :

Financial Planning Accelerator

Microsoft is pleased to make available the Financial Planning Accelerator (FPA). The FPA is source code and project files derived from the PerformancePoint Server 2007 Planning module. Based on requests from customers and partners, we are making this code available on a no-cost, individual license.
This is unsupported source code that customers and partners can use to support or change PerformancePoint Server Planning functionality. Derived object code files can be distributed to end users with Microsoft SharePoint Server Enterprise Client Access Licenses. To obtain access to the FPA a license agreement between Microsoft and the customer or partner is required. After that agreement is in place, download instructions will be made available.
Please e-mail fpasupp@microsoft.com to request the agreement.

It’s not exactly open source, but it does mean that the partners who were hit hardest when PerformancePoint Planning was killed off can now get their hands on the source code, modify it and sell it on to their customers so long as those customers have the right Sharepoint licences. The question is now, will anyone take Microsoft up on this offer?

Implementing Analysis Services Drillthrough in Reporting Services

For some reason I’ve never needed to implement Analysis Services drillthrough (note: not the same thing as Reporting Services drillthrough; why can’t they use consistent terminology?) in a Reporting Services report. Of course, Reporting Services support for Analysis Services being what it is, it’s not a straightforward task and since I’ve recently come across a few good blog posts that discuss the different ways you can do it I thought I’d link to them.

The main problem is that you can’t execute an MDX Drillthrough statement using the MDX query designer and the Analysis Services data source. You have four options then:

  1. You can execute the Drillthrough statement through an OLEDB data source instead. Gurvan Guyader shows how to do this in the following blog entry (in French, but with lots of screenshots):
    http://gurvang.blogspot.com/2009/05/drillthrough-ssas-dans-ssrs.html
    The problem with using an OLEDB data source is that you lose the ability to use parameters and have to use Reporting Services expressions to dynamically build your Drillthrough statement instead.
  2. It turns out you can also execute a Drillthrough statement by pretending it’s DMX, and so use regular MDX parameters, as Francois Jehl describes here (also in French):
    http://fjehl.blogspot.com/2009/06/drillthrough-ssas-dans-ssrs-ajout-au.html
  3. If you buy Intelligencia Query (which, as always, I need to state that I have a financial interest in) then Drillthrough statements now work with no tricks necessary:
    http://andrewwiles.spaces.live.com/Blog/cns!43141EE7B38A8A7A!562.entry
  4. Last of all, you can try not using a Drillthrough statement at all and instead use an MDX query to get the same data. You will lose some functionality by doing this, however, most notably the MAXROWS option.
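
For reference, a Drillthrough statement itself looks something like the following (a sketch against Adventure Works; the optional MAXROWS clause is what caps the number of detail rows returned, and the detail rows come from the measure group behind the first cell that the SELECT returns):

DRILLTHROUGH MAXROWS 10
SELECT {[Measures].[Internet Sales Amount]} ON 0
FROM [Adventure Works]
WHERE ([Date].[Calendar].[Calendar Year].&[2003])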

Google Fusion Tables

Well, well, well… another week, another BI-related announcement from Google. Jamie Thomson just brought my attention to Google Fusion Tables, which got released this week with almost no fanfare (maybe Google wanted to avoid the kind of backlash they got with Google Squared?). Jamie’s first comment was pretty much in line with what I thought: this looks a lot like a basic version of Gemini, or indeed any other DIY BI tool. Basically you upload data, you can filter it, aggregate it, edit it and even join datasets together; then you can format the results as tables, maps, charts and so on and share the results with other people. You can find out more about how it works here:
http://tables.googlelabs.com/public/faq.html
http://googleresearch.blogspot.com/2009/06/google-fusion-tables.html

So, even though I’ve got loads to do today I had to check it out, didn’t I? Google provide a number of different free datasets for you to play with, but I thought I’d have a go with some data about the hot topic of the moment here in the UK: MPs’ expenses. This data is available in Google spreadsheet form – ideal for loading into Fusion Tables – from the Guardian data store site:
http://www.guardian.co.uk/news/datablog/2009/may/08/mps-expenses-houseofcommons

After a bit of trial and error (and Fusion Tables is definitely prone to errors – although of course it is a beta) I managed to create a view that shows the average value of MPs’ expense claims, excluding travel expenses, as a bar chart. I’m supposed to be able to share it here and I’ve got the HTML, but at the time of writing I can’t get the gadget to embed in this blog post. When I do, I’ll update this post to include it. In the meantime here’s a screenshot:

image  

Nevertheless, it’s fun even if it’s not quite a useful business tool yet. But hmmm… is it just me or does Google have some kind of BI strategy?

UPDATE: this article has a little more detail on the technology behind it:
http://www.itworld.com/saas/69183/watch-out-oracle-google-tests-cloud-based-database
although I think it’s a bit premature to say that this is going to kill Oracle, Microsoft and IBM…

Google Wave, Google Squared and Thinking Outside the Cube

So, like everyone else this week I was impressed with the Google Wave demo, and like everyone else in the BI industry had some rudimentary thoughts about how it could be used in a BI context. Certainly a collaboration/discussion/information sharing tool like Wave is very relevant to BI: Microsoft is of course heavily promoting Sharepoint for BI (although I don’t see it used all that much at my customers, and indeed many BI consultants don’t like using it because it adds a lot of extra complexity) and cloud-based BI tools like Good Data are already doing something similar. What it could be used for is one thing; whether it will actually gain any BI functionality is another and that’s why I was interested to see the folks at DSPanel not only blog about the BI applications of Wave:
http://beyondbi.wordpress.com/2009/06/01/google-wave-the-new-face-of-bi/
…but also announce that their Performance Canvas product will support it:
http://www.dspanel.com/2009-jun-02/dspanel-performance-canvas-adds-business-intelligence-to-google-wave/
It turns out that the Wave API (this article has a good discussion of it) makes it very easy for them to do this. A lot of people are talking about Wave as a Sharepoint-killer, and while I’m not sure that’s a fair comparison I think it’s significant that DSPanel, a company that has a strong history in Sharepoint and Microsoft BI, is making this move. It’s not only an intelligent, positive step for them, but I can’t help but wonder whether Microsoft’s encroachment onto DSPanel’s old market with PerformancePoint has helped spur them on. It’s reminiscent of how Panorama started looking towards SAP and Google after the Proclarity acquisition put them in direct competition with Microsoft…

Meanwhile, Google Squared has also gone live and I had a play with it yesterday (see here for a quick overview). I wasn’t particularly impressed with the quality of the data I was getting back in my squares though. Take the following search:
http://www.google.com/squared/search?q=MDX+functions#
The first results displayed are very good, but then click Add Next Ten Items and take a look at the description for the TopCount function, or the picture for the VarianceP function:
squared

That said, it’s still early days and of course it does a much better job with this search than Wolfram Alpha, which has no idea what MDX is and won’t until someone deliberately loads that data into it. I guess tools like Google Squared will return better data the closer we get to a semantic web.

I suppose what I (and everyone else) like about both of these tools is that they are different, they represent a new take on a problem, unencumbered by the past. With regard to Wave, a lot of people have been pointing out how Microsoft could not come up with something similar because they are weighed down by their investment in existing enterprise software and the existing way of doing things; the need to keep existing customers of Exchange, Office, Live Messenger etc happy by doing more of the same thing, adding more features, means they can’t take a step back and do something radically new. Take the example of how, after overwhelming pressure from existing SQL Server users, SQL Data Services has basically become a cloud-based, hosted version of SQL Server with all the limitations that kind of fudge involves. I’m sure cloud-based databases will one day be able to do all of the kind of things we can do today with databases, but I very much doubt they will look like today’s databases just running on the cloud. It seems like a failure of imagination and of nerve on the part of Microsoft.

It follows from what I’ve just said that while I would like to see some kind of cloud-based Analysis Services one day, I would be more excited by some radically new form of cloud-based database for BI. With all the emphasis today on collaboration and doing BI in Excel (as with Gemini), I can’t help but think that I’d like to see some kind of hybrid of OLAP and spreadsheets – after all, in the past they were much more closely interlinked. When I saw the demos of Fluidinfo on Robert Scoble’s blog I had a sense of this being something like what I’d want, with the emphasis more on spreadsheet than Wiki; similarly when I see what eXpresso is doing with Excel collaboration it also seems to be another part of the solution; and there are any number of other tools out there that I could mention that do OLAP-y, spreadsheet-y type stuff (Gemini again, for example) that are almost there but somehow don’t fuse the database and spreadsheet as tightly as I’d like. Probably the closest I’ve seen anyone come to what I’ve got in mind is Richard Tanler in this article:
http://www.sandhill.com/opinion/daily_blog.php?id=45
But even then he makes a distinction between the spreadsheet and the data warehouse. I’d like to see, instead of an Analysis Services cube, a kind of cloud-based mega-spreadsheet, parts of which I could structure in a cube-like way, that I could load data into, where only I could modify the cube-like structures containing the data, where I could define multi-dimensional queries and calculations in an MDX-y but also Excel-y  and perhaps SQL-y type way – where a range or a worksheet also behaved like a table, and where multiple ranges or worksheets could be joined, where they could be stacked together into multidimensional structures, where they could even be made to represent objects. It would also be important that my users worked in essentially the same environment, accessing this data in what would in effect be their own part of the spreadsheet, entering their own data into other parts of it, and doing the things they love to do in Excel today with data either through formulas, tables bound to queries, pivot tables or charts. The spreadsheet database would of course be integrated into the rest of the online environment so users could take that data, share it, comment on it and collaborate using something like Wave; and also so that I as a developer could suck in data in from other cloud-based data stores and other places on the (semantic) web – for example being able to bind a Google Square into a range in a worksheet.

Ah well, enough dreaming. I’m glad I’ve got that off my chest: some of those ideas have been floating around my head for a few months now. Time to get on with some real work!

Upcoming BI User Group Events

Two UK SQL Server user group dates to flag up for anyone with an interest in BI:

Unfortunately I’m going to be out of the country on the 10th, but it looks like it will be a good evening…

BeginRange and EndRange connection string properties

Using the Timeout connection string property is a good way of making sure that your queries don’t run for too long, but sometimes – for example when you’re using SSRS – you want to restrict the amount of data that a query returns. You can’t properly do this with Analysis Services, but it is almost possible…

Consider the following query on Adventure Works:

SELECT
{[Measures].[Internet Sales Amount],[Measures].[Internet Tax Amount]}
ON 0,
[Date].[Date].[Date].MEMBERS
ON 1
FROM [Adventure Works]

It returns 1189 rows and 3 columns. If you click on any of the cells containing data in SQL Management Studio, to see the cell properties, you’ll see that the CellOrdinal property contains the index of each cell in the cellset. So the top left hand cell is ordinal 0, the one to its right is 1, and so on until the last column where it starts again one row down:

image

Using the BeginRange and EndRange connection string properties, you can limit the cells in a cellset that actually get populated with data. Note that you can’t restrict the overall number of cells though, which would be more useful. Both these properties take an integer value which represents a cell ordinal: BeginRange is the first cell ordinal you want to contain data, EndRange is the last cell ordinal. Their default value is –1, which for BeginRange means start at the first cell ordinal and for EndRange means end at the last cell ordinal. So, for example, with BeginRange=4 and EndRange=7, running the query above would give the following output:
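
For reference, here’s what a connection string with these properties set as in the example above might look like (the server and database names here are just placeholders):

Provider=MSOLAP;Data Source=localhost;Initial Catalog=Adventure Works DW 2008;BeginRange=4;EndRange=7

In SQL Management Studio you can also set them in the Additional Connection Parameters tab of the connection dialog.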

image

As I said, the overall number of cells in the cellset remains the same, but only the cells in the range we specified actually contain data. This ‘filtering’ happens after the query axes have been resolved, as far as I can see, so adding NON EMPTY on Rows for example does not filter out any of the empty rows. If you were using SSRS, however, you could do this filtering at the DataSet level.
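
To illustrate that last point, with BeginRange and EndRange set as above, the following query – the earlier query with NON EMPTY added on rows – still returns rows outside the specified range, even though their cells come back empty:

SELECT
{[Measures].[Internet Sales Amount],[Measures].[Internet Tax Amount]}
ON 0,
NON EMPTY
[Date].[Date].[Date].MEMBERS
ON 1
FROM [Adventure Works]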

If you look in Profiler you’ll see that these properties have an effect on the amount of work SSAS does at query time. On a cold cache, with no BeginRange and EndRange set, the query scans all of the year partitions in the Internet Sales measure group as you would expect. But with BeginRange and EndRange set as above, on a cold cache SSAS only reads data from the 2001 partition.

BTW, remember that if you’re experimenting with these connection string properties in SQLMS, when you’re finished you’ll need to either close and reopen SQLMS or set BeginRange=-1 and EndRange=-1 as a result of this bug (which still doesn’t seem to be fixed in SP1).

Yet more Gemini demos dissected

Some more Gemini demos have appeared on the BI Blog, with more new Gemini features revealed, so let’s step through them and see what we can see…

  • 2:45 Nothing much so far we haven’t seen already. However the toolbars are much easier to see in this video and the first thing pointed out is the list of tables loaded into Gemini listed at the bottom of the screen.
  • 3:15 We can also see a lot more of the toolbar at the top here too. In the ribbon we can see the following areas:
    • New Table, with buttons to import new tables from a database or from the clipboard
    • Table Tools, with buttons to create relationships between tables and to manage relationships. So far I’m getting a very strong feeling of relational database concepts coming through – ok if the Gemini user is familiar with them (perhaps through Access), but is it asking too much of a user to think in terms of tables and joins?
    • Columns Tools. Can’t see much here, but we saw a bit earlier in the demo that on the far right hand side of the data area it seems you can add new columns onto the end of the table, and the buttons here allow you to manage these columns, delete them, resize them etc.
    • Sort and Filter, pretty much self-explanatory
    • Calculations. Again can’t see much here, but the button at the top says Manual. I wonder if it’s going to give you the option to either automatically apply all your calculations, or only apply them when you press a button (in case calculation takes a long time).
    • View. The options here are Pivot Table and Switch to Excel. I guess the demo is done in some kind of Table view, and we’ll have the option to view the data instead in a pivot table or go to Excel and work with the data there in the way we would with any other external data source.
  • 4:17 The now inevitable OOHHH moment in a Gemini demo where we see 20 million rows of data being manipulated in memory. Of course, though, the amount of data we can work with will not only depend on how much memory we have but also how well it can be compressed. From what I understand of COP databases like Gemini, you get great compression because it only stores the distinct values held in each column; but if your data contains a lot of different values then you won’t be able to compress it as much and you won’t be able to work with as much of it. I think.
  • 4:46 And not wishing to sound like Mr Sceptical, but watching all these demos of sorting and filtering large amounts of data very quickly raises a question in my mind: are all the rows in the table actually sorted and filtered, or does Gemini just do enough sorting and filtering to fill the screen? Finding the top 30 or so rows out of 20 million based on a value is certainly impressive, but it’s not the same as sorting all those 20 million rows.
  • 6:23 The Manage Relationships dialog. Again, very relational and strangely non-visual as well; I’d have expected a graphical representation of the two tables joined, just like you’d get in any other database tool. Maybe it’s not ready yet though.
  • 6:55 Looks like our first sight of DAX. The expression is:
    sumx(RELATEDTABLE(Purchase), Purchase[PurchaseSourceId])
    Hmm, again seems more like a SQL expression (a sum/inner join) translated to Excel rather than anything resembling MDX. It does the calculation very quickly although it’s the first time something has been less than instant.
  • 0:25 We’re in Excel now, using a pivot table, but notice that on the right-hand side we have the ‘Gemini task pane’ so perhaps it’s not a regular pivot table?
  • 2:48 Create Relationship dialog. Again it doesn’t seem very graphical, and notice the use of relational database terminology again with the mention of primary keys and foreign keys; for someone who is used to working with databases this is fine, the obvious term to use, but are these concepts we should expect Gemini users to understand? Shouldn’t things be less technical, more user friendly?
  • 2:59 Interesting that creating a relationship takes a few seconds and some crunching to do. I wonder what’s going on here exactly? Cube reprocessing?
  • 3:43 Show Values As menu option – ok, this is what you get in Excel anyway, but am I right in thinking there are a lot more options here now than are available in 2007? Maybe I’m wrong, but this all seems to be Excel calculations rather than calculations happening in Analysis Services.
  • 8:15 The Excel workbook containing this data is 203MB – interesting, because although Gemini is in-memory, it’s clearly possible to persist the data to disk if it’s being stored inside the workbook somehow.

One last point prompted by all the relational database-related terms we’ve seen: if I was a pure SQL Server relational database guy, with no interest in Analysis Services, I’d still like to get my hands on Gemini and use it server side if it’s this quick. Which goes back to a point I’ve made before in the past that if Analysis Services could be used inside SQL Server as an invisible layer to speed up the execution of data warehouse/BI style TSQL queries, in the same way as Oracle OLAP can be, it would be very cool. Just think of that working with Madison, in fact…