Speaking next week in Zurich

I do quite a lot of work in Switzerland: I lived in Basel for three years and still have a lot of friends and business contacts there as a result. I’ll be there next week, in fact, and while I’m there I’ll be speaking at the May meeting of the Swiss PASS chapter in Zurich. I’ll be doing the same session I did at PASS Europe last week (so no prep time needed, luckily) on ‘Designing Effective Aggregations in SSAS 2008’. All the details are here:

Hope to see some of you there…

SQLSentry Performance Advisor for Analysis Services

I’m currently in Germany at the PASS European Conference. Two days ago I ran a pre-con with Allan Mitchell on monitoring SSAS, SSRS and SSIS, an expanded version of the material I used at the PASS Summit last year; we talked about how you could build a monitoring solution for the MS BI stack yourself. For the last few months, though, I’ve also been talking to the guys at SQL Sentry – who are also here exhibiting – about their new product, Performance Advisor for Analysis Services, which has just been made public. I’ve had an in-depth demo and my first impression is that this is the first Analysis Services monitoring solution I would actually want to use; it looks really, really good. Here’s a screenshot:

I believe SQL Sentry have the first solution that will monitor the SQL Server relational engine, Analysis Services, SSIS and SSRS. As I’ve said before, I can’t believe it’s taken this long for a big third-party vendor to notice the MS BI monitoring market! You can find out more and sign up for the beta here:
I’ll be getting hold of a beta version myself soon, and I’ll blog in more detail then.

Of course, collecting this information is one thing – actually understanding what it tells us, and using that information effectively, is something else entirely. To be honest, it’s only after the prep work I did for my pre-con that I’ve started to think about this problem properly. Indeed, it’s only really when this kind of monitoring data is easily accessible that best practices for what to monitor can emerge; perhaps the SQLCAT team needs to write a white paper on this subject? In our pre-con Allan also did a few cool demos using data mining on perfmon data to predict when a server is going to ‘go pop’ (in his words), and I think there are a lot of interesting possibilities here too.
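To give a flavour of the kind of thing Allan was demoing, here’s a toy sketch of my own (not his code, and far simpler than real data mining): fit a straight line to a series of perfmon counter samples and extrapolate when the counter will cross a limit.

```python
# Toy sketch (my own illustration, not Allan's demo): fit a least-squares
# line to perfmon counter samples and estimate when the counter hits a limit.
def predict_crossing(samples, limit):
    """samples: list of (time, counter_value) pairs.
    Returns the extrapolated time at which the counter reaches 'limit',
    or None if the counter isn't trending upwards."""
    n = len(samples)
    sum_t = sum(t for t, _ in samples)
    sum_v = sum(v for _, v in samples)
    sum_tt = sum(t * t for t, _ in samples)
    sum_tv = sum(t * v for t, v in samples)
    denom = n * sum_tt - sum_t * sum_t
    if denom == 0:
        return None  # all samples at the same time: no trend
    slope = (n * sum_tv - sum_t * sum_v) / denom
    intercept = (sum_v - slope * sum_t) / n
    if slope <= 0:
        return None  # flat or falling: no predicted crossing
    return (limit - intercept) / slope

# e.g. memory usage sampled every 10 minutes, growing steadily
history = [(0, 100), (10, 120), (20, 140), (30, 160)]
print(predict_crossing(history, 200))  # crosses the limit at t = 50.0
```

A real solution would of course use proper time-series mining over many counters at once, but even a crude trend line like this is enough to raise an alert before the server actually goes pop.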

Using MDX to browse the file system

One very obscure feature of Analysis Services, which I’ve only ever seen documented in the books “Microsoft SQL Server Analysis Services 2005” and its successor “Microsoft SQL Server 2008 Analysis Services Unleashed” (both highly recommended for advanced SSAS users, by the way), is the fact that you can use MDX to browse the file system of the server that Analysis Services is running on, with some restrictions. Full details can be found on p804 of the 2008 book, but it neglects to give any real examples of how to do it; since it took me a few minutes to work out the correct syntax, I thought it was worth a blog post.

There are four MDX extensions you can use, and they can be executed from SQL Server Management Studio just like any other MDX statement.

  • SystemGetLogicalDrives returns a list of the drives on your AS box, with the amount of free space.
  • SystemGetSubdirs returns a list of directories under a specified path. This only actually works on directories that you have listed in the AllowedBrowsingFolders server property for Analysis Services (which you can find in SQL Server Management Studio by right-clicking on your instance name in the Object Explorer and clicking Properties) and their parents, so on my machine the following query returns only the Program Files directory:
    SystemGetSubdirs 'C:\'
  • SystemGetFiles returns all the files in a given directory, again only working for the directories listed in AllowedBrowsingFolders. For example, on my machine:
    SystemGetFiles 'C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Backup\'
    returns all the files in my backup directory, because that directory is automatically included in AllowedBrowsingFolders, whereas
    SystemGetFiles 'C:\'
    returns an empty result set.
  • SystemGetFileExists returns whether a file exists or not in a directory you’re allowed to browse. So, for example:
    SystemGetFileExists 'C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Backup\MyBackupFile.abf'
    will return 1 if the file MyBackupFile.abf exists, or 0 if it doesn’t.

Clearly this functionality is quite useful if you’re automating things like backups and you don’t have administrative rights to the server’s file system – although note that you do need to be an Analysis Services administrator to run these statements.
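The AllowedBrowsingFolders restriction boils down to a path allow-list check. Here’s a rough sketch of the rule as I understand it – my own illustration in Python, not Analysis Services internals:

```python
# Rough sketch (my own illustration, not AS internals) of the
# AllowedBrowsingFolders rule: a file-system query such as SystemGetFiles
# only returns results when the requested directory equals, or sits inside,
# one of the allowed folders.
def is_browsable(path, allowed_folders):
    """True if 'path' equals, or is a subdirectory of, an allowed folder.
    Windows paths are compared case-insensitively."""
    norm = path.rstrip("\\").lower()
    for folder in allowed_folders:
        base = folder.rstrip("\\").lower()
        if norm == base or norm.startswith(base + "\\"):
            return True
    return False

allowed = [r"C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Backup"]
# The backup directory itself is browsable...
print(is_browsable(r"C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Backup", allowed))  # True
# ...but the root of C:\ is not, which is why SystemGetFiles 'C:\' comes back empty
print(is_browsable("C:\\", allowed))  # False
```

This also matches the behaviour described above: SystemGetSubdirs will still show you the *parents* of an allowed folder (so you can navigate down to it), but SystemGetFiles only works once you’re inside the allow-list.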

Update Isolation Level connection string property

I recently came across a new connection string property, Update Isolation Level, mentioned briefly in BOL but with so few accompanying details that I had no idea how it could be used. Here’s what is currently documented:

From http://technet.microsoft.com/en-us/library/ms145488.aspx :


When updated cells do not overlap, the Update Isolation Level connection string property can be used to enhance performance for UPDATE CUBE. For more information, see ConnectionString.

From http://msdn.microsoft.com/en-us/library/microsoft.analysisservices.adomdclient.adomdconnection.connectionstring.aspx :

Update Isolation Level


Specifies whether the cells in a single UPDATE CUBE statement are isolated from each other. The default is not isolated.

Luckily Akshai Mirchandani and Marius Dumitru from the dev team were able to provide me with more details…

Basically, Update Isolation Level is useful when you are using writeback and you are writing values back to multiple cells within the same UPDATE CUBE statement. This point is important: you won’t see a benefit if you’re just writing back to one cell in your UPDATE CUBE statement, or if you’re issuing multiple single-cell UPDATE CUBE statements within a transaction. What it allows you to do is to say that each tuple within the SET clause in the UPDATE CUBE statement doesn’t overlap with any of the others; if that’s true, AS doesn’t need to do as much work at execution time and so the UPDATE CUBE statement runs faster.

Let’s take a look at some examples (I’ve used a simple cube I built from the Adventure Works relational database, so it’s not the Adventure Works sample cube itself). First of all, the following UPDATE CUBE statement contains two tuple assignments in the SET clause which overlap:

([Measures].[Sales Amount], [Order Date].[Date].&[20010701]
, [Product].[Product].&[1], [Customer].[Country - Customer].&[21383]) = 10,
([Measures].[Sales Amount]) = 10

The first tuple assigns the value 10 to Sales Amount for a particular Date, Product and Customer; the second assigns the same value to a tuple that contains just Sales Amount, and this tuple overlaps with the previous tuple because its value is the aggregation of all Sales Amounts for all Dates, Products and Customers. However, in the following example the two tuples do not overlap because they assign to Sales Amount for different Products:

([Measures].[Sales Amount], [Order Date].[Date].&[20010701]
, [Product].[Product].&[1], [Customer].[Country - Customer].&[21383]) = 2,
([Measures].[Sales Amount], [Order Date].[Date].&[20010701]
, [Product].[Product].&[486], [Customer].[Country - Customer].&[21383]) = 3
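The overlap rule itself is simple enough to sketch. Here’s a toy model of it in Python – mine, not the engine’s – where each tuple is a dictionary of dimension-to-member assignments, and leaving a dimension out means the All member (i.e. the cell aggregates over that whole dimension):

```python
# A toy model (my own illustration, not the AS engine) of when two tuple
# assignments in an UPDATE CUBE statement overlap. Each tuple is a dict of
# dimension -> member key; a dimension left out means 'All members'.
def tuples_overlap(a, b):
    """Two cells overlap unless some dimension pins them to different members."""
    for dim in set(a) | set(b):
        if dim in a and dim in b and a[dim] != b[dim]:
            return False  # explicitly different members: the cells are disjoint
    return True

# First example from the post: a leaf-level cell vs the grand total
leaf = {"Date": "20010701", "Product": "1", "Customer": "21383"}
grand_total = {}  # only the measure: All on every dimension
print(tuples_overlap(leaf, grand_total))  # True - the grand total aggregates the leaf

# Second example: same date and customer, but different products
other_product = {"Date": "20010701", "Product": "486", "Customer": "21383"}
print(tuples_overlap(leaf, other_product))  # False - no overlap
```

In other words, two cells are guaranteed not to overlap only when at least one dimension pins them to explicitly different members – which is exactly why the second example above is safe to treat as isolated.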

The Update Isolation Level connection string property has two possible values:

  • Update Isolation Level=1 means that each tuple update is guaranteed to be isolated, i.e. the tuples don’t overlap
  • Update Isolation Level=2 means that each tuple update is not guaranteed to be isolated

So, in theory, for the second example above, if you set Update Isolation Level=1 in the connection string the statement should run faster. I couldn’t see any difference in my limited testing, but I’m assured that in more complex scenarios there are improvements – how much will depend on the amount of overlap, the structure of the cube, and so on.
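If you want to try it yourself, the property goes on the ADOMD.NET connection string like any other connection string property. For example (the server and database names here are just placeholders, not anything from my test cube):

```
Data Source=MyServer;Catalog=MyWritebackDatabase;Update Isolation Level=1
```

Remember the promise you’re making with the value 1: it’s up to you, not the engine, to guarantee that the tuples in your UPDATE CUBE statement really don’t overlap.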

No BI Conference this year

Something else I’m a bit late on, having been away on holiday this week, is the announcement that there won’t be a Microsoft BI Conference this year:

Apparently the same customers who told Microsoft to kill PerformancePoint Planning have also been saying that they should only hold the BI Conference every other year. As an aside, did these marketing people ever get told on their MBAs not to treat their customers like idiots? Is there anyone out there who still falls for that ‘our customers have told us’ line? Surely the whole point of blogging – even when it’s done by a marketing person – is that it’s meant to be (or at least appear to be) an honest, person-to-person form of communication, and it doesn’t work when you simply reuse the same content you would have put in a press release?

Anyway, while I agree with many of the points Marco makes about a separate BI Conference being necessary, the decision makes my life easier in one way: I no longer have to choose between attending the PASS Summit and the BI Conference. PASS is the conference to attend and will have extra BI content; I’ve already submitted one abstract and will think of some more potential topics to talk about (any suggestions?), so with a bit of luck I’ll see you there.

SSAS2008 and Kerberos

Kerberos is notoriously difficult to set up – it’s something I’ve always left to someone else to do! Anyway, I’ve just come across a very detailed white paper from Boyan Penev and Mark Dasco about setting up Kerberos for Microsoft BI which I thought was worth linking to for future reference:

However, I’ve heard recently from a number of sources (Dr John Tunnicliffe; Dan English) about problems getting Kerberos to work with Windows Server 2008 or Vista, and this post from John Desch at MS confirms the bug – worse, it doesn’t seem to be fixed in SP1 (which was released this week):

One to watch out for…