Ninth Blog Birthday

This is the ninth anniversary of the first post on my blog, and every year at this time I take a moment to reflect on what’s happened in the last twelve months in my professional life and in the world of Microsoft BI.

Without a doubt 2013 has been the year of Power BI. It’s true we’ve had Power Pivot for a while now, but in my opinion the combination of Excel 2013, Power Query, Power Map, Power BI Sites and Q&A is a much stronger proposition for customers interested in self-service BI; I’ve already blogged at great length about what I think are the strengths and weaknesses of Power BI (see here, here and here) so I won’t repeat myself here. As you would expect Microsoft marketing has gone into overdrive to promote it (have you entered the Power BI competition yet?) and it will be in 2014 that we see whether Power BI is a success or not. What will success look like though, if it comes? It won’t be lots of customers lining up to buy Power BI in the way they’ve bought traditional BI solutions, I think: instead it will be organisations that have already signed up for Office 365 being upsold to Power BI based on their existing commitment to the Office platform. This presents a number of challenges to someone like me who makes a living as an independent consultant and trainer.

At the moment more than 90% of my consultancy income comes from SSAS and MDX, but the overall percentage of my time that I spend doing consultancy has reduced over the last few years to about 60%. This is partly the result of SSAS and MDX skills becoming more widespread; partly due to the fact that I’ve been promoting my public and private training more aggressively; and possibly due to fewer new SSAS projects kicking off. In the future I expect this trend to continue. Just how much consultancy will be necessary in the world of self-service Power BI solutions remains to be seen, but it’s going to be less than is necessary for corporate SSAS solutions and the rates will probably be lower too.

For the same reason, though, the demand for all forms of training for Power BI will almost certainly be much greater. That’s why I’ve been scheduling more public training courses through Technitrain; why I’ve signed up to write a book on Power Query next year; and why I’ve started recording video training courses with Project Botticelli (there’s a new video available there, by the way, on set and member functions). If I’m honest I prefer doing consultancy to training and I don’t think you can be a really good trainer if you don’t have a substantial amount of practical experience gained from consultancy, so I’m going to have to make a special effort to maintain a balance between the two.

Speaking at conferences and user groups is an aspect of my work that I’ve always really enjoyed, and I’m pleased to say that I’ll be speaking at the PASS BA Conference next year for example. I’m also still involved with SQLBits but please, please don’t ask me when the next SQLBits will be – we don’t have anything to announce yet but I can assure you we are working on it and I promise there will be one in 2014. I won’t be speaking at quite so many events as I have done in the past however. I travel a lot for work and this makes it hard to justify spending even more time away from my family, especially at weekends, so I’ve made a conscious decision to cut down on my speaking engagements. The thing is that the number of SQL Server events has increased a lot in the last couple of years and this has led to an increased number of invitations to speak, and I’m one of those people who finds it hard to say no when someone asks me to do something. I’m just going to have to be a bit more choosy from now on, and concentrate on events close to home, events that coincide with business trips and online sessions.

All that remains is to wish you all a happy and prosperous 2014, and to thank you for reading my blog! This time next year I’ll have been blogging for ten years, and that’s a scary thought…

Now(), The Formula Engine Cache And The Where Clause

Back in 2009 I blogged about how the use of the Now() function inside calculated members prevents the results of those calculations being cached for longer than the lifetime of a query. It might be worth rereading that post before you carry on to get some background on the problem here:
https://cwebbbi.wordpress.com/2009/09/10/now-and-the-formula-cache/

Today I had an interesting email from a customer (thank you, Per!) showing me something I didn’t know: that using the Now() function in the Where clause of a query has the same effect.

Here’s the repro. On the MDX Script of the Adventure Works cube, I created the following calculated measure:

CREATE MEMBER CURRENTCUBE.MEASURES.TEST AS [Measures].[Internet Sales Amount] + 1;

Now consider the following query:

SELECT {MEASURES.TEST} ON 0,
{[Date].[Day Name].&[1]} ON 1
FROM [Adventure Works]
WHERE(
STRTOMEMBER("[Date].[Calendar Year].&[" + "2003" + "]")
)

The first time the query is run you can see in Profiler the SSAS Storage Engine retrieving values; the second time it’s run you can see the values for MEASURES.TEST being returned from the Formula Engine cache.

Now consider this second query:

SELECT {MEASURES.TEST} ON 0,
{[Date].[Day Name].&[1]} ON 1
FROM [Adventure Works]
WHERE(
STRTOMEMBER("[Date].[Calendar Year].&[" + cstr(Year(Now())-10) + "]")
)

It returns the same result as the previous query (at least so long as the current year is 2013), but you can see in Profiler that the second time the query is run, once again the Storage Engine cache is queried and the Formula Engine cache is not used:

[Profiler trace screenshot]

So clearly the use of the Now() function in the Where clause is enough to prevent the use of global scope by the Formula Engine cache, and some limited testing suggests the same applies for other non-deterministic functions like Username() – which is hardly surprising. Something to watch out for, then…
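For anyone who wants to check the Username() case for themselves, here’s a minimal sketch of the kind of query you could use – this is just an illustration, not the exact query from my testing. Left(Username(), 0) returns an empty string, so the member reference still resolves to the 2003 member, but because the expression now involves Username() you would expect to see the Storage Engine being queried on every execution rather than the Formula Engine cache being reused:

SELECT {MEASURES.TEST} ON 0,
{[Date].[Day Name].&[1]} ON 1
FROM [Adventure Works]
WHERE(
// Left(Username(), 0) appends an empty string, so this is still the 2003 member
STRTOMEMBER("[Date].[Calendar Year].&[2003]" + LEFT(USERNAME(), 0))
)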

Technitrain 2014 Course Schedule: Power BI, Power Pivot, SSAS, MDX and more

I’m pleased to announce that all of the courses I’m running in London next year are now live on the Technitrain site. I’m teaching a lot of them, of course, but I’ve also got quite an impressive array of other trainers who’ll be teaching too: Allan Hirt, Klaus Aschenbrenner, Jamie Thomson, Bob Phillips, Andy Leonard and Thomas Kejser. If you’re looking for top quality SQL Server education then you know where to come!

Here’s the full list of courses:

 

BI Survey 13

Just a quick post to say that the BI Survey 13 is now open, and its authors would like your feedback about the BI tools you’re using. Here’s the link:
https://digiumenterprise.com/answer?link=1358-CTDDUWQN

It’s worth doing because you’ll get a summary of the results and will get entered in a draw for some Amazon vouchers; and of course you get to make sure that lots of nice things are said about the Microsoft BI stack! I’ll blog about the results later in the year when they come out – the findings are always very interesting.

PASS Summit 2011–Day 3 Keynote

The third and final keynote at the PASS Summit, and this morning I’ve been given a space on the official blogger table at the keynote! Actually this just means I’ve got a table to rest my laptop on and access to a power strip, but it’s an honour nonetheless.

There are several things that I saw yesterday that are worth mentioning. Probably the most interesting session was from Cathy Dumas about Tabular: among other things she demoed a DAX Editor plugin for SQL Server Data Tools that is going to make everyone’s life soooo much easier; it will give us something like an MDX Script editor, intellisense, colour coding and so on. She has blogged about it here and I can’t wait for it to be made available. Also I came across the Data Explorer team site and blog; if you are interested in getting to play with it when it’s ready then you can get your email address added to the invite list.

Anyway, back to the keynote and today it’s PASS Summit favourite Dr David DeWitt covering Big Data. It’s not a standard marketing session, more of a lecture, and all the better for it; DeWitt is a very talented speaker and more importantly takes a balanced approach to describing the SQL vs NoSQL debate. Interesting points to note:

  • He thinks that the arrival of NoSQL is not a paradigm shift, in the way that the move from hierarchical databases to relational databases was. The assertion that SQL is not dead, not surprisingly, goes down well with this audience.
  • Hadoop. I’m not even going to try to summarise this section of the talk but it is an excellent introduction to how it works, and if you’re even vaguely interested in Hadoop (which you should be given Thursday’s announcements) then you need to watch this – I believe it will be available to view on demand somewhere (the slide deck is here). It is, honestly, the best explanation of all this I’ve ever seen and there are several thousand people here in this room who agree with me…
  • He does a comparison of Hive vs Parallel Data Warehouse v.next on the same queries, same data and same hardware, and shows that PDW can outperform Hive by up to 10x. This demonstrates that a parallel database still has advantages over a NoSQL approach as far as performance goes in many cases, although of course each has its own strengths and weaknesses and performance isn’t the only consideration.

This was not only an enthralling and educational talk, but it was also great marketing from Microsoft’s point of view. You can announce Hadoop for Windows to a room full of SQL Server types, but however many whizzy demos you do and however much woo-hooing goes on, if we don’t really understand the technology we’ll go back to our day jobs and ignore it. On the other hand, teach us what the technology actually does and you’ll get us interested enough to try it out for ourselves and maybe even use it on a project.

Finally, if you’re at the Summit today come and find me at the Birds of a Feather lunch on the SSAS Performance Tuning table, or later on the BI Expert Pod this afternoon.

‘Events in Progress’ Part 3 – Average Age of Unpaid Invoices

Sorry for the relative quiet on here recently, I’ve been caught up with helping to organise SQLBits and other stuff… But anyway, this week I’m working with a company who asked me to tackle what must be a very common business problem: they have a fact table where one row represents an invoice they have generated, and they need to know at any point in time what the average age of all their unpaid invoices is, and monitor whether this value goes up or down. The point of this, of course, is that if the average age of outstanding invoices starts to go up over time then it means that their customers are getting slower to pay them and they may potentially have a cashflow problem.

The solution is a variation on the ‘events in progress’ model that I blogged about recently but with an interesting twist that I thought was worth describing. First of all, we need to prepare some data so we can reproduce the scenario in Adventure Works – let’s pretend that the FactInternetSales table actually contains invoice data. We’ll need to massage the data a bit for two reasons:

  • In Adventure Works if you look at the OrderDate, ShipDate and DueDate columns there’s always exactly the same number of days between the dates, which doesn’t make for a great demo
  • In the real world some invoices will be unpaid when the fact table is loaded, others will be paid. We need to be able to handle both scenarios properly.

Once again, we need two fact tables. The first of these will contain all of our invoices, and for this I used the following named query in the Adventure Works DSV:

SELECT CustomerKey, OrderDateKey
FROM dbo.FactInternetSales

The second will be a fact table that contains information on invoices that have been paid:

SELECT
CustomerKey,
OrderDateKey,
CASE WHEN SalesTerritoryKey > 5 THEN ShipDateKey ELSE DueDateKey END AS PaidDateKey,
CASE WHEN SalesTerritoryKey > 5 THEN 7 ELSE 12 END AS DaysToPayment
FROM dbo.FactInternetSales
WHERE (PromotionKey < 3)

A few things to notice in this second query:

  • I’ve filtered out the very small number of rows where PromotionKey is 3 or more – these removed rows will be our unpaid invoices, which makes the numbers more interesting.
  • I’ve created a new date key called PaidDateKey to make the invoices have different payment lengths.
  • Because these are paid invoices, we know how long they took to be paid and we can precalculate this and put it in the fact table in the DaysToPayment column.

We can now build the cube. It needs two measure groups, one based on each fact table, and we can add the Customer dimension too. Three measures need to be built: one count measure on each measure group and a sum measure on the DaysToPayment column. The Date dimension joins to the OrderDateKey on the All Invoices measure group, and to the PaidDateKey on the Paid Invoices measure group.

Now for the MDX. The first thing we need to do is find the total number of invoices generated to date and the total number of invoices that have been paid to date (and also to make sure that these totals aggregate up properly through the time dimension); we can then subtract the second from the first to find the total number of unpaid invoices:

CREATE MEMBER CURRENTCUBE.MEASURES.[Total Invoices To Date] AS NULL;

SCOPE(MEASURES.[Total Invoices To Date]);

    SCOPE([Date].[Date].[Date].MEMBERS);
        THIS = SUM(NULL:[Date].[Date].CURRENTMEMBER, [Measures].[All Invoices Count]);
    END SCOPE;

    SCOPE([Date].[Date].[All Periods]);
        THIS = (TAIL(EXISTING [Date].[Date].[Date].MEMBERS, 1).ITEM(0).ITEM(0)
                , MEASURES.[Total Invoices To Date]);
    END SCOPE;

END SCOPE;


CREATE MEMBER CURRENTCUBE.MEASURES.[Total Paid Invoices To Date] AS NULL;

SCOPE(MEASURES.[Total Paid Invoices To Date]);

    SCOPE([Date].[Date].[Date].MEMBERS);
        THIS = SUM(NULL:[Date].[Date].CURRENTMEMBER, [Measures].[Paid Invoices Count]);
    END SCOPE;

    SCOPE([Date].[Date].[All Periods]);
        THIS = (TAIL(EXISTING [Date].[Date].[Date].MEMBERS, 1).ITEM(0).ITEM(0)
                , MEASURES.[Total Paid Invoices To Date]);
    END SCOPE;

END SCOPE;

CREATE MEMBER CURRENTCUBE.MEASURES.[Unpaid Invoices Count] AS
MEASURES.[Total Invoices To Date] - MEASURES.[Total Paid Invoices To Date];

 

This is the easy bit. In order to find the average number of days that the unpaid invoices have actually been unpaid, we need to find the number of days each unpaid invoice has been open, sum those values up, and divide by the total number of unpaid invoices. To find the sum of the number of days that all unpaid invoices have been open, we need to do the following (there’s a small worked example after the list):

  • Find the total up to yesterday of the [Unpaid Invoices Count] measure. This gives the total number of days that every invoice has ever been open.
  • Then sum the total to date of the Days to Payment measure we created on the Paid Invoices fact table, to find the total number of days that all paid invoices were ever open
  • Then subtract this second figure from the first, which will give us the sum of the number of days all currently unpaid invoices have been open
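To make the logic clearer, here’s a small worked example with made-up numbers (nothing to do with the Adventure Works data): say invoice A is raised on January 1st and paid on January 4th, and invoice B is raised on January 2nd and is still unpaid. Looking at January 6th, the [Unpaid Invoices Count] values for January 1st to 5th are 1, 2, 2, 1 and 1, so [Sum of Ages of Invoices To Date] is 7; [Sum of Ages of Paid Invoices To Date] is 3, the three days that invoice A was open; the difference between the two is 4, which is exactly the number of days invoice B has been open so far; and since there is only one unpaid invoice, its average age is 4 days.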

Here’s the code:

CREATE MEMBER CURRENTCUBE.MEASURES.[Sum of Ages of Invoices To Date] AS NULL;

SCOPE(MEASURES.[Sum of Ages of Invoices To Date]);

    SCOPE([Date].[Date].[Date].MEMBERS);
        THIS = SUM(NULL:[Date].[Date].CURRENTMEMBER.PREVMEMBER
                , MEASURES.[Unpaid Invoices Count]);
    END SCOPE;

    SCOPE([Date].[Date].[All Periods]);
        THIS = (TAIL(EXISTING [Date].[Date].[Date].MEMBERS, 1).ITEM(0).ITEM(0)
                , MEASURES.[Sum of Ages of Invoices To Date]);
    END SCOPE;

END SCOPE;

CREATE MEMBER CURRENTCUBE.MEASURES.[Sum of Ages of Paid Invoices To Date] AS NULL;

SCOPE(MEASURES.[Sum of Ages of Paid Invoices To Date]);

    SCOPE([Date].[Date].[Date].MEMBERS);
        THIS = SUM(NULL:[Date].[Date].CURRENTMEMBER, [Measures].[Days To Payment]);
    END SCOPE;

    SCOPE([Date].[Date].[All Periods]);
        THIS = (TAIL(EXISTING [Date].[Date].[Date].MEMBERS, 1).ITEM(0).ITEM(0)
                , MEASURES.[Sum of Ages of Paid Invoices To Date]);
    END SCOPE;

END SCOPE;

CREATE MEMBER CURRENTCUBE.MEASURES.[Difference of Sum of Ages] AS
MEASURES.[Sum of Ages of Invoices To Date] - MEASURES.[Sum of Ages of Paid Invoices To Date];

 

Finally, we can calculate the average like so:

CREATE MEMBER CURRENTCUBE.MEASURES.[Average Age of Unpaid Invoices] AS
IIF(MEASURES.[Unpaid Invoices Count]=0, NULL,
MEASURES.[Difference of Sum of Ages]/MEASURES.[Unpaid Invoices Count])
, FORMAT_STRING='#,#.00';

 

It’s pretty complex I’ll admit, but it does perform well. Here’s what the results look like in Excel:

[Screenshot of the results in an Excel PivotTable]
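If you want to check the numbers without Excel, here’s a minimal sketch of an MDX query you could run instead (my own illustration, assuming the calculations above have been added to the Adventure Works MDX Script), returning the unpaid invoice count and the average age by calendar year:

SELECT
{MEASURES.[Unpaid Invoices Count], MEASURES.[Average Age of Unpaid Invoices]} ON 0,
// thanks to the [All Periods] scope assignments, these measures return the value
// for the last date in each year, so they can be queried at the year level too
[Date].[Calendar Year].[Calendar Year].MEMBERS ON 1
FROM [Adventure Works]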

I only wish I could come up with a way of getting some of my customers to pay this quickly…!

Sixth Blog Birthday

Every year on December 30th (or thereabouts) I write a blog post summarising what I’ve been up to this year, to mark this blog’s birthday. Today is my blog’s sixth birthday which, to be honest, I find pretty amazing – but then every year I’m amazed at how long I’ve kept at this! I only wish I could apply the same kind of willpower to making cash (I’d be a billionaire by now) or dieting, exercise or anything else…

From a blog point of view, the big change this year was moving away from Windows Live Spaces, where I’d started out, to WordPress. I’d been wanting to make the move for ages but had been too lazy to do so, and also unwilling to lose all my old posts and traffic; luckily when MS made the decision to retire Live Spaces they made sure it was very easy to migrate all my old content to WordPress and for that I’m very grateful – it almost makes up for the shabby treatment Live Spaces users had been getting in the preceding years.

I also got one of my biggest ever days from a blog traffic point of view during the controversy over the future of SSAS at PASS; I’m not even going to link to that particular post because it still gets plenty of hits and it’s something I’d prefer to move on from. It’s better to look forward to the undoubtedly cool stuff that will be coming with Denali rather than brood over any disappointment about what has happened, and one thing I’m certainly relieved about is that in 2011 there will be lots of cool stuff to blog about – at some points in the last year I’ve struggled to find anything new or interesting to blog about for SSAS (although as Greg Galloway has just shown, there are still lots of interesting SSAS topics to blog about if you look). Bring on BISM!

Professionally things go from strength to strength and I’m going to be busier in the first few months of 2011 than I’ve ever been at that time of year – January and February are usually relatively quiet times for me. Again, with significant new technology arriving in the BI stack in Denali I would expect a lot of consultancy and training work to follow; I’m going to be expanding my training activities in other ways next year too, and I’ll be making a separate announcement about that here any day soon. SQLBits also continues to grow and grow, and I’m really excited about the plans for SQLBits 8 in Brighton this April. Since it doesn’t look like there’s going to be a PASS Europe this year I would say that anyone who’s serious about SQL Server in Europe should plan to come to what’s going to be the biggest and best SQL conference in Europe bar none; and even if you live outside Europe, but fancy a vacation, why not come too? We have an ever increasing number of US attendees and speakers!

Anyway, it’s time for me to go and do some real work and stop being self-indulgent. I hope everyone reading this has a great 2011. Thanks for reading!