Via Darren Gosbell, I see that the best practices included in the forthcoming SQL Server 2005 Best Practices Analyzer have been turned into a white paper, which you can download here.
Most of the recommendations are fairly obvious, but some were new to me, some provoked a ‘they need to fix that’ response, and some I wasn’t sure I agreed with 100%. For example:
Avoid creating diamond-shaped attribute relationships
I quite often find myself designing diamond-shaped relationships between my attributes; the example they give in the paper isn’t particularly good, but the Day->Month->Year and Day->Week->Year pattern, sketched below, is very common. OK, most of the time I design user hierarchies along these paths anyway, but this need to design user hierarchies purely for the sake of performance, rather than because users actually want to see them (and this isn’t the first time I’ve heard it recommended), is something I don’t feel particularly happy about. It would be nice if we could make the choice between using attribute hierarchies and user hierarchies based on ease-of-use alone.
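To make the shape concrete, here’s a rough ASSL-style sketch of that diamond in a hypothetical Date dimension (the attribute names are mine, not taken from the paper): Day relates up to both Month and Week, and both of those relate up to Year, so the two chains converge into a diamond.

<!-- A hypothetical Date dimension illustrating the diamond:
     Day -> Month -> Year and Day -> Week -> Year -->
<Attribute>
  <ID>Day</ID>
  <Name>Day</Name>
  <AttributeRelationships>
    <AttributeRelationship>
      <AttributeID>Month</AttributeID>  <!-- Day rolls up to Month... -->
    </AttributeRelationship>
    <AttributeRelationship>
      <AttributeID>Week</AttributeID>   <!-- ...and also to Week -->
    </AttributeRelationship>
  </AttributeRelationships>
</Attribute>
<Attribute>
  <ID>Month</ID>
  <Name>Month</Name>
  <AttributeRelationships>
    <AttributeRelationship>
      <AttributeID>Year</AttributeID>   <!-- Month rolls up to Year -->
    </AttributeRelationship>
  </AttributeRelationships>
</Attribute>
<Attribute>
  <ID>Week</ID>
  <Name>Week</Name>
  <AttributeRelationships>
    <AttributeRelationship>
      <AttributeID>Year</AttributeID>   <!-- Week rolls up to Year too, closing the diamond -->
    </AttributeRelationship>
  </AttributeRelationships>
</Attribute>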
Avoid including unrelated measure groups in the same cube
This is the ‘multiple measure groups in one cube vs multiple cubes with one measure group each’ question that has been kicking around for at least a year. We were promised it was going to be addressed in the AS2005 Performance Guide but it wasn’t; when are we going to get some details published? In my experience I’ve seen a slight improvement in performance when you split cubes up in this way, but nothing major; yet clearly there must be some reason for MS to keep recommending it, so perhaps I’ve not come across the right scenario yet.
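Just to spell out the design choice being debated, here’s a minimal ASSL-style sketch of the single-cube option, with two unrelated measure groups side by side (the cube and measure group names are made up); the alternative the white paper recommends is to give each measure group a cube of its own.

<!-- Sketch: two measure groups that share no dimensions or business
     meaning living in one cube. All names are hypothetical. -->
<Cube>
  <ID>Corporate</ID>
  <Name>Corporate</Name>
  <MeasureGroups>
    <MeasureGroup>
      <ID>Internet Sales</ID>
      <Name>Internet Sales</Name>
    </MeasureGroup>
    <MeasureGroup>
      <ID>HR Headcount</ID>
      <Name>HR Headcount</Name>
    </MeasureGroup>
  </MeasureGroups>
</Cube>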
Avoid having very big intermediate measure groups or dimensions of many-to-many dimensions
Obviously having large intermediate measure groups and/or dimensions in m2m relationships is going to slow things down, but in my experience I’ve always been pleasantly surprised by the performance of m2m dimensions. I’ve seen intermediate measure groups with much more than 1 million rows in them perform really well…
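For anyone who hasn’t built one, here’s a rough ASSL-style sketch of the shape of an m2m relationship, using the classic bank-account example rather than anything from the white paper (all the names are hypothetical): the intermediate measure group, built on a bridge fact table, is the thing whose size the recommendation worries about.

<!-- Sketch: the Customer dimension joins the Sales measure group
     many-to-many via an intermediate measure group built on a
     Customer/Account bridge fact table. All names are hypothetical. -->
<MeasureGroup>
  <ID>Sales</ID>
  <Name>Sales</Name>
  <Dimensions>
    <Dimension xsi:type="ManyToManyMeasureGroupDimension">
      <CubeDimensionID>Customer</CubeDimensionID>
      <!-- The intermediate measure group that resolves the m2m join;
           its size (and that of the intermediate dimension) is what
           this best practice warns about -->
      <MeasureGroupID>Customer Account Bridge</MeasureGroupID>
    </Dimension>
  </Dimensions>
</MeasureGroup>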
Avoid having partitions with more than 20 million rows
As an aside, I recently noticed that in Eric Jacobsen’s recent blog entry on partitioning here, he says you can also think of 2Gb (rather than a number of rows) as a rough guideline for the maximum size of a partition and, more interestingly, that you should not have more than about 2000 partitions in your measure group. I asked him about this and it turns out that there are some internal limitations in AS at the moment, which hopefully will be fixed soon, that mean performance slows down when you have large numbers of partitions. This does suggest that at the moment, roughly speaking, there’s a maximum size for a MOLAP cube to perform well of around 4Tb (2000 partitions x 2Gb) or 40 billion rows (2000 partitions x 20 million rows)… not that I’ve ever seen an AS2005 MOLAP cube that big, but it’s interesting to note.
Do set the Slice property on partitions that are ROLAP or partitions that use proactive caching
Based on Eric’s article mentioned in the last point, I think it’s also a good idea to set the Slice property on MOLAP partitions where possible too, given that the automatic detection of which members appear in which partitions gets confused by overlapping slices. I recently saw some experiments Greg Galloway did on this, in which he showed that if you set the Slice property on a MOLAP partition the auto-slice information isn’t used.
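For reference, the Slice property is just an MDX expression attached to the partition definition; here’s a minimal ASSL-style sketch for a hypothetical partition that holds only calendar year 2005 data (the partition and member names are all made up):

<!-- Sketch: pinning a hypothetical partition to calendar year 2005 -->
<Partition>
  <ID>Sales 2005</ID>
  <Name>Sales 2005</Name>
  <!-- Tells the engine exactly which members this partition covers,
       so queries don't depend on the auto-detected slice information -->
  <Slice>[Date].[Calendar Year].&amp;[2005]</Slice>
</Partition>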
Avoid creating aggregations that are larger than one-third the size of the fact data
I quite frequently find myself designing aggregations that break the one-third rule, although not by much; to take a concrete example, the rule says an aggregation built on 90 million rows of fact data shouldn’t itself contain more than about 30 million rows. For certain queries and calculations breaking the rule can be a good idea, but I agree you should only do so as a last resort.