Why not to use the external assembly approach for dynamic security

If you’re familiar with dynamic security in SSAS, you probably know there are two basic ways of implementing it: the measure group approach, where security credentials are stored in a measure group inside the cube and the MDX set expression in your role queries this measure group to work out which members each user has access to; and the external assembly approach, where the role calls a custom MDX function from an external assembly to get the same information. Both approaches are covered in some detail in “Expert Cube Development” (still available in all good bookstores, BTW), but I’ve always preferred the measure group approach because a) I’m a rubbish .NET developer, and much more at ease modelling data into a measure group, and b) I’ve heard lots of complaints about opening a connection being really slow when you’re using the external assembly approach.
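In case you haven’t seen it before, the allowed member set you’d put in a role for the measure group approach typically looks something like this. It’s only a rough sketch with hypothetical names: it assumes a [Users] dimension keyed on Windows login, a security bridge measure group exposed as [Measures].[Security Count], and a [Customer] hierarchy being secured:

    // grant access to all Customers that have a row in the security
    // bridge measure group for the current Windows user
    NonEmpty(
        [Customer].[Customer].[Customer].Members,
        ( StrToMember("[Users].[User].&[" + UserName() + "]", CONSTRAINED),
          [Measures].[Security Count] )
    )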

Anyway, to cut a long story short, I had the pleasure of meeting Gerhard Brückl (you may know him from the SSAS MSDN Forum) at SQLBits the other month and he had an interesting question for me: he’d been using the external assembly approach and found that once a user had connected to the cube their security credentials were cached and the assembly wasn’t called again – even if the user disconnected and reconnected – until the cube’s cache was cleared. While this is a good thing from a performance point of view, it has one big drawback, as Gerhard pointed out: if the user’s security rights subsequently change, the cube will not pick this up immediately. And of course this means that a user may have access to data they really shouldn’t have access to – which is a potentially big problem.

This initiated an interesting discussion with Akshai Mirchandani and various other SSAS obsessives about whether this could be worked around. The short answer is that it can’t be, at least not easily. You can execute an XMLA ClearCache on the whole cube, but this might affect query performance quite significantly; you can reduce the impact by clearing the cache of just one measure group in your cube (you could even create a dummy measure group for this purpose), since this will clear the scope cache where the permissions are stored without affecting the storage engine (SE) caches of the other measure groups – there’s a sample ClearCache command below. But you’re still left with the problem of knowing when to clear the cache: you’d need to write some code somewhere that checks to see if permissions have changed and runs a ClearCache if they have. As a result of all this, I’d say that if you’re using dynamic security, your permissions change regularly and you need the cube to reflect these changes as soon as possible, you should avoid using the external assembly approach altogether and use the measure group approach instead.
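For reference, here’s roughly what a ClearCache scoped to a single measure group looks like – the database, cube and measure group IDs are placeholders you’d swap for your own objects:

    <ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>MyDatabase</DatabaseID>
        <CubeID>MyCube</CubeID>
        <MeasureGroupID>MyDummyMeasureGroup</MeasureGroupID>
      </Object>
    </ClearCache>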

8 thoughts on “Why not to use the external assembly approach for dynamic security”

  1. For me, performance and low-latency updates are just more reasons to use measure groups for dynamic security. I would encourage BI developers to use measure groups for dynamic security for the simplicity of the architecture and the ease of testing/debugging.

  2. I have an interesting requirement for dynamic security, and I’d like to use measure groups but I’m not sure it’ll work. We have a single fact table and, for simplicity, a single Organization dimension. The problem is that the Organization dimension plays several different roles for each fact (Supporting Org, Primary Org, Contracting Org and so on). If I’m a member of Company A then I need to see all facts where Company A plays some role. So, for example, if a fact says Company A is the Supporting Org and Company B is the Primary Org then I need to see it, but I shouldn’t see any facts where Company B is the Supporting Org and Company C is the Primary Org. See what I mean?

    I have a copy of your Advanced BI book, so I have pondered cell-based security, but I worry it’s going to kill performance (we have hundreds of millions of rows). Are there any good ways of performing an “OR” across role-playing dimensions for security?

    So far the only thing I’ve come up with is to create an artificial dimension that would essentially be the distinct combinations of the various organizations and then link the fact table to that dimension, followed by a factless security table that would link user IDs to that artificial dimension; the ETL would do the work of determining whether a user should have access to each “combination” of orgs. This seems like it could potentially result in a dimension with millions of records, followed by a factless security table with tens of millions of records (maybe hundreds of millions). Any ideas are greatly appreciated, thanks.

    1. I worked on exactly the same problem last week, strangely enough, and we created an artificial dimension in exactly the way you describe. However, in this case the artificial dimension was reasonably small, so we didn’t encounter any of the problems you describe. Also, there’s no need to build a factless fact table with millions of records – a factless fact table where a user is linked to a single company is sufficient, and you can then use LinkMember() to work out which members on the artificial dimension to grant access to. Hmmm, this could be an idea for a blog post… I tell you what, I’ll write it up this week.
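      Roughly what I have in mind is an allowed set like the one below. All the object names here are hypothetical; it assumes each user maps to exactly one company via a bridge measure group exposed as [Measures].[User Company Count], and that the [Supporting Org] and [Primary Org] attributes of the artificial dimension share the company key (so LinkMember() can map a [Company] member onto them):

      // find the user's company via the factless fact table, then take
      // the combinations where that company plays each org role
      Union(
          // combinations where the user's company is the Supporting Org
          Exists(
              [Org Combination].[Combination].[Combination].Members,
              LinkMember(
                  NonEmpty(
                      [Company].[Company].[Company].Members,
                      ( StrToMember("[Users].[User].&[" + UserName() + "]", CONSTRAINED),
                        [Measures].[User Company Count] )
                  ).Item(0).Item(0),
                  [Org Combination].[Supporting Org] )
          ),
          // ...and where it is the Primary Org; add a branch like this
          // for each additional org role (Contracting Org etc.)
          Exists(
              [Org Combination].[Combination].[Combination].Members,
              LinkMember(
                  NonEmpty(
                      [Company].[Company].[Company].Members,
                      ( StrToMember("[Users].[User].&[" + UserName() + "]", CONSTRAINED),
                        [Measures].[User Company Count] )
                  ).Item(0).Item(0),
                  [Org Combination].[Primary Org] )
          )
      )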

      1. Thanks so much for the very quick reply – I’ll investigate the LinkMember() notion and I’ll look forward to your write up.

  3. I’ve tried both the assembly approach and the MDX+Username() approach, and I can’t get either of them to reflect changes in the “bridge table”.
    In my experience the following:

    “if the user’s security rights subsequently change the cube will not pick this up immediately”

    is true for both approaches!

    I’ve also tried clearing the cache and nothing happens.
    I’ve posted a more precise explanation on stackoverflow:

    http://stackoverflow.com/questions/13759541/analysis-services-does-not-clear-the-cache-as-instructed-dimension-dynamic-secu

    Can that be done?
    Can SSAS permissions actually reflect the permissions stored in the relational database, i.e. react when the content of the relational table with the permissions changes?

    Or is it necessary to process everything all over again?
    ROLAP seemed like a reasonable solution, but that didn’t work for me either (see my comment on stackoverflow).

    Thanks,

    Igor

    1. Hi Igor,

      Clearing the cache should work. From what you say it sounds like your XMLA is executing successfully – but it doesn’t sound as though it’s clearing the cache. Can you post the XMLA you’re using? Are you clearing the cache of the entire database?

      If you’re using the MDX+Username() approach, what you should be doing is reprocessing only the measure group based on your security bridge table. This will clear the SE cache for that measure group and the entire calculation cache.
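      The XMLA for that would look something like this – the object IDs here are placeholders for your own database, cube and security measure group:

      <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
        <Object>
          <DatabaseID>MyDatabase</DatabaseID>
          <CubeID>MyCube</CubeID>
          <MeasureGroupID>SecurityBridge</MeasureGroupID>
        </Object>
        <Type>ProcessFull</Type>
      </Process>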

      Chris

  4. Interesting article. Chris, do you know if this issue with not being able to “control” the SSAS caching of results from calling an external assembly is still applicable in the latest SSAS versions (2014/2016)?

    For the challenge of knowing when to actually clear the cache when using the external assembly approach, I was thinking of using AFTER INSERT/UPDATE/DELETE triggers on the relevant SQL tables used in the security model. The code in the triggers would in some way initiate the reset of the cache in SSAS – something like the sketch below. But most certainly that method would also have some drawbacks and complications that I am not aware of.
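    Very roughly, and with made-up table names – the trigger only records that permissions changed, and a separate SQL Agent job would poll the queue table and send the ClearCache to SSAS:

    CREATE TRIGGER dbo.trg_SecurityTableChanged
    ON dbo.UserPermissions
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- just record that something changed; a polling job issues the
        -- actual XMLA ClearCache against the cube
        INSERT INTO dbo.SsasCacheResetQueue (ChangedAt)
        VALUES (SYSUTCDATETIME());
    END;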

    Martin
