21st Blog Birthday: Centralised And Decentralised BI And AI

As the saying goes, history doesn’t repeat itself but it rhymes. While 2025 has seen the excitement around new advances in AI continue to grow in data and analytics as much as anywhere else, it’s also seen the re-emergence of old debates. In particular, one question has raised its head yet again: should there be a single, central place for your data to live and your security, semantic models, metrics and reports to be defined, or should you take a more distributed approach and delegate some of the responsibility for managing your data and defining and maintaining those models, metrics and reports to the wider business?

At first the answer seems obvious: centralisation is best. If you want a single version of the truth then all your data, all your semantic models and all your metrics should be centralised. Anything else leads to inconsistency between reports, inefficiencies, security threats and compliance issues. But while this is a noble ideal, and one that is very appealing to central data teams building an empire, I think history has already proved that this approach doesn’t really work. If it did, MicroStrategy and Business Objects would have solved enterprise BI twenty years ago and every company would have a long-established, lovingly curated central semantic model, sitting on an equally long-established, lovingly curated central data warehouse, that all business users love to use. That’s not the case though, and there’s a reason why the self-service revolution of Tableau, Qlik and ultimately Power BI happened: old-style centralised BI owned by a centralised data team solved many problems (notably the problems of the centralised data team) but not all of them, and most importantly not those of the business users. I’m not saying that those older tools were bad or that centralised BI was a total failure, far from it; at best they provided an important set of quality-controlled core reports and at worst they were a convenient place for users to go to export data to Excel. But no-one can deny that those older tools died away for a reason, and I feel like some modern data platforms are repeating the same mistake.

In contrast the Power BI approach – and now the approach of Fabric – of empowering business users within an environment where what they are doing can be monitored, governed and guided might seem dangerous, but at the end of the day it’s more successful because it is grounded in the reality of how people use data. You can still manage your most important data and reports centrally but you have to accept that a lot, in fact most, of the work that happens with data happens away from the centre. “Discipline at the core, flexibility at the edge” as my boss likes to say. This is as much a question of data culture as it is of the technology that you use, but Power BI and Fabric support this approach by offering some tools that are easy to use for people whose day job might not be data, and by being cheap enough to be enabled for a mass audience of users, while also providing other tools that appeal to the data professionals.

Central data teams sometimes think of their business users as children, and as a parent, if you saw your six-year-old pick up a bottle of vodka and try to take a swig, you’d snatch it out of their hands – in the same way that some data teams try to limit access to data and the tools to use with it. Business users aren’t children though, or if they are, they are more like my pretty much grown-up children, and you can’t take that bottle of vodka away from them. If you do they’ll just go to the shops, buy another one and drink it out of your sight. Instead you can make sure they are aware of the dangers of alcohol, you can set an example of responsible consumption, you can educate them on how to make sophisticated cocktails as an alternative to drinking the cheap stuff neat. And while, inevitably, they will still make mistakes (think of that spaghetti Power BI model that takes four hours to refresh and two minutes to display a page as the equivalent of a teenage hangover) and some may go off the rails completely, as an approach it’s more likely to be successful overall than total prohibition, in my experience.

This is an old argument and one you’ve heard before, I’m sure. Why am I talking about it again? Well, apart from the fact that, as I mentioned, some vendors are selling the centralise-everything dream once more, I think we’re on the verge of another self-service BI revolution that’s going to be even bigger than the one that happened fifteen or so years ago, and maybe as big as the one that happened when Excel landed on desktop PCs forty years ago: a revolution driven by AI. Whether I like it or not, or whether it will lead to better decisions or not, is irrelevant – it’s coming. Developers whose opinion I trust, like Jeffrey Wang, are already saying how it’s transforming their work. More importantly I’ve tried it: it let me do stuff I couldn’t do before, and even if the quality was not great it did what I needed, and most of all it was fun. Once business users whose job it is to crunch data get their hands on these tools (when the tools are ready – I don’t think they are quite yet), understand what they can do, and start having fun themselves, it will be impossible to stop them. An agent grabbing a chunk of data from your centralised, secure data catalog and then taking it away to who-knows-where to do who-knows-what will be the new version of exporting to Excel. Already a lot of the BI managers I talk to are aware of the extent to which their business users are feeding data into ChatGPT on their personal devices to get their work done, even if company rules tell them not to. We need to accept that business users will want to use AI tools, and provide a flexible, safe, governed way for these new ways of working with data to occur.

No data platform is ready for this future yet because no-one knows exactly what that future will look like. I can imagine that some things will be familiar: there will probably still be reports as well as natural language conversations, and there will probably still be semantic models behind the scenes somewhere. How those reports and semantic models get built, and who (or what) does the building, remains to be seen. The only thing I am sure of is that business users will have more powerful tools available to them, that they will use these tools, and that they will get access to the data they need to use with these tools.
