Takes.
Most enterprises are debating which AI tools to approve. The better question: what must the enterprise actually be able to do?
AI is already in your enterprise. Not as a single initiative someone approved, but as dozens of local experiments, copilots, automations, and productivity hacks spreading faster than any governance structure can track.
One team runs copilots. Another automates workflows. A third builds something customer-facing. Meanwhile, security worries about exposure, legal worries about obligations, and nobody owns the whole picture.
The usual responses don’t work. Blocking slows learning, not adoption. And letting things spread unchecked turns distributed enthusiasm into distributed risk.
There’s a better starting point: capability mapping. Not “what tools should we allow,” but “what must the enterprise be able to do” to let AI adoption happen without losing visibility, control, or accountability.
Five capabilities stand out. I’ve written them up in full, including where NIST, ISO 42001, and the EU AI Act fit in.
"The code AI produces isn't qualitative." I hear this often. And it's worth exploring what's behind it.
To get consistent quality from agentic AI teams (or “swarms”), you need solid engineering practices around them: a clear Definition of Ready, a measurable Definition of Done, value-driven feature slicing, specs with demo criteria, linked acceptance & scenario tests, e2e testing. These aren’t special AI requirements. These are foundational software engineering practices.
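To make one of those practices concrete: here is what a linked acceptance test can look like. A minimal pytest sketch; the feature, the spec ID "INV-042", and the create_invoice helper are all made up for illustration.

```python
# Minimal sketch of a "linked acceptance test": each scenario references
# the spec item it demonstrates, giving a measurable Definition of Done.
# The feature, spec ID "INV-042", and create_invoice are hypothetical.
import pytest

def create_invoice(amount: float, currency: str) -> dict:
    """Stand-in for the system under test."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"amount": amount, "currency": currency, "status": "draft"}

def test_inv042_valid_invoice_starts_as_draft():
    """Spec INV-042, scenario 1: a valid invoice is created in 'draft'."""
    invoice = create_invoice(100.0, "EUR")
    assert invoice["status"] == "draft"

def test_inv042_non_positive_amount_is_rejected():
    """Spec INV-042, scenario 2: invalid input fails explicitly."""
    with pytest.raises(ValueError):
        create_invoice(0.0, "EUR")
```

Whether a human or an agent wrote create_invoice, the Definition of Done is now checkable, not debatable.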
What I’ve noticed is that teams who’ve had underwhelming AI results often haven’t applied these practices, not just to AI, but in general. And that makes sense. Without these foundations, quality is hard to produce consistently, regardless of who or what is writing the code.
So the skepticism is completely understandable. If you haven’t worked with these disciplines, you haven’t seen the conditions under which AI actually delivers.
The good news? It works both ways. Teams that adopt these practices to improve AI output often find that their entire engineering process levels up with it. That might be AI’s most underrated contribution: not replacing engineers, but raising the bar on how we engineer.
AI strategy is often not missing. It is expiring.
In many enterprises, the repeated signal “we don’t have an AI strategy” is not a literal statement. It is a cry for alignment. People can feel the pace of AI change: customer expectations are shifting, tools and capabilities are moving fast, and internal ways of working are changing. But they do not see that reality reflected clearly in priorities, decisions, and direction. That is the real issue.
The core insight: AI changes the half-life of strategic assumptions. This is why a static strategy document quickly becomes invisible, even if serious work went into it. It creates a false sense of control while the environment keeps moving.
In the article, I argue that a useful AI vision must explain two shifts at once: the external shift (from channels to assistant-mediated interaction) and the internal shift (AI as the next layer of IT, with a new mission for IT around safe autonomy and coherence).
It also touches on why the adoption barrier shifts from usability to trust, why “AI apps” is the wrong enterprise language, why buy vs build becomes a recurring portfolio decision, and why strategy must become a living mechanism by design.
Traditional software engineering process & practices are up for a big change.
It will be a tough one, not about technology but about changing processes to support a new mindset: “build like software is disposable”.
I fully agree with this post, and this comment nails it: “… it’s [the change is] about removing the fear of rebuilding.”
I wonder how many enterprises are aware of this fundamental change and are already adapting their intake & software delivery processes? What about the traditional SDLC? And who is already facilitating this in their (automated) data (access) management processes? What does your new “self-service” platform offering look like?
It’s still high on my Data & AI roadmaps, I can tell you. A continued focus for 2026 and beyond.
For Agentic AI, risk assessment is a different league.
In my work as an architect responsible for enterprise data & AI platforms, risk assessment is core.
Recently I revisited a small, practical Agentic AI risk-scoring model I built while providing ad hoc consulting to an enterprise. It looks at the degree of Autonomy, the nature of the Data involved, and the degree of Exposure of the case, while also acknowledging compensating controls such as guardrails, human-in-the-loop, auditability, and kill switches.
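To give an idea of the shape of it (not the actual model; the weights, scales, and control discounts below are illustrative placeholders):

```python
# Minimal sketch of the scoring idea. Dimension scales and control
# discounts are illustrative, not the real model's values.
AUTONOMY = {"suggest": 1, "act_with_approval": 2, "act_autonomously": 3}
DATA = {"public": 1, "internal": 2, "personal_or_confidential": 3}
EXPOSURE = {"internal_tool": 1, "partner_facing": 2, "customer_facing": 3}

# Compensating controls reduce (never eliminate) the raw score.
CONTROL_DISCOUNT = {
    "guardrails": 0.9,
    "human_in_the_loop": 0.8,
    "auditability": 0.9,
    "kill_switch": 0.9,
}

def risk_score(autonomy: str, data: str, exposure: str, controls: list[str]) -> float:
    raw = AUTONOMY[autonomy] * DATA[data] * EXPOSURE[exposure]  # 1..27
    for control in controls:
        raw *= CONTROL_DISCOUNT.get(control, 1.0)
    return round(raw, 1)

# Compare scenarios side by side:
print(risk_score("act_autonomously", "personal_or_confidential",
                 "customer_facing", ["human_in_the_loop", "kill_switch"]))  # 19.4
print(risk_score("suggest", "internal", "internal_tool", []))              # 2.0
```

Even this toy version makes the conversation easier: the residual risk of a heavily controlled autonomous agent is still an order of magnitude above a suggest-only internal tool.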
I’m now applying it to more real use cases (from discovery to production hardening), and it’s been effective for comparing scenarios, justifying controls, and communicating residual risk.
How are you scoring risk for Agentic AI?
Pointers to references, field practices, or cautionary tales?
The common (mis)understanding of "API" and "system integration" in data product thinking.
Reacting to a post by Andrew Jones on data products and integration. You don’t have to take my word for it.
I’m not sure I’d link the use of APIs to the term “system integrations”; that hesitation comes from my observation of how commonly the terms “API” and “system” are (mis)understood and (mis)interpreted.
"This meeting could have been an email" resonates with many. But what does it reveal about the DRI?
When a Directly Responsible Individual (DRI) replies with a meeting, the result is often lost time, delayed answers, and increased project risk. Replying instead with a well-documented email, with clear and sourced information, demonstrates ownership and keeps things moving without burdening an already packed schedule.
Building on the concept of the Meeting Ownership Ratio that I explored before, I now delve into why choosing to “reply with a meeting” serves as a revealing indicator that can be deciphered directly from your calendar. Organizations can leverage this behavior as a quality metric, especially when consultants are in the DRI role.
"That is all it [an API Gateway] ever was: a facade, not an architecture."
Reacting to a post on API Gateways vs ESBs. This is a must-read for anyone dealing with the enterprise struggles of (system) integrations, APIs, ESBs & API gateways.
In short: don’t get fooled into simply replacing an ESB with an API Gateway.
(and some key concepts on integration styles are briefly explained and positioned)
"Talk to data" isn't just about natural language to SQL. It's about context-aware interaction.
In the final article of the Conversational Data Governance series, we explore what comes next: not just helping users discover or request access to data, but enabling them to query the data itself. It’s about context-aware interaction, where the user’s purpose, role, access level, and governance rules are all taken into account.
A governed GenAI interface doesn’t just respond to questions - it guides the user to the right data product, initiates access or quality workflows when needed, and enables governed interaction with the data itself, applying filters, warnings, and policy enforcement in real-time.
This only works if the underlying platform is ready: with modular data products, access control, product metadata, and automated governance workflows in place.
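A minimal sketch of what that governed, query-time interaction could look like, assuming hypothetical user, product, and policy structures (the real thing would sit on your platform's actual access control and metadata services):

```python
# Minimal sketch of query-time policy enforcement; all names are
# hypothetical. The LLM call that would generate the SQL is stubbed out.
from dataclasses import dataclass, field

@dataclass
class User:
    role: str
    purpose: str
    clearances: set[str] = field(default_factory=set)

@dataclass
class DataProduct:
    name: str
    required_clearance: str
    row_filter: str          # policy filter pushed into generated SQL
    warning: str | None = None

def governed_query(user: User, product: DataProduct, question: str) -> str:
    # 1. Enforce access before any SQL is generated.
    if product.required_clearance not in user.clearances:
        return f"Access to '{product.name}' requires approval; starting the access workflow."
    # 2. Surface governance context instead of silently answering.
    if product.warning:
        print(f"Note: {product.warning}")
    # 3. Generate SQL with the policy filter applied.
    return f"SELECT ... FROM {product.name} WHERE {product.row_filter} -- for: {question}"

user = User(role="analyst", purpose="churn analysis", clearances={"customer_data"})
product = DataProduct("customer_360", "customer_data",
                      row_filter="region = 'EU'",
                      warning="Contains personal data; use is limited to the stated purpose.")
print(governed_query(user, product, "Which segments churned last quarter?"))
```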
The result is a new kind of interface to data: adaptive, governed, and embedded in daily work. It changes how users ask, how data responds, and how organizations ensure responsible use. This final article closes the loop: from governance by documentation to governance by interaction.
Policies, workflows, and ownership structures alone aren't enough. If users can't navigate them, adoption stalls.
As data governance matures, the challenge shifts from defining rules to making them usable.
In this fourth article of the Conversational Data Governance series, we look at what it takes to embed governance within conversational AI, not as a chatbot layer, but as an active participant in the governance process. We explore how conversational systems can connect to metadata repositories, workflow engines, and registries to support the two key steps every user faces: finding the right data product and initiating the right governance process.
This architecture doesn’t automate governance away - it makes it accessible. Approvals still go to owners, decisions remain traceable, and friction becomes measurable. The result is a system that works the way people work, supporting both user autonomy and policy enforcement.
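A minimal sketch of those two steps (finding the right data product, initiating the right governance process), with the metadata repository and workflow engine stubbed out as hypothetical in-memory placeholders:

```python
# Minimal sketch: (1) find the right data product via the metadata
# repository, (2) initiate the right governance process via the workflow
# engine. Both backends are hypothetical stubs.
CATALOG = [
    {"name": "customer_360", "tags": ["customer", "churn"], "owner": "sales-domain"},
    {"name": "supplier_spend", "tags": ["procurement"], "owner": "finance-domain"},
]

def find_data_product(question: str) -> dict | None:
    """Step 1: naive keyword match; a real system queries the metadata repository."""
    words = question.lower().split()
    for product in CATALOG:
        if any(tag in words for tag in product["tags"]):
            return product
    return None

def initiate_access_request(product: dict, user: str, purpose: str) -> str:
    """Step 2: hand off to the workflow engine; approval still goes to the owner."""
    print(f"Routed to {product['owner']} for approval (purpose: {purpose}).")
    return f"ACCESS-{product['name']}-{user}"  # traceable, measurable friction

product = find_data_product("What drives customer churn?")
if product:
    print(initiate_access_request(product, "jdoe", "churn analysis"))
```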
Data Product Portals are essential, but they depend on a critical assumption that breaks down quickly.
That assumption: that users already know which data product they need, how governance processes work, and when to trigger them.
This article explores how conversational interfaces complement these tools by translating questions into guided governance actions, reducing the need for prior process knowledge and making portals and workflows usable for a broader audience.
When users don’t know what they’re looking for, conversation becomes the missing interface.
More mature data governance introduces new complexity for users. Adoption risks stalling.
As organizations invest in more mature data governance, users start to feel the shift: access becomes conditional, purpose must be explained, governance gets formalized, …
These changes are necessary, but they also introduce new complexity for those who just want to use data to do their job. Adoption risks stalling if users don’t know where to start or how to work within the new boundaries.
A conversational interface doesn’t just reduce friction - it facilitates adoption by meeting each user at their level of familiarity, translating natural questions into governed actions, and suggesting what’s possible rather than just enforcing rules.
It’s not about simplifying governance - it’s about making it usable, and facilitating change.
Conversational Data Governance: The Next Wave of Adoption and Participation.
Launching this new article series, exploring how we can bridge the gap between policy and participation by embedding governance into the way people naturally interact with data.
Why do so many data governance efforts struggle to gain traction? It’s not always about policy quality - it’s about how those policies show up in daily work. If users can’t engage with data governance through natural interactions, even the best frameworks go unused.
This first article explores why the next evolution in data governance isn’t just about policy or process, but about interface.
The Data Steward Is Evolving - Are Organizations Keeping Up?
As data product thinking becomes the new norm, many organizations are rethinking governance. But instead of adapting existing roles, they often introduce more roles - leading to confusion, overlap, and resistance.
This article explores how to rethink the steward role for the data product era. Key topics covered are how the data steward role must evolve in modern architectures, why adding new governance roles can create confusion rather than clarity and how the concept of a product-aligned steward embedded in delivery teams can help.
How are teams managing this shift in your organization? Are roles being adapted, or just added? Is the steward role still clearly defined?
I find "Managing Data as a Product" by Andrea Gioia one of the most insightful books I've come across in years.
Chapter 10, for example, on “Distributed Data Modeling”, provides a clear, concise yet thorough overview of the essentials of data modeling for everyone who is familiar with modeling for a data warehouse but struggling to apply data modeling principles in a modular, domain-oriented data product architecture.
I hadn’t come across this content before: it is concrete and highly relevant for the many teams and organizations struggling with this. I’m even considering making this chapter required reading for all our engineers building data products :-)
As a consultant, ever felt your calendar is controlling you?
Paul Graham’s “Maker vs. Manager Schedule” concept explains the tension between two types of work: managing (lots of meetings and coordination) versus making (focused, uninterrupted time). When your days are packed with meetings, you’re stuck in manager mode, leaving little room for deep work.
I wrote a little article on a check I constantly apply to my consulting agenda: the Meeting Ownership Ratio. It’s a simple way to become more aware of how your calendar reflects your role: are you leading your time or constantly reacting?
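As a rough illustration (my own simplification here, not necessarily the article's exact formula): the share of meeting hours you organized yourself versus your total meeting hours.

```python
# Back-of-the-envelope sketch; the article's exact definition may differ.
meetings = [
    # (hours, organized_by_me)
    (1.0, True),   # weekly architecture sync I run
    (0.5, False),  # status call I was invited to
    (2.0, False),  # workshop I was invited to
    (1.0, True),   # client review I scheduled
]

owned = sum(h for h, mine in meetings if mine)
total = sum(h for h, _ in meetings)
print(f"Meeting Ownership Ratio: {owned / total:.0%}")  # 44% -> mostly reacting?
```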
The goal isn’t immediate change, but better insight.
Documenting data management processes with BPMN: it is about process and change management, not technology.
IT and data coming together also means revisiting an old skill. I find myself documenting data management processes with BPMN to show that this is about process and change management rather than technology or software solutions.
Product thinking on data struggles with existing data governance practices built for a centralized data warehouse.
When applying product thinking to data, one of the struggles organizations face is overcoming the impact on their existing data governance track, which often still applies practices built for a centralized data warehouse.
A technical metadata catalog coexists with a data (product) catalog.
Reacting to a discussion on metadata catalogs.
Platformization, data quality & lineage, data management processes… they all generate (runtime) metadata, and that metadata is the data on which such a business-oriented catalog is built.
The topic of the data catalog "as we know it" is finally up for debate in the industry.
Reacting to a write-up by Juha-Pekka Joutsenlahti: “So instead of blindly listing down tons of features that data catalogs should do, we should stop for a while and think about the actual usage. Who are the users? What are the actual use cases that a data catalog should do to make people’s lives better?”
I’m always trying to explain that with data catalogs too, the tool is just part of the solution to a need. Describe and implement the data management processes and actors, and take the user experience into account. You’ll not only find the right tool(s) for the job; people will actually want to use them because, indeed, they make their lives better.
We’re up for a lot of change in this field, if you ask me.
I never understood why you'd want the data catalog "pull" model, if you have a choice.
Reacting to a discussion on metadata-driven catalogs.
A strong metadata layer is a foundation: you collect and combine metadata and, when needed, treat it as any other data. It enables so much more than a data catalog.
And yes, this approach is a prerequisite for evolving toward a data product catalog instead of just having a data catalog.
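A minimal sketch of that push idea, with the event shape and the metadata store as illustrative placeholders:

```python
# Minimal sketch of the "push" model: each data product emits its own
# metadata as events into a central layer, where it's stored and combined
# like any other data - no catalog crawler pulling from sources.
import json, datetime

METADATA_STORE: list[dict] = []  # stand-in for a real metadata platform

def publish_metadata(product: str, kind: str, payload: dict) -> None:
    """Called by the product's own pipeline at deploy/run time."""
    METADATA_STORE.append({
        "product": product,
        "kind": kind,            # e.g. "schema", "lineage", "quality_result"
        "payload": payload,
        "emitted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

# A product pushes its schema and a quality result as part of its own run:
publish_metadata("customer_360", "schema", {"columns": ["id", "region", "churned"]})
publish_metadata("customer_360", "quality_result", {"check": "row_count > 0", "passed": True})

# The catalog then becomes just a (queryable) view over this metadata:
print(json.dumps(METADATA_STORE, indent=2))
```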
Data product and data contract standardisation: comparing the specification initiatives.
Open Data Contract Standard, Data Contract Specification, Data Product Descriptor Specification… It’s worth studying the similarities and differences between these initiatives.
A barebones schema for how data products facilitate the right data quality checks.
Since I keep running into the topic, and keep reusing the same barebones schema to explain it, I thought I’d just share it.
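In code rather than a picture, the idea is roughly this (a minimal sketch; the column names and checks are illustrative): the data product's contract declares the expected quality checks, and the platform runs them at the product's output port.

```python
# Minimal sketch: quality checks are declared in the product's contract
# and executed by the platform, not hand-rolled per consumer.
contract_checks = [
    {"column": "customer_id", "check": "not_null"},
    {"column": "region", "check": "allowed_values", "values": ["EU", "US", "APAC"]},
]

rows = [
    {"customer_id": 1, "region": "EU"},
    {"customer_id": None, "region": "MARS"},
]

def run_checks(rows: list[dict], checks: list[dict]) -> list[str]:
    failures = []
    for i, row in enumerate(rows):
        for c in checks:
            value = row[c["column"]]
            if c["check"] == "not_null" and value is None:
                failures.append(f"row {i}: {c['column']} is null")
            elif c["check"] == "allowed_values" and value not in c["values"]:
                failures.append(f"row {i}: {c['column']}={value!r} not allowed")
    return failures

for failure in run_checks(rows, contract_checks):
    print(failure)
```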
Data contracts gaining momentum.
“The data contract specification is an open initiative to define a common data contract format. Think of an OpenAPI specification, but for data sets.”
- schema specification format: dbt, bigquery, avro, protobuf, sql, json-schema, custom
- data quality check format: SodaCL, montecarlo, custom
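As a loose illustration of what such a contract can express, rendered here as a Python dict: the actual specification is YAML-based, and the field names below are simplified.

```python
# Loose, simplified rendering of a data contract; the real specification
# is YAML-based and more complete than this sketch.
data_contract = {
    "id": "orders-v1",
    "info": {"title": "Orders", "owner": "sales-domain"},
    "schema": {
        "type": "json-schema",           # one of the supported formats above
        "specification": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
    "quality": {
        "type": "SodaCL",                # one of the supported formats above
        "specification": "checks for orders:\n  - row_count > 0",
    },
}
```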
Data products promote the value of data next to applications, systems and services.
I especially value the concept of data products in a centrally-managed ICT product (management) world.
Decentralisation via a federation of data products - if done right - should give business users more direct access to data, data that is too often “locked” inside systems and services.
And doing it right of course means relying on data contracts to access the data, equivalent to using APIs to access application logic.
Shifting to a data-first mindset: culture first, or proof-of-value first?
Evolving from a software/system-oriented ICT (product) organization to a data (product) organization not only takes time, but also requires a lot from the people whose day-to-day jobs are impacted.
I wonder, however, what the best approach would be. Focus on the data-first culture shift and awareness to start with, as a prerequisite for success, and evolve to data-driven process implementations after a while? Or provide proof of value with some data-driven initiatives first, and count on the culture shift to gradually happen as a result?
Or can one simply not succeed without the other, in whatever order?
These takes are cross-posted to LinkedIn. Join the conversation there.