Tools Come After. They Always Have.
- Barbara Stewart


There has never been a silver bullet. There will not be one now.
Every wave of technology in the last two hundred years has reshaped how organisations work: how they connect, communicate, move materials, coordinate people, and reach markets. Each wave has been transformative in the ways the early evangelists predicted, and in most of the ways they didn't. The steam engine genuinely did remake commerce. Electrification genuinely did transform the workplace. The telephone, the assembly line, the personal computer, the internet, mobile, cloud, SaaS: each one delivered enormous value and changed what was possible.
But the value did not arrive evenly. In every wave, a small group of organisations captured most of the gain. A larger group captured a fraction of it. A larger group still ended up worse off than they started, having absorbed the cost of the new technology without ever capturing the productivity it was supposed to deliver. The differential between those groups has been remarkably consistent across two centuries of technological change. The organisations that captured the value were the same kind of organisations every time: the ones that understood themselves clearly enough to redesign their work around what the new technology made possible. The organisations that didn't capture the value were the ones that bought the technology first and tried to retrofit the operating model around it afterwards.
AI is the current version of this pattern. The technology is real. The transformation it enables is real. The disappointment that some organisations are about to experience is also real, and it will follow the same pattern it has followed for every previous wave of innovation. The organisations that understand themselves clearly will capture the value. The organisations that don't will absorb the friction.
The pattern is this. Tools come after. They always have. The work that has to come first is the unglamorous, harder, slower work of understanding how the organisation actually operates (beneath what it says about itself, beneath the org chart, beneath the official process) and of aligning the leadership team around what is really there. Until that work is done, every tool, every platform, every framework, every transformation programme, every AI deployment is being installed on top of a foundation that cannot support it. The intervention amplifies the misalignment instead of resolving it. The cost compounds. The leadership team commissions the next thing. The pattern repeats.
This is not a new observation. It has been true for as long as organisations have been buying tools to solve problems. What is new is that the cost of getting the order wrong is becoming impossible to absorb quietly, because AI is exposing it at machine speed. The misalignment that used to take eighteen months to surface in a failed transformation programme now surfaces in three months in a failed AI deployment. The cost is the same. The visibility is faster.
I am writing this because I think the next two to three years will see the industry slowly catch up to a principle that has, in fact, been true for two hundred. I have been saying so for most of my career. I would rather be early and right than late and validated.
This is the record.
What every wave has had in common

The history of commercial technology is a history of organisations confusing the availability of a new capability with the capacity to deploy it. The steam engine made factories possible. The factories that succeeded were the ones whose owners had understood the underlying logistics of materials, labour, and process before the engines arrived. The factories that failed were the ones that bought the engines first and tried to build the operating model around them.
Electrification followed the same pattern. The electric motor was a transformative technology. The firms that captured the value were the ones that redesigned their work around what the motor could do. The firms that bolted electric motors onto factories designed for steam belts captured almost no productivity gain at all. The Stanford economist Paul David documented this famously: it took thirty years for the productivity benefits of electrification to show up in the data, because for thirty years most factories were running electric motors through operating models designed for steam.
The personal computer followed the same pattern. The early productivity-paradox literature is essentially a long, embarrassed footnote to the same observation: organisations that bought computers without redesigning the work around them got almost nothing. Organisations that did the underlying work first got transformative gains.
The internet followed the same pattern. Cloud followed the same pattern. SaaS followed the same pattern. Each wave produced a small group of organisations that captured most of the value because they had done the foundational work, and a much larger group that bought the technology and watched it fail to land. The pattern has been so consistent across two centuries of technological innovation that it has its own academic literature, its own consulting case studies, and its own quiet acknowledgement among practitioners who have lived through several waves.
It has not, however, prevented the same mistake from being made every time. Each new wave produces a fresh cohort of leaders who genuinely believe that this technology will be different. This one will solve the problem without requiring the underlying work. AI is the current iteration of that belief.
It will not be different. It has never been different. The pattern is older than any technology currently in use, and the pattern is going to determine which organisations capture the value from AI in exactly the same way it has determined who captured the value from every previous wave.
What the pattern looks like inside the organisation

The historical pattern repeats inside every individual enterprise that buys a tool before doing the underlying work. The mechanics are predictable enough that I have come to give them names, because patterns without language are harder to argue with and harder to address.
The leadership team commissions the new technology. Each leader in that conversation is looking at the same business through a different lens: the CEO seeing ambition versus delivery, the COO seeing execution drift, the CFO seeing value leakage, the CRO seeing inconsistent revenue, the CMO seeing launch chaos, the CIO seeing risk and tool sprawl. Each one is right about what they are seeing through their lens. None of them is sufficient on their own. The conversation moves to what to do (which vendor, which platform, which timeline) long before the leadership team has reconciled what they are actually solving for. The decision that follows is rational from inside whichever lens was loudest in the room when the budget got signed off. It is dysfunctional from outside it.
The technology then gets deployed into a foundation that has not been reconciled. It amplifies whichever lens commissioned it. The CFO's value leakage gets a finance dashboard. The COO's execution drift gets an operating model review. The CIO's risk gets a governance policy. Each intervention is a real response to a real lens. None of them addresses the painting that the rest of the leadership team is also looking at.
Six months later, the leadership team is back in the room. The new technology is producing competent outputs, but the value is not landing. The dashboard is producing data nobody is acting on. The operating model review has been received politely and quietly shelved. The governance policy is creating friction with the regional teams it was supposed to support. The temptation is to conclude that the technology was poorly chosen, the vendor was inadequate, or the change management was insufficient. The honest read is more uncomfortable. The technology was addressing different paintings, because the leadership team had never agreed on which painting they were standing in front of.
The cost of this pattern compounds. I have been calling the accumulated cost diagnosis debt: the build-up of every strategic decision built on inputs that no one stress-tested. The debt accrues quietly. It has the same compounding property as technical debt. It eventually has to be paid down, usually at the worst possible moment, by the team least responsible for taking it on.
AI is the current technology making this debt visible. When AI is deployed into a business that already has clear decisions, clean handoffs, owned outcomes, and a coherent way of working, it amplifies that clarity and the value compounds. When it is deployed into a business with fragmented execution, fuzzy ownership, and unreconciled leadership lenses, it amplifies that too. The output looks credible but isn't traceable. The teams using it lose confidence. The initiative gets quietly recategorised as "promising but early."
The model isn't failing. It's revealing. AI is, in a phrase I have come to use often, a stress test of your operating model. The enterprises getting transformative value from AI are not the ones with the best models. They are the ones whose work was already structurally legible before the model arrived. The pattern is the same pattern that determined who captured value from electrification, computing, and the internet. The order in which the work gets done determines whether the work works at all.
Why the order matters, and what it actually requires

The reason the order matters is mechanical, not philosophical. Tools amplify what they find. They cannot reconcile what they find. They cannot diagnose what they find. They have no faculty for distinguishing between a fragmented operating model and a coherent one; they amplify both. If the foundation is reconciled, the amplification produces value. If it isn't, the amplification produces faster, more visible, more expensive versions of the same misalignment.
This is true of every tool, in every wave, in every era. It will be true of whatever comes after AI. The order is not an opinion. It is a structural property of how technology and organisations interact.
The correct order is not complicated. It is unglamorous, and it is slower than leadership teams usually want it to be.
The first step is the work of structural diagnosis, done honestly. Not a survey. Not a workshop with sticky notes. A disciplined, evidence-led examination of how the organisation actually operates beneath what it says about itself: where the friction lives, where decisions stall or get re-opened, where ownership is fuzzy, where the official process diverges from the real one, where value is leaking, where execution breaks. This work is uncomfortable because it surfaces things the organisation has been quietly avoiding. It is also the only foundation on which durable subsequent work can be built. Every previous wave of technology has rewarded organisations willing to do this work and punished those that weren't. AI will do the same.
The second step is to bring the leadership team into reconciliation. Not alignment in the consultant sense (agreement on the answer) but reconciliation of the picture. Each leader articulating what they are seeing through their own lens, holding their interpretation in suspension long enough to see what the others are seeing, and treating the gap between the lenses as the diagnostic information rather than the noise to be smoothed over. This is the work most leadership teams have learned to avoid, because it requires a level of honesty that is hard to ask for and harder to give. It is also the work that prevents almost everything that goes wrong downstream.
The third step is the part the industry usually starts with. The tools, the technology, the AI, the frameworks, the transformation programmes. These work and they work well when they are deployed into a reconciled foundation. They amplify whatever they find. When the foundation is reconciled, they amplify clarity. When it isn't, they amplify the misalignment that was already there.
The industry has been doing step three first, then trying to retrofit steps one and two when step three doesn't land. That sequence cannot work. It has never worked. It will not start working because the tools get better. The tools getting better, in fact, makes the failure mode more visible and more expensive, because the amplification is faster.
Real growth (the kind that compounds, the kind that survives leadership transitions, the kind that doesn't require the strategy to be rebought every two years) has always come to organisations willing to dig to the root cause before reaching for the next tool. It is not the only thing that has produced enduring success, but it is one of the few things that has reliably differentiated the organisations that captured value from each technology wave from the ones that didn't.
What I have done about it

I have been writing about this pattern for years. Mentoring on it. Consulting on it. Speaking about it. Setting up a consultancy specifically to address it. Giving away frameworks, structured thinking, free advice, and substantial amounts of unpaid mentoring to people I thought might be in a position to do something about it. I have done this because I genuinely believed that if enough of the discipline could see the pattern, the practice would shift.
It hasn't. Not yet. The pattern has continued. The strategy rebuy loop has, if anything, accelerated. The amount of money being spent on tools, technology, and AI deployments that were always going to fail to land has grown rather than shrunk.
At some point, observing the pattern stopped being enough. If the work that should come first wasn't being done because nobody had built a credible way to do it at the scale enterprises needed, then someone had to build that. So I did.
Discovery is the diagnostic methodology. It is built, it is operational, and it has been refined across the work I have been doing for years. It is what the first step of the correct order looks like when it becomes formal, evidence-led, and disciplined enough to be reliable across different sectors and different sizes of enterprise. The Relativity Framework is the system that supports the second and third steps, the operating system that helps a leadership team work in a reconciled state once they have arrived in one, with an AI layer designed specifically to operate across structured organisational thinking rather than over the unreconciled debris that most enterprises currently feed their AI systems. Relativity launches in July of this year.
I am not naming Discovery and Relativity here because this is a sales asset. It isn't. This piece is not promoted, not gated, and not designed to convert anyone. I am naming them because the conviction I have just spent four thousand words articulating would be hollow if I had not put my own time, money, and professional risk into building something to act on it.
The structural choices I have made about how Discovery and Relativity work are themselves evidence of how seriously I hold the position. Discovery is mandatory before any Relativity engagement. I will not run Relativity work for a client who wants to skip the diagnostic step. This is commercially inconvenient. It loses business. Most consultancies would never refuse work that way. I refuse it because the entire premise of the piece you are reading is that the diagnostic work has to come first, and accepting clients who want to skip it would make me a hypocrite about the position I am defending.
Relativity is sold as perpetual ownership with no subscription, no recurring fee, no forced renewals, and unlimited users. This is the lowest-revenue commercial model available for software of this kind. I have chosen it deliberately because the moment a software business depends on subscription renewals, its incentives quietly shift toward keeping clients dependent rather than making them capable.
The platform is designed for clients to outgrow me. The internal capability transfer is built in. The training is built in. The eventual goal is that the client runs Relativity without external support, with the consulting relationship reducing rather than increasing over time. This is the opposite of how most consulting and software businesses are structured. It is the structural choice that makes me, on most quarters, the most expensive way to grow my own business.
These three choices (mandatory Discovery, perpetual ownership, designed-to-outgrow) are not features. They are the commercial structure of the conviction. They are what I am willing to give up to build the work the way I believe it should be built. The reason I am stating them on the record is that the only credible way to claim conviction in writing is to point at the structural costs you have absorbed in service of it. Anyone can write about a pattern. Fewer people are willing to structure their own business in a way that demonstrably reduces their commercial upside in service of the position they are arguing for. That is what I have done.
What happens next
The pattern will continue until enough enterprises decide it costs more to keep ignoring than to address. Based on what I am watching with current AI deployments, that moment is closer than it has been at any previous point in my career. The failure rate is too high to explain away with the usual stories. The cost of repeated transformation programmes that don't land is outpacing the willingness of boards to fund them. The leadership teams that have been quietly carrying diagnosis debt are finding it surfacing in places that are too public to absorb.
When the conversation breaks open, the language will catch up. Some of the words I have been using (diagnosis debt, structural friction, reconciliation of the picture, operating model legibility, AI as a stress test) will become more common. Other people will arrive at similar formulations independently. That is fine. The point of putting language to a pattern is to make the conversation possible, not to own the words.
What matters, and the reason I am writing this piece now rather than waiting until the conversation is comfortable, is that the principle itself becomes durable. Tools come after. They always have. There has never been a silver bullet. There will not be one now. The organisations that succeed in the AI era will be the same kind of organisations that succeeded in every previous era of innovation. The ones willing to do the unglamorous work of understanding themselves clearly before they reach for the next tool that promises to do it for them.
I am willing to be early about this. I have been early about this for most of my career. I would rather be early and right than late and validated.
If you have read this far, you are probably either someone who has been seeing what I have been seeing, or someone who is starting to suspect that the pattern your enterprise has been quietly absorbing is not going to stay quiet for much longer. Either way, this piece is the record. It is here. It will stay here. It will be here in five years, when the conversation has caught up. It will be here in ten, when the language has settled and someone wants to know who was saying this before it was popular.
I was. I am. I will be.
The work itself is for another conversation.