The end of the bottom-up AI experiment

For the past three years, the dominant enterprise AI playbook has been bottom-up: run pilots everywhere, see what sticks, scale what works. It generated impressive adoption statistics and enormous learning. It has not, for most organisations, generated proportionate business value. The evidence is now clear enough that the best-performing companies are changing approach.

PwC’s 2026 predictions flag a significant strategic shift among AI leaders: the abandonment of crowdsourced, decentralised AI initiatives in favour of focused, top-down enterprise programs. The logic is straightforward, and the results are backing it up.

Why the experiment model ran out of road

The bottom-up AI experiment approach had real virtues. It built broad capability across the organisation. It surfaced use cases that leadership wouldn’t have identified from the top. It created internal champions and built familiarity with AI tools at every level. These are genuine benefits.

The problem is that pilots optimise for the wrong thing. A successful pilot demonstrates that AI can do something useful. It does not demonstrate that doing that useful thing at scale will change the competitive position of the business. The gap between ‘this works’ and ‘this matters’ is where most enterprise AI investment has gone to die.


“AI adoption numbers are easy to generate. AI outcomes that change the business are hard — and require a fundamentally different approach to investment.”


There’s also a resource problem. Running many pilots simultaneously dilutes both attention and investment. Each initiative gets enough resource to prove a concept, not enough to build robust infrastructure, change management, or integration with core systems. The result is a portfolio of interesting experiments that never compound into lasting capability.

What the top-down approach looks like in practice

The companies shifting to top-down AI programs share a common approach: senior leadership identifies a small number of workflows — typically three to five — where significant AI investment could materially change business outcomes. Not marginal improvement. Step-change.

Those workflows get concentrated resource: dedicated teams, proper data infrastructure, rigorous change management, and the organisational authority to redesign processes rather than just overlay technology on existing ones. The rest of the organisation continues with more exploratory AI use, but resource doesn’t flow there at scale.

The selection criteria for those priority workflows matter enormously. The best-performing companies are choosing on the basis of strategic leverage — where does AI change what the business can offer, or how fast it can move, or what it costs to compete? — rather than implementation ease or internal enthusiasm.

The organisational challenge

Shifting from bottom-up to top-down AI strategy is harder than it sounds. It requires telling parts of the organisation that their experiments — which may be genuinely impressive — are not going to receive scaled investment. It requires leaders to make strategic bets with high visibility and significant accountability. It requires resisting the pressure to spread investment widely enough that no one can be blamed if it doesn’t work.

It also requires a clear view of where the leverage actually is. Many leadership teams find, when they sit down to make these choices explicitly, that they don’t have enough clarity about their own business model — where value is created, where it’s competed away, where the real inefficiencies lie — to make confident strategic bets on AI. That’s a signal. The strategic work needs to happen before the AI investment decision, not after.

The diagnostic question

If you’re a business leader trying to assess where you stand, the most useful question isn’t ‘how many AI projects do we have running?’ It’s: ‘if I had to point to the three things our AI investment will have changed about this business in three years’ time, what would I say?’

If the answer is specific — clear workflows, clear outcomes, clear metrics — you’re probably in the top 20%. If the answer is a list of capabilities or tools or platforms, without a clear line to strategic outcomes, the experiment phase has run its course. The next phase requires a harder set of choices.

The bottom line: Impressive AI adoption numbers rarely produce proportionate business outcomes. The question to take into your next leadership conversation: do we have an AI strategy, or do we have an AI activity log?

Dane Tatana

Chief Executive Officer (Ngāti Raukawa, Ngāti Toa Rangatira)

Elevating the customer experience is Journey’s purpose. And nobody embodies that more than our chief executive, Dane. A designer and CX strategist, Dane has worked with some of the most customer-obsessed brands in the world, throughout Europe, the Middle East, North America and Australasia.

[AKL]
Nº 1 Boundary Road
Hobsonville Point
Auckland 0618

[LDN]
Nº 207 Old Street
London
EC1V 9NR

Brave Navigators for Bold Journeys.
