
Teams of AI agents are replacing solo tools
The dominant model for AI at work — one assistant, one task, one conversation — is already giving way to something more complex and more powerful: coordinated networks of specialised agents that work in parallel, hand off to each other, and complete multi-step workflows without human intervention at each stage.
MIT Technology Review named agentic AI its top trend of 2026. The shift represents a fundamental change in how AI creates value inside organisations — and it has significant implications for anyone currently building or buying AI capability.
From assistant to orchestra
The first wave of enterprise AI was characterised by individual tools doing individual things. A model that summarises documents. A tool that generates copy. An assistant that answers questions about your data. Each valuable in isolation. Each fundamentally limited by its isolation.
Agentic AI changes the architecture. Rather than one model handling a task end-to-end, agent frameworks allow multiple specialised models to collaborate: one researches, another analyses, another drafts, another reviews, another routes the output to the right place. Each agent is an expert in its domain. The intelligence is in the coordination.
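The division of labour described above can be sketched in a few lines. This is a deliberately minimal illustration, not any particular framework's API: each "agent" is a stubbed function, and production systems (built on frameworks such as LangGraph or AutoGen) would add model calls, routing, memory and error handling around the same shape.

```python
def research(brief):
    # Gather raw material (stubbed; a real agent would call a model or search API)
    return f"notes on {brief}"

def analyse(notes):
    return f"analysis of {notes}"

def draft(analysis):
    return f"draft based on {analysis}"

def review(draft_text):
    # A reviewing agent can approve or send back; this stub always approves
    return {"approved": True, "text": draft_text}

def route(result):
    # Route the output to the right place based on the review outcome
    destination = "publish" if result["approved"] else "revise"
    return destination, result["text"]

def run_pipeline(brief):
    # Each specialised agent hands its output to the next.
    # The intelligence is in the coordination, not in any single step.
    notes = research(brief)
    analysis = analyse(notes)
    text = draft(analysis)
    reviewed = review(text)
    return route(reviewed)

destination, output = run_pipeline("Q3 market trends")
```

Even in this toy form, the design questions are visible: the handoff contracts between agents matter more than any individual agent's sophistication.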
“The real design challenge isn’t picking the right tool. It’s orchestrating agents that work intelligently across your entire operation.”
Early implementations are already operating at scale. In financial services, agent networks are handling end-to-end loan processing — from document intake through risk assessment to decision and communication — with human oversight at defined checkpoints rather than every step. In software development, agent systems are writing code, running tests, diagnosing failures and proposing fixes in parallel loops that compress days of engineering work into hours.
Why this matters for your AI strategy
Most enterprise AI strategies are currently built around a tool-by-tool logic: identify a use case, find a model or product that serves it, deploy, measure, repeat. That approach made sense in the first wave. It is increasingly insufficient for the second.
Agentic frameworks require a different kind of thinking — less about what a single tool can do, more about how a system of capabilities can be composed to handle complex, multi-step processes. The questions change: How do we design handoffs between agents? Where does human oversight sit? How do we maintain accountability when no single agent owns the outcome? How do we ensure an agent network reflects our values and constraints, not just our instructions?
These are product design questions and operating model questions, not just technology procurement questions. Organisations that treat agentic AI as a vendor selection problem will find themselves with capable tools they can’t connect into coherent systems.
The governance challenge
One of the less-discussed implications of agentic AI is accountability. When a single model makes a decision, the accountability chain is relatively clear. When a network of agents produces an outcome — each one making micro-decisions that compound into a macro-result — accountability becomes genuinely complex.
The organisations deploying agent frameworks most effectively are investing as much in governance design as in technical architecture. They’re defining explicitly: which decisions require human sign-off, which can be automated with audit trails, which are outside the scope of any agent regardless of apparent capability. That clarity isn’t a constraint on speed — it’s what makes speed safe.
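One way to make that clarity concrete is to encode the oversight rules as explicit policy data that every agent action must pass through, rather than leaving them implicit in individual agents. The decision categories and rule names below are illustrative assumptions for the sketch, not a standard.

```python
# Illustrative governance gate: each action type maps to an oversight rule.
POLICY = {
    "send_customer_email": "auto_with_audit",   # automated, but logged
    "approve_loan": "human_signoff",            # requires a human checkpoint
    "change_credit_policy": "out_of_scope",     # no agent may do this, ever
}

audit_log = []

def gate(action, payload, human_approver=None):
    """Return True if the action may proceed under the policy."""
    rule = POLICY.get(action, "human_signoff")  # unknown actions get the safest rule
    if rule == "out_of_scope":
        return False  # outside agent scope regardless of apparent capability
    if rule == "human_signoff":
        approved = bool(human_approver and human_approver(action, payload))
        if approved:
            audit_log.append((action, payload, "human_approved"))
        return approved
    # auto_with_audit: proceed, but record every decision for later review
    audit_log.append((action, payload, "auto"))
    return True
```

A routine action passes with an audit entry, `gate("send_customer_email", {...})`; an out-of-scope action is refused no matter what, `gate("change_credit_policy", {...})`. Defaulting unknown actions to human sign-off is the detail that makes this safe to extend.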
What to do now
You don’t need to be building agent networks today to start preparing for a world where they’re the norm. Three practical starting points:
Map your complex workflows. The highest-value targets for agentic AI are processes that currently require multiple people, multiple tools, and significant coordination overhead. Identifying those processes now gives you a clear pipeline of opportunities.
Audit your data infrastructure. Agent networks are only as good as the data they can access and the APIs they can call. Many organisations will find their data architecture is the binding constraint — better to know that now than after committing to an agent strategy.
Build the governance conversation early. The questions about human oversight, accountability, and agent scope need to be answered before deployment, not after an incident. Getting cross-functional alignment on those questions now is significantly easier than retrofitting governance to a live system.
The bottom line: If your current AI strategy is tool-by-tool, you’re building for yesterday. The organisations pulling ahead are thinking in systems — and the gap between them and everyone else is widening.
Dane Tatana
Chief Executive Officer (Ngāti Raukawa, Ngāti Toa Rangatira)
Elevating the customer experience is Journey’s purpose. And nobody embodies that more than our chief executive, Dane. A designer and CX strategist, Dane has worked with some of the most customer-obsessed brands in the world, throughout Europe, the Middle East, North America and Australasia.

