
AI is a collaboration problem, not a technology problem
The most consequential design work happening in AI right now has nothing to do with models, interfaces, or features. It’s about how humans and AI systems work together — who decides what, when a person steps in, and how expertise is preserved rather than replaced. Most organisations haven’t started that design work yet. The ones that have are building a durable advantage.
Microsoft’s 2026 AI trends report makes the shift explicit: AI is moving from a tool that answers questions to a genuine collaborator that works alongside people over time, with access to context, memory, and the ability to act. The framing from Microsoft’s Chief Product Officer for AI is clear: “The future isn’t about replacing humans. It’s about amplifying them.”
That’s the right aspiration. The hard part is designing for it.
Why collaboration design is harder than it looks
When most organisations talk about human-AI collaboration, they mean something relatively simple: a person uses an AI tool to do their job better. The AI handles the rote work; the human handles the judgement calls. Clean division of labour, easy to explain, easy to implement.
The problem is that this model breaks down as AI systems become more capable. The line between ‘rote work’ and ‘judgement’ is not fixed. AI systems are increasingly capable of performing tasks that we assumed required human expertise: synthesising complex information, identifying patterns across large datasets, generating recommendations in ambiguous situations. At what point does the human’s role shift from doing to reviewing? From reviewing to ratifying? From ratifying to rubber-stamping?
“The most important design question in AI right now isn’t what the system can do. It’s what the human does when the system can do almost everything.”
These are not hypothetical concerns. In sectors from healthcare to financial services to legal, organisations are already grappling with situations where AI systems are producing outputs that human reviewers are systematically approving without meaningful scrutiny — not because the humans are lazy, but because the volume is too high, the outputs are too confident, and the cost of disagreeing feels too great. The collaboration model has collapsed into automation with a human rubber stamp.
What genuine collaboration design looks like
Designing genuine human-AI collaboration requires answering a set of questions that most organisations haven’t asked explicitly. They’re not technology questions. They’re operating model questions:
Where does human judgement genuinely add value? Not where we currently apply human judgement, but where it actually changes outcomes. In many workflows, human review is present for accountability reasons but adds little epistemic value. In others, human pattern recognition and contextual knowledge are irreplaceable. Being honest about the difference is the starting point.
What does meaningful oversight look like at scale? If an AI system is producing 10,000 decisions a day and a human team is reviewing them, what does review actually mean? Is the volume compatible with genuine scrutiny? If not, the collaboration model needs redesigning before the system is deployed, not after.
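The throughput question above can be made concrete with back-of-envelope arithmetic. The numbers below (team size, working hours) are illustrative assumptions, not figures from any real deployment:

```python
# Back-of-envelope check: how much scrutiny does each decision actually get?
# All inputs are hypothetical, chosen only to illustrate the calculation.
decisions_per_day = 10_000
reviewers = 10              # assumed team size
hours_per_reviewer = 8      # assumed working day

total_review_seconds = reviewers * hours_per_reviewer * 3600
seconds_per_decision = total_review_seconds / decisions_per_day
print(f"{seconds_per_decision:.0f} seconds of review per decision")
```

Under these assumptions each decision gets under half a minute of attention, which is a useful sanity check to run before calling a review step "meaningful oversight".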
How do we preserve expertise over time? One of the less-discussed risks of human-AI collaboration is skill erosion. When AI handles the tasks that previously built human expertise — the junior work that teaches people to think — the pipeline of expert humans who can meaningfully oversee AI outputs begins to thin. Collaboration design needs to account for how expertise is developed and maintained, not just how it’s applied.
What happens when the human and AI disagree? This is the question most organisations leave implicit. If a human reviewer overrides an AI recommendation, what happens? Is it logged? Investigated? Does it feed back into the system? The answer to this question shapes the entire incentive structure of the collaboration.
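One minimal way to make disagreement explicit rather than implicit is to treat every override as a recorded event. The sketch below is illustrative only; all names and fields are hypothetical, not drawn from any particular system:

```python
# Illustrative sketch: recording human overrides of AI recommendations
# so they can be counted, investigated, and fed back into evaluation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideEvent:
    """Created whenever a reviewer rejects an AI recommendation."""
    case_id: str
    ai_recommendation: str
    human_decision: str
    reviewer: str
    reason: str   # required field: forces an articulated rationale
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

overrides: list[OverrideEvent] = []

def record_override(event: OverrideEvent) -> None:
    # Logging is the minimum; a fuller system might also route the event
    # to an investigation queue and into periodic model evaluation.
    overrides.append(event)

record_override(OverrideEvent(
    case_id="case-001",
    ai_recommendation="approve",
    human_decision="decline",
    reviewer="analyst-7",
    reason="Recommendation missed a recent change in circumstances",
))
```

Even this small amount of structure changes the incentives: overrides become data to learn from rather than friction to avoid, and the override rate per reviewer or case type becomes something the organisation can actually measure.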
The design gap
At Journey, we see a consistent pattern: organisations invest heavily in AI capability and relatively little in collaboration design. The technology gets sophisticated. The human side of the system gets assumed. The result is AI that works in demos and disappoints in production — not because the model is wrong, but because the environment it’s operating in hasn’t been designed for it.
The organisations building durable AI capability are treating collaboration design as a first-class design problem — as important as the interface, the model, and the data. They’re involving the people who will work alongside AI in the design process, not just the deployment process. They’re testing collaboration models in the same way they test technology models: with real users, in real conditions, with honest measurement of outcomes.
The bottom line: The most important design work in AI right now isn’t happening in model labs. It’s happening — or not happening — in the gap between what AI systems can do and how the humans who work alongside them actually behave. Closing that gap is a product and operating model challenge. Most organisations haven’t started.
Dane Tatana
Chief Executive Officer (Ngāti Raukawa, Ngāti Toa Rangatira)
Elevating the customer experience is Journey’s purpose. And nobody embodies that more than Dane. A designer and CX strategist, Dane has worked with some of the most customer-obsessed brands in the world across Europe, the Middle East, North America, and Australasia.

