The Last Infrastructure Decision You'll Make Without Knowing It


By Darryl Munro | Digital Leadership Academy

“I can smell the uranium on your breath from here.” — David Lange, Oxford Union, 1985

David Lange said that to a US nuclear advocate and, in doing so, crystallised a values position that would define New Zealand’s relationship with superpower politics for a generation. It wasn’t a technology argument. It wasn’t an economics argument. It was a clarity argument — a refusal to accept that dependency was inevitable just because everyone else had accepted it.

I’ve been thinking about that moment a lot lately. Because I think we are in the middle of a dependency decision of comparable consequence, and most organisations are making it without knowing it.


Is This the Electricity Moment?

The question being asked in boardrooms and strategy sessions right now — usually in the form of “what’s our AI strategy?” — is almost always framed as a productivity question. How do we get more output with fewer people? How do we automate the boring stuff? Where can we find efficiency gains?

That’s the wrong frame. And I say that as someone who has built and deployed AI systems in regulated industrial environments, not as a theorist.

The better question is this: Are we in the early 1900s equivalent of electrification, and if so, what are we actually deciding right now without realising it?

The electricity analogy is genuinely instructive. When factories began converting from steam and water power to electrical grids, they weren’t just upgrading their energy source. They were reorganising their entire production model around a centralised infrastructure they didn’t own or control. The capital economics were irreversible — once you’d rewired your factory floor, you weren’t going back to the mill race. And the companies that moved early and moved well built compounding advantages that their slower competitors never closed.

Large Language Models — LLMs, the technology underneath ChatGPT, Claude, Gemini, and the wave of AI tools now embedded in enterprise software — are exhibiting the same pattern. The compute economics are brutal. Training a frontier model costs hundreds of millions to billions of dollars. That limits the “power station” tier to three or four global players. Most organisations will never run their own frontier model any more than a 1910 factory would build its own power station.

The dependency is structural. The question is whether we govern it, or whether it governs us.


Why This Is Not Word, Outlook, or Zoom

Every major technology shift gets compared to the last one. And there’s a version of the AI conversation happening in organisations right now that treats LLMs as the next productivity tool — smarter search, faster drafting, better summarisation. The equivalent of moving from fax to email, or from desktop to video conferencing.

It isn’t. Here’s why.

Tools augment human action. LLMs substitute human judgment.

Word, Outlook, Zoom — these are passive instruments. You still decide what to write, who to contact, what to argue. The cognitive work is yours. When an LLM drafts the email, summarises the contract, generates the analysis, or recommends the decision — and humans ratify rather than originate — the cognitive labour has shifted. At scale, organisations stop developing certain judgment capacities entirely. That’s not augmentation. That’s substitution. And you can’t easily un-atrophy institutional judgment once it’s gone.

This technology learns the shape of your organisation.

Word didn’t know your business. An LLM embedded in your workflows — trained on your documents, integrated into your processes, connected to your knowledge base — starts to encode how you think and operate. It isn’t just a tool you use. It becomes a mirror of your organisation’s collective intelligence, and that mirror is hosted by someone else. The vendor relationship changes character entirely.

The failure modes are invisible until they’re catastrophic.

A corrupted Word file is obvious. A deleted email is recoverable. An LLM that has been subtly manipulated, that hallucinates with confident authority, that reflects back systematic biases baked into its training data — these degrade quietly. Organisations won’t notice until a bad decision pattern has compounded for months. In regulated industries, that’s not an IT problem. That’s a liability problem.

The labour market impact is compressing across years, not generations.

The shift from agricultural to industrial labour took two to three generations. Electrification reshaped manufacturing over decades. That compression allowed social institutions — education, unions, welfare systems, regulation — to adapt, imperfectly but sufficiently. LLMs are compressing the equivalent cycle into years, possibly faster. The social absorption capacity is genuinely untested. We are running an experiment on workforce capability and organisational judgment at a speed that has no historical precedent.


The Strategic Frame Most Organisations Are Missing

Most organisations are currently treating LLM adoption as a procurement decision. Which vendor? What cost? Which use cases will show quick wins? That’s understandable — it’s the frame that procurement and technology teams know how to operate in.

It’s also profoundly insufficient.

The decision you are actually making — usually implicitly, at speed, under competitive pressure, without governance — is closer to this:

We are deciding where human judgment lives in our organisation. We are doing it now. And we may not get to revisit it.

That is a board-level statement, not a technology team statement. And most boards aren’t hearing it in those terms.

The questions that actually matter are not about vendor selection or use case prioritisation. They are:

Where must human judgment remain mandatory in our decision-making, and who is accountable for defending it?

What dependency are we accepting, on whose infrastructure, on what terms — and could we move away from it if those terms changed?

What organisational data and knowledge are we feeding into someone else's model, and what are we giving away when we accept the terms of service?

How would we detect a quiet failure — a confident hallucination, a manipulated model, a compounding bias — before it becomes a liability?

If your AI strategy document doesn't have answers to these questions, it's a use case list dressed up as a strategy.


The Antibiotic Parallel

I’ve used the electricity analogy because it’s the one most people reach for, and it has genuine explanatory power. But the analogy I keep returning to when I think about governance is different.

Antibiotics.

Transformative, genuinely life-changing technology. Adopted at massive scale because the upside was obvious and the urgency was real. Deployed without sufficient governance of the conditions and incentive structures that would lead to systematic misuse. And now we are managing the consequences of that governance failure in real time — antimicrobial resistance is one of the most significant public health threats of the coming decades, precisely because we optimised for short-term effectiveness and ignored the systemic risk.

The window to build governance infrastructure ahead of the dependency is closing. That’s the message that needs to land. Not to slow adoption — the competitive and productivity arguments for adoption are real. But to build the governance layer while you’re building the capability, not after the dependency is established and the leverage has shifted to the vendor.


What Good Governance Looks Like in Practice

I’ve written previously about inline AI governance — treating AI oversight as operational infrastructure rather than compliance documentation. The same principle applies at the organisational and sector level.

Good governance in this space doesn’t mean slowing adoption. It means:

Designing for model portability. Don’t hard-wire your processes to a specific vendor’s API. Build abstraction layers. The organisations that survived the transition from on-premise to cloud did so partly because they had designed for portability. The same discipline applies here.
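To make the portability point concrete, here is a minimal sketch in Python of what an abstraction layer can look like. All class and function names are hypothetical illustrations — in practice the vendor class would wrap a real SDK, but the point is that business logic depends only on the interface:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Abstraction layer: application code depends on this interface,
    never on a specific vendor's SDK or API shape."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAProvider(LLMProvider):
    # Hypothetical stand-in for a commercial API; stubbed here.
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class LocalOpenWeightsProvider(LLMProvider):
    # An open-weights model served in-house can sit behind the same interface.
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


def summarise(document: str, provider: LLMProvider) -> str:
    # Business logic sees only the abstraction; swapping vendors
    # becomes a configuration change, not a rewrite.
    return provider.complete(f"Summarise: {document}")
```

The discipline is the same one that eased the on-premise-to-cloud transition: the switching cost lives in one thin layer you own, not scattered through every workflow.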

Treating your data as the primary asset. The models are commoditising. Your data — your processes, your knowledge base, your fine-tuning corpus — is the real moat. Govern it accordingly. Know what you’re giving away when you accept a vendor’s terms of service.

Building explicit human judgment checkpoints. In regulated or high-stakes contexts, define where human sign-off is required and make that a design requirement, not an afterthought. This isn’t anti-AI — it’s how you maintain accountability in systems that can fail in non-obvious ways.

Watching the open-weights model landscape. Llama, Mistral, and the emerging generation of capable open models are the equivalent of on-site generation — not competitive with grid-scale frontier models for every task, but closing fast, and retaining your data sovereignty.


The Lange Moment

Here’s what struck me about the nuclear-free analogy as I’ve thought about this. New Zealand didn’t need to match superpower capability to take a principled stand. It needed moral clarity and the willingness to absorb the cost — which in that case was a downgraded intelligence relationship with the United States.

There are versions of that move available in the AI governance space, and New Zealand is arguably better positioned than most jurisdictions to make them. Not because we have outsized technology capacity, but because we have existing frameworks — particularly around Māori Data Sovereignty — that are internationally respected models for thinking about data as a collective resource requiring governance, not just a commercial asset to be optimised.

A harder line on citizen and organisational data being used to train commercial models, algorithmic accountability requirements in regulated industries, a commitment that no government decision affecting citizens will be made by an AI system without disclosed human accountability — none of these stop the global trajectory. But they establish a values posture. And values postures, taken seriously, have a way of exporting.


The Decision You’re Making Right Now

I want to leave you with a simple proposition.

Every organisation that is deploying LLMs into its core workflows right now — even tentatively, even in “pilot” mode — is making an infrastructure decision. It may not feel like one. It feels like a technology experiment, a productivity initiative, a way to stay competitive. But the cumulative effect of those decisions, across an organisation, across a sector, across an economy, is the rewiring of the factory floor.

The organisations that got electrification right didn’t just move fast. They understood what they were building toward. They made explicit decisions about what infrastructure they would own, what they would outsource, and where they would not accept dependency without governance.

That clarity is available to us right now, in this moment, before the dependency is fully established.

The question is whether we reach for it.


Darryl Munro is a senior technology and architecture leader with thirty years of experience across regulated industries. He writes about digital leadership, AI governance, and neurodiversity at the Digital Leadership Academy on Substack.


Tags: #AI #DigitalLeadership #AIGovernance #LLM #Technology #Strategy #NewZealand #FutureOfWork
