This Is Not the Industrial Revolution

24 March 2026

You hear it everywhere. "It's just like the Industrial Revolution." Politicians say it. Tech CEOs say it. That well-meaning colleague at the school gate says it. The argument goes: machines replaced muscle jobs, but new ones appeared. AI will replace thinking jobs, but new ones will appear too. Relax. We've been here before.

It's a comforting story. There's just one problem with it.

The Industrial Revolution replaced our bodies. AI replaces our minds. And that difference changes everything about how this plays out.

The Comparison That Doesn't Quite Work

When steam engines arrived, they took over physical labour. Lifting, weaving, hauling. But the new economy still needed human brains. Factory management, engineering, accounting, teaching - all the thinking that organised the machines. The displaced weaver could, in theory, learn to operate the loom or manage the floor.

AI flips that. The jobs most exposed now are the cognitive ones - the writing, analysing, organising, and deciding that white-collar workers do all day. A Yale Budget Lab review found that since ChatGPT launched, job postings for structured, repetitive cognitive work dropped by 13%. Not manual labour. Desk jobs.

So when someone says "new jobs will be created," the obvious question is: if those new jobs are also cognitive, what stops AI from doing those too?

It's a question the Industrial Revolution never had to answer. Steam couldn't think. AI can.

The Selection Problem

There's a popular counter-argument that goes like this: AI generates, humans select. AI produces ten options, and you pick the right one. The value is in the choosing, not the making.

Right now, that's true. Anyone using AI tools day to day knows the pattern. You prompt, it produces, you filter. The judgment call is still yours.

But AI is already getting better at judgment. A Cambridge Judge Business School analysis found that AI now exceeds human baselines by over 100% in pattern recognition, diagnostics, and predictive analytics. It outperforms us at spotting what's there. The gap is closing on knowing what matters.

So will selection eventually be automated too? Probably, for a lot of things. Triage decisions, content moderation, scheduling, prioritisation - these are judgment calls, and AI will handle them fine. In some cases it already does.

But here's what gets missed. Judgment without stakes is just pattern matching. When a parent complains that their child didn't get their first-choice activity, they don't want to hear that the algorithm was fair. They want to talk to someone who carries the consequence of the decision. Someone they can look in the eye.

That's not a cognition problem. It's a social contract.

Fewer People, More Output

The honest version of the future probably isn't mass unemployment. It's something more subtle and, in some ways, harder to fix.

The IMF estimates that AI will affect roughly 40% of jobs globally. But "affect" doesn't mean "eliminate." It often means: fewer people doing what used to take many. Ten teachers supported by AI might deliver what thirty did before. One accountant with AI tools does the work of a team.

BlackRock's Larry Fink said this month that the real risk isn't AI taking your job - it's AI concentrating the gains among those who own the technology. Productivity goes up. Employment stays flat. Wages stagnate. The pie grows, but fewer people get a slice.

This is already showing up. Unemployment among 20-to-30-year-olds in tech-exposed roles has risen by nearly 3 percentage points since early 2025, according to Dallas Federal Reserve data. Meanwhile, employer demand for analytical and creative roles - the kind AI enhances rather than replaces - grew by 20%.

The pattern isn't replacement. It's concentration.

So Where Does That Leave Us?

Helen, a content creator whose work on this topic sparked this piece, makes an argument worth hearing. She lists six reasons humans aren't going anywhere, among them: cheaper thinking creates new problems, AI learns from the past but doesn't invent the future, somebody has to be responsible, and people want human things - therapy from a person, teaching from a person, leadership you can look in the eye.

She's right on most of those. But the strongest point is the one about responsibility. Every role where outcomes carry real stakes will have a human in it. Not because AI can't do the thinking, but because someone has to answer for it. That demand gets stronger as AI gets more capable, not weaker.

The weaker point is "new problems will appear." They will. But if the new problems are intellectual ones, AI will be sitting right there, ready to help solve them with fewer people than before.

The Industrial Revolution created a middle class because it needed millions of human brains to run the new economy. AI might not need that. It might need thousands where it used to need millions. That's not a technology problem. That's a political one.

For parents, the question isn't whether your child will find work. It's whether the work available will carry the same security and reward that cognitive jobs have for the last fifty years. The skills that will matter aren't the ones AI can replicate - analysis, writing, coding. They're the ones it can't easily own: responsibility, relationships, the willingness to be wrong and accountable for it.

Nobody knows how this plays out. But we should probably stop telling ourselves we've seen it before. We haven't.