March 2, 2026

The Pipeline Problem

ai, software engineering, leadership, culture, future of work

Every few weeks, a CEO makes headlines predicting that AI will eliminate jobs, reshape entire industries, or render human labor obsolete. Sometimes it's Elon. Sometimes it's a tech founder trying to justify a valuation. The framing changes, but the message is consistent: AI is coming for your job.

I don't buy it. Not the way they're selling it.

The Mechanical Reality

AI is trained on human output. All of it. Every model, every capability, every "breakthrough" is downstream of human work that already happened. If humans stop producing novel thinking, the pipeline that feeds AI dries up. That's not a philosophical argument. It's a mechanical one.

This is the part that gets glossed over in the big predictions. AI doesn't know the future. It can't extrapolate what humans haven't yet created. It's a pattern-matching engine built on top of everything we've already done. The moment humans stop participating in the creation of new knowledge, new code, new ideas, AI either stalls, starts solving problems nobody has, or begins recycling its own output. None of those outcomes look like the revolution being promised on earnings calls.
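
To make the mechanical point concrete, here's a toy sketch of that recycling failure mode. This is a deliberately simplified illustration, not a real training pipeline: the "model" is just a fitted Gaussian, retrained each generation exclusively on samples from its previous self, with no fresh human data entering the loop.

```python
# Toy illustration of the "recycling its own output" failure mode:
# fit a simple model to data, sample from the fit, refit on those
# samples, and repeat. Each generation trains only on the previous
# generation's output.
import numpy as np

rng = np.random.default_rng(seed=42)

# Generation 0: "human" data, with real variety in it.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 201):
    # "Train": estimate the distribution from the current dataset.
    mu, sigma = data.mean(), data.std()
    # "Deploy": the next training set is purely model output.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 40 == 0:
        print(f"gen {generation:3d}: learned std = {sigma:.3f}")

# The learned spread shrinks generation over generation: with no new
# input, the model converges toward repeating a narrow slice of what
# it once knew.
```

Run it and watch the spread collapse. A production LLM pipeline is enormously more complex than a fitted Gaussian, but the direction of failure under pure self-training is the same one researchers have documented as model collapse.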

Think about what happens if you actually follow the "replace the humans" logic to its conclusion. You train a model on 20 years of software engineering output. Great. It can reproduce patterns from those 20 years. But the problems your company will face in year 21 don't exist in that training data. The market shift, the new regulation, the customer behavior nobody predicted. Those require a human being who can sit with ambiguity, apply judgment, and build something that has never existed before.

AI is incredible at known problems. It falls apart on the unknown ones. And the unknown ones are where all the value lives.

What CEOs Are Actually Saying

When a CEO says "AI will replace workers," they're usually not speaking to you. They're speaking to investors. And what investors hear is "we can cut headcount and increase margins." Whether it's operationally true is secondary.

I don't think most of these leaders believe what they're saying, at least not as literally as it sounds. Elon says provocative things because it moves markets and attention. Other CEOs are more measured in private than the headlines suggest. But the effect is the same. The prediction becomes a justification for reducing investment in human talent. And that's where the real damage starts. Not because AI took your job, but because your company's leadership convinced themselves it could.

There's a pattern I've noticed. A CEO announces an AI initiative. Headcount gets frozen or reduced. The remaining team is told to "use AI to be more productive." And for a while, it works. Output stays steady. Maybe it even goes up. But six months, twelve months later, the cracks show. The code is more generic. The architecture decisions are safe but uninspired. The technical debt is piling up because nobody's making the hard calls that require deep experience.

The problem is that productivity and capability are not the same thing. AI can help you move faster on known paths. It can't tell you which path to take. And it definitely can't tell you when to cut a new one.

The High-Skill Blindspot

The "AI replaces jobs" narrative makes some sense for highly repetitive, pattern-based tasks. It falls apart completely when applied to high-skill work like software engineering, where the job is mostly about navigating ambiguity and making judgment calls.

Software engineering isn't typing code. If it were, AI would've replaced us already. The actual work is understanding a problem domain that's poorly defined, making tradeoff decisions with incomplete information, recognizing when the requirements are wrong, and building systems that need to survive contact with the real world. That's judgment. That's experience. That's the thing you develop by being a human who has built things and watched them succeed or fail.

I've been building software for nearly 20 years. In that time, I've worked at Microsoft, DocuSign, Philips, and ICE. The hardest problems I've solved weren't hard because I didn't know the syntax. They were hard because nobody agreed on what the problem actually was. Because the system had grown in ways nobody anticipated. Because the business needed something yesterday that contradicted what they said they needed last quarter.

AI can autocomplete my code. It can generate boilerplate. It can even suggest architecture patterns. And I use it for all of those things. But it can't sit in a room where three teams disagree on scope and figure out the path forward. It can't look at a system under load and feel that something's off before the metrics confirm it. That's not mysticism. That's pattern recognition built on years of real experience. The kind of experience that only comes from doing the work.

The Cost of Being Wrong

The real risk isn't that AI replaces engineers. It's that companies believe it can, underinvest in human talent, and then spend years wondering why their products feel increasingly generic and their technical debt is out of control.

This is already happening. Companies that gutted their senior engineering teams in favor of "AI-augmented" junior teams are starting to feel it. The output looks fine on the surface. The velocity metrics are green. But the architectural decisions are shallow. The systems are fragile. The kind of deep, experienced thinking that prevents costly mistakes down the road is gone, and nobody notices until the mistakes show up.

Here's what I think is actually going to happen. AI will become a standard tool in every engineer's workflow, just like IDEs, version control, and CI/CD became standard before it. The engineers who learn to use it well will be more productive. The ones who don't will fall behind. But the humans who understand the problem, who carry the context, who make the calls that AI can't? They'll be more valuable, not less.

The companies that understand this will attract the best talent. The ones that don't will learn the hard way that you can't automate your way out of needing people who know what they're doing.

The Question Nobody's Asking

Instead of asking "what jobs will AI replace," we should be asking "what happens to AI when humans stop doing the work it learns from?" Because that's the question with the uncomfortable answer.

If you follow the replacement logic far enough, you arrive at a dead end. AI needs novel human output to improve. If you eliminate the humans producing that output, you don't get a smarter AI. You get a stale one. The CEOs predicting the end of human work are sawing off the branch they're sitting on. They just can't see it yet.

Here's the thing nobody on those earnings calls wants to say out loud: if humans are out of work, then so is AI. Not eventually. Not theoretically. Mechanically. AI doesn't generate knowledge. It recombines the knowledge humans already produced. The models, the products, the revenue projections, all of it depends on a continuous supply of novel human thinking flowing into the system. Cut that supply and the models don't get smarter. They plateau. They start regurgitating themselves. The product degrades. The competitive moat evaporates.

Every CEO promising investors that AI will replace their workforce is describing a system that consumes its own fuel supply. The only version of the future where AI keeps getting better is the one where humans keep doing meaningful work. Not busywork. Not "prompt engineering." Real work. The kind that produces the novel output AI needs to stay useful.

So the next time a CEO announces that AI will make human workers obsolete, ask the follow-up question they're hoping you won't: and then what does the AI train on?
