March 30, 2026

What If We Built Forward?

The software industry has developed an appetite for tearing down patterns that far outpaces its appetite for building something better. What if we stopped debating whether projects "need" good architecture and started making it invisible?

Tags: software architecture, event-sourcing, abstractions, developer experience, opinion

There's a genre of tech writing that's become really popular. You've seen it. "The patterns that failed." "The architecture advice that aged poorly." "Why [X] was always wrong." They get engagement. They get claps. They get head nods from developers who've been burned by over-engineered codebases.

And I get it. I've worked in systems where the architecture felt like it existed to serve itself. Where a five-line feature touched twelve files across four projects. That pain is real.

But I keep reading these posts and walking away with the same question: okay, so now what?

The Tear-Down Economy

The software industry has developed an appetite for deconstruction that far outpaces its appetite for proposing something better. We're really good at explaining why things failed. We're not nearly as good at explaining what should replace them.

Every few months, a new post goes viral about how layered architecture is dead, or abstractions are overrated, or CQRS was a mistake. The arguments follow a reliable pattern: someone applied a complex pattern to a simple problem, the result was painful, therefore the pattern itself is suspect. The comment section fills with "finally someone said it" and "I've been saying this for years."

What rarely follows is a serious alternative. The prescription is almost always some version of "just be pragmatic" or "start simple." Which, sure. But pragmatism isn't an architecture. Simplicity isn't a recovery strategy.

I think about this a lot because I've been on both sides. I've built systems that were over-abstracted and felt the drag. I've also inherited systems that had no architecture at all, where "pragmatism" meant every developer made a different decision and nobody could trace a request through the codebase without a map and a prayer.

The second kind of pain doesn't generate blog posts. Nobody writes "The Advice That Aged Poorly: Having No Plan." But that failure mode is just as common and often more expensive to recover from. The tear-down posts never mention it because it doesn't fit the narrative.

Misapplication Is Not Failure

When someone wraps a framework in a pass-through interface and calls it an abstraction, the problem isn't abstractions. It's that the abstraction was designed wrong. Confusing misapplication with failure is the central mistake in most of these critiques.

A common example: someone creates an IRepository<T> that mirrors the ORM's API method for method. Then they point at it and say "see, abstractions are useless." But that was never a real abstraction. A real abstraction is shaped by business intent, not by the technology it wraps. Your caching interface shouldn't look like Redis. It should look like your domain. GetActivePricingForRegion(regionId) is an abstraction. GetAsync(string key) is a wrapper.
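To make the contrast concrete, here's a sketch in TypeScript. The names (`PricingLookup`, `InMemoryPricingLookup`, the `Pricing` shape) are illustrative, not from any real codebase:

```typescript
// A wrapper: it mirrors the cache technology's API and reveals nothing
// about business intent. You could rename it "Redis" and lose nothing.
interface CacheWrapper {
  getAsync(key: string): Promise<string | null>;
}

// An abstraction: shaped by what the business needs to ask.
interface Pricing {
  regionId: string;
  amount: number;
  currency: string;
}

interface PricingLookup {
  getActivePricingForRegion(regionId: string): Promise<Pricing | null>;
}

// One possible implementation backs the contract with a plain map;
// a Redis-backed class could satisfy the same interface unchanged.
class InMemoryPricingLookup implements PricingLookup {
  constructor(private readonly prices: Map<string, Pricing>) {}

  async getActivePricingForRegion(regionId: string): Promise<Pricing | null> {
    return this.prices.get(regionId) ?? null;
  }
}
```

Nothing in `PricingLookup` hints at keys, TTLs, or serialization. That's the tell: the wrapper leaks its technology, the abstraction speaks the domain.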

The distinction matters because it changes what you learn from the experience. One teaches you to design better abstractions. The other teaches you to stop trying.

This applies across the board. Repositories that mirror database APIs, service layers that just delegate, interfaces with exactly one implementation that never change. These aren't evidence that patterns are broken. They're evidence that the team didn't ask the right question before applying the pattern.

The right question is always: what is this abstraction for? If your interface exists because "we always create interfaces," you've answered a process question, not an engineering one. If your interface exists because three different infrastructure implementations need to satisfy the same business contract, now you have a real reason. The pattern didn't fail. The reasoning that led to its application was absent.

And here's the thing. The architects who created these patterns said this from the beginning. Evans wrote about bounded contexts specifically to prevent DDD-everywhere. The CQRS originators warned against universal application. Clean Architecture was always presented with caveats about scale. The advice to be contextual isn't new. It just doesn't get as many claps.

The Systems-Level Bet

There's a difference between choosing patterns per-project and investing in a foundation that every project inherits. Most architecture criticism evaluates patterns in isolation and completely misses the compounding value of consistency.

I know engineers who use event sourcing for everything. Not because every project "needs" it, but because they've invested in a substrate that gives them recovery, replay, audit trails, and temporal queries for free. Any single project might have been "simpler" without event sourcing. But across a career or an organization, the amortized cost of that consistency is dramatically lower than making bespoke decisions about persistence, recovery, and state management every single time.

That's a systems-level bet. And it's invisible to the "just be pragmatic" crowd because they evaluate each project in a vacuum.

Think about Git. Git is event sourcing for code. Every commit is an immutable fact. You can replay history, branch, recover, bisect, audit. Nobody calls this over-engineering. Nobody writes blog posts about how version control "aged poorly." It's just how things work.

But imagine if every project had to decide whether it "needed" version control. Imagine the blog posts. "Git Is Overkill for Small Projects." "Why I Stopped Using Branches." "The Version Control Advice That Aged Poorly." It sounds absurd because we've internalized the value of the foundation. The pattern became invisible.

That's the opportunity the tear-down crowd is missing. Instead of asking "does this project need event sourcing?" or "does this project need good abstractions?", the better question is: can we make these properties so embedded in the tooling that the developer never has to ask the question at all?

Abstractions Aren't the Enemy. Bad Interfaces Are.

The real problem with most abstractions isn't that they exist. It's that they're designed to mirror the technology behind them instead of the business rules in front of them. Fix the interface design and the abstraction becomes an asset, not overhead.

When your caching layer exposes Set(key, value, ttl), you haven't abstracted anything. You've just added a layer of indirection over Redis. When your repository exposes GetAllAsync() and returns full entities, you've created a trap. The caller has no idea how much data that loads. The abstraction hides the very thing you need to see.

But when your interface speaks the domain's language, everything changes. The contract is defined by what the business needs, not by what the database supports. The implementation can be Redis, an in-memory dictionary, a SQL table. The domain doesn't care. That's what swappable means. That's what testable means. And that's the whole point.
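A small sketch of what "the domain doesn't care" buys you in practice. Everything here (`DiscountPolicy`, `FixedDiscounts`, `priceAfterDiscount`) is hypothetical, just to show the shape:

```typescript
// Domain contract: returns the active discount as a fraction, e.g. 0.25.
interface DiscountPolicy {
  activeDiscountFor(customerId: string): Promise<number>;
}

// Business code depends only on the contract, never on Redis or SQL.
async function priceAfterDiscount(
  policy: DiscountPolicy,
  customerId: string,
  base: number
): Promise<number> {
  const discount = await policy.activeDiscountFor(customerId);
  return base * (1 - discount);
}

// An in-memory implementation is enough for tests. A Redis- or SQL-backed
// class satisfying the same interface swaps in without touching callers.
class FixedDiscounts implements DiscountPolicy {
  constructor(private readonly table: Record<string, number>) {}

  async activeDiscountFor(customerId: string): Promise<number> {
    return this.table[customerId] ?? 0;
  }
}
```

The test double and the production implementation are interchangeable because the contract was drawn around the business question, not the storage engine.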

This is where repositories get genuinely tricky, and I won't pretend otherwise. Query needs are shaped by both the domain and the data access patterns. GetOrdersReadyForFulfillment() is a clean business abstraction. But the moment you need filtering, pagination, sorting, and projection, you start feeling the pull toward exposing query mechanics. The repository wants to speak business language. Complex read paths want to speak query language.

I think this tension is actually one of the strongest arguments for event-based approaches rather than against them. When your read path is "query facts by tags," the abstraction and the business concept are the same thing. You're not translating between object shapes and table shapes. You're just asking for the facts relevant to your decision. The impedance mismatch that makes repository design so painful largely dissolves when your data model is your domain model.
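A minimal sketch of what "query facts by tags" can look like, assuming a simple append-only list of events; `Fact`, `append`, and `factsByTags` are illustrative names, not a real library:

```typescript
// An immutable fact, tagged with the domain concepts it touches.
interface Fact {
  type: string;
  tags: string[]; // e.g. ["order:42", "region:eu"]
  data: Record<string, unknown>;
}

// The write path: appending a fact is the whole persistence story.
const log: Fact[] = [];

function append(fact: Fact): void {
  log.push(fact);
}

// The read path: ask for the facts relevant to a decision. The query
// and the business question are the same thing; there is no object
// shape vs. table shape translation to maintain.
function factsByTags(...tags: string[]): Fact[] {
  return log.filter(f => tags.every(t => f.tags.includes(t)));
}
```

Usage is just `append({ type: "orderPlaced", tags: ["order:42"], data: {...} })` on the way in and `factsByTags("order:42")` on the way out. Pagination and projection still exist, but they operate on one uniform shape instead of a bespoke repository surface per aggregate.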

The critics who say "just use the ORM directly" are solving the impedance mismatch by removing the abstraction layer. That works until you need to swap, test, or reason about your persistence at scale. The alternative is to dissolve the mismatch at a deeper level so the abstraction earns its keep.

Pragmatism Without a Foundation Is Just Drift

"Start simple" is fine advice for day one. But without a framework for how to evolve, simple doesn't stay simple. It becomes a collection of one-off decisions that nobody can reason about.

I've inherited these codebases. The ones where every developer was pragmatic, where every feature took the shortest path, where nobody imposed unnecessary structure. They're not simpler. They're just inconsistent. Each endpoint does things a slightly different way. Error handling varies by module. Some paths use transactions, some don't. There's no shared vocabulary for how the system behaves.

This is the failure mode that never gets a blog post. Nobody writes "Pragmatism Considered Harmful." But in my experience, it causes as much pain as over-engineering. Maybe more, because at least an over-engineered system is consistently over-engineered.

The solution isn't to choose between rigor and pragmatism. It's to invest in foundations that make the pragmatic choice and the rigorous choice the same thing. If your tooling makes event sourcing as easy as a database insert, you don't have to debate whether this project "needs" it. If your framework defines where business rules live by default, developers don't have to rediscover the answer for each feature.
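Here's the convergence in miniature. This is a toy, and the names are mine, but it shows the property: the caller writes the obvious one-liner, and audit history plus replayable state come along for free because the substrate records events by default:

```typescript
interface DomainEvent {
  at: number;
  type: string;
  payload: number;
}

class EventBackedCounter {
  private events: DomainEvent[] = [];

  // The easy call IS the event-sourced call. No ceremony for the caller.
  increment(by: number): void {
    this.events.push({ at: Date.now(), type: "incremented", payload: by });
  }

  // Current state is derived by replaying the event history.
  value(): number {
    return this.events.reduce(
      (n, e) => (e.type === "incremented" ? n + e.payload : n),
      0
    );
  }

  // Audit comes for free: the history was never thrown away.
  history(): readonly DomainEvent[] {
    return this.events;
  }
}
```

The pragmatic developer calls `increment(2)` and moves on; the system still has every fact it needs for recovery, audit, and temporal queries.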

This is what good architecture actually does. Not impose ceremony. Not add layers for the sake of compliance. It reduces decisions. It makes the easy path and the correct path converge. When that happens, the "pragmatic" developer and the "architecture astronaut" end up writing the same code. That's the goal.

Build Forward

The industry doesn't need more posts about what failed. It needs more energy spent on making the right patterns invisible, on building tooling that embeds good decisions into the substrate so developers don't have to fight for them.

I'm not saying we should stop questioning patterns. Question everything. That's how things get better. But there's a difference between questioning a pattern and dismissing it. Questioning leads to refinement. Dismissal leads to a vacuum that gets filled with whatever's trending next.

The patterns that "aged poorly" didn't fail because they were bad ideas. They aged because the tooling and developer experience around them didn't evolve. Layered architecture is painful when you wire it by hand. Event sourcing is painful when you manage projections manually. Abstractions are painful when you design them to mirror technology instead of business.

Fix the tooling. Fix the interfaces. Fix the developer experience. The patterns are fine.

Here's what I want to see more of: people building things that push us forward. Tools that make recovery and reproducibility a default, not a debate. Frameworks that encode where business logic lives so teams don't have to argue about it in every PR. Abstractions that are shaped by domain needs from the start, not bolted on as afterthoughts.

The conversation shouldn't be "should we use patterns?" It should be "how do we make the right patterns feel like they were never there?"

Because the best architecture is the kind you forget is running. Not because it's absent. Because it's so embedded in how you work that it stops being a thing you think about and starts being a thing you just have.

That's building forward.
