I want to make a case for something that's fallen out of fashion: the humble abstraction.
Somewhere along the way, our industry developed a reflexive distrust of abstraction layers. And I get it. We've all inherited codebases where someone wrapped every class in an interface, injected dependencies six levels deep, and created a UserServiceFactoryProviderManager that made you want to close your laptop and go outside. Bad abstractions are exhausting. Nobody is arguing otherwise.
But I think we've overcorrected. The conversation has shifted from "be thoughtful about abstractions" to "abstractions are overhead" to, in some circles, "just talk to the database directly." And in doing so, I think we've lost sight of something important: when abstractions are done well, they aren't a tax on your codebase. They're one of the most valuable things in it.
This post is my attempt to articulate why I still believe in them, how I use them in my own projects, and why I think they're the key to building systems that last.
What Changed My Thinking
Early in my career I watched a team spend three months migrating between data stores because every layer of the application had direct knowledge of the storage implementation. The data store hadn't just stored data. It had colonized the entire codebase. That experience taught me something I've carried ever since.
The migration should have been a surgical swap. Instead, it was a full-system rewrite. Controllers knew about table names. Services constructed raw queries. Business logic was interleaved with connection management. Every layer had tendrils reaching directly into the infrastructure.
The cost of an abstraction is paid once, upfront. The cost of not having one is paid continuously, forever, in ways you can't predict at the time you're making the decision. That team paid three months of engineering time because someone decided in week one that an abstraction layer for storage was "over-engineering."
What made it worse was that the original database choice wasn't wrong. It was perfectly reasonable given what the team knew at the time. The business requirements evolved, access patterns changed, and the original choice couldn't keep up. That's not a failure of planning. That's the normal life cycle of software.
The failure was architectural. Not "we picked the wrong database" but "we made it impossible to pick a different one later." Every query was handwritten against the specific database's API. Every data access pattern assumed the specific storage engine's capabilities. The team had built their house directly on top of the plumbing, and when the plumbing needed to change, the house had to come down.
I've seen variations of this story at every company I've worked at. The specific infrastructure changes (database, cache, message broker, search engine, external API), but the pattern is always the same: a decision that seemed permanent turns out to be temporary, and the cost of changing it is proportional to how deeply it was embedded in the codebase.
Abstractions are how you make that cost constant instead of proportional. They're insurance against the inevitable reality that your infrastructure decisions will need to change. The premium is small. The payout is enormous.
The Snap On/Off Principle
Every external dependency in your system should be swappable without touching your core logic. Database, cache, search, messaging. Snap on today, snap off tomorrow. Your domain doesn't care what's behind the interface.
The mental model is simple. Your domain logic describes what it needs: "I need to store events." "I need to query by tags." "I need to cache a result." These become interfaces. Pure contracts with no opinion about implementation.
Your infrastructure layer provides the how. A Postgres module that fulfills the storage contract. An Elasticsearch module that fulfills the search contract. A Bergcache module that fulfills the cache contract. Each one is self-contained. Each one can be swapped, replaced, or removed without the domain layer knowing anything changed.
The critical design decision is who defines the interface. The domain does. Not the infrastructure. When the domain defines what it needs, the domain stays pure. The infrastructure bends to accommodate the domain, not the other way around.
Most codebases I've seen do this backwards. They pick Postgres first, then shape their domain logic around what Postgres can do. They pick Redis first, then design their caching strategy around Redis's data structures. The infrastructure drives the domain instead of the other way around.
When you flip that relationship, something powerful happens: your domain logic becomes portable. It doesn't care where it runs. It doesn't care what database backs it. It expresses business rules as pure functions over facts, and the infrastructure is just plumbing that connects those functions to the real world.
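To make the flip concrete, here's a minimal sketch of a domain-defined contract. The names (`DomainEvent`, `EventStore`, `hasEventForTag`) are invented for illustration, and the methods are synchronous for brevity; real adapters would likely be async.

```typescript
// domain/: the contract states *what* the domain needs.
// Illustrative names; synchronous for brevity.
interface DomainEvent {
  id: string;
  tags: string[];
}

interface EventStore {
  append(event: DomainEvent): void;
  queryByTag(tag: string): DomainEvent[];
}

// Domain logic is written purely against the contract. Nothing here
// knows whether storage is Postgres, Elasticsearch, or a hash map.
function hasEventForTag(store: EventStore, tag: string): boolean {
  return store.queryByTag(tag).length > 0;
}
```

The infrastructure layer's job is then to implement `EventStore`; the domain file never imports an adapter.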
The structure looks like this:
domain/ → Pure logic. No dependencies. Defines what it needs.
application/ → Orchestration. Thin. Connects domain to the world.
infrastructure/ → Provides what domain needs. Snap on/off.
├── postgres/
├── elasticsearch/
├── bergcache/
└── memory/ → For testing. Always build this one first.
That memory/ adapter deserves special attention. It's the canary in the coal mine for architectural integrity. If you can't run your entire domain layer against an in-memory implementation, your abstractions are leaking. Something in the domain knows too much about the infrastructure. Fix the leak before you add real infrastructure complexity.
I always build the memory adapter first. Before a single line of Postgres code. Before configuring a single Redis instance. If my domain logic works perfectly against a hash map, my interfaces are clean. If I find myself needing a method like executeRawSQL() on my storage interface, that's a leak. The in-memory adapter makes leaks immediately obvious because there's no SQL engine to delegate to.
Always Build the Memory Adapter First
Before writing any real infrastructure code, build an in-memory implementation of every interface your domain defines. If your domain works against a hash map, your abstractions are clean. If it doesn't, you have a leak to fix before adding real complexity.
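As a sketch of how small this can be, here's an in-memory adapter for a hypothetical `EventStore` contract (all names are illustrative, and the methods are synchronous for brevity):

```typescript
// The contract the domain would define.
interface DomainEvent {
  id: string;
  tags: string[];
}

interface EventStore {
  append(event: DomainEvent): void;
  queryByTag(tag: string): DomainEvent[];
}

// infrastructure/memory/: a real implementation backed by an array.
// It genuinely stores and queries data; only the storage location
// differs from a Postgres adapter.
class InMemoryEventStore implements EventStore {
  private events: DomainEvent[] = [];

  append(event: DomainEvent): void {
    this.events.push(event);
  }

  queryByTag(tag: string): DomainEvent[] {
    return this.events.filter((e) => e.tags.includes(tag));
  }
}
```

If the contract can't be satisfied by a class this simple, say because it exposes something like `executeRawSQL()`, that's the leak showing itself.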
This practice does two things. First, it validates the abstraction: it confirms the contracts express genuine capabilities rather than thin wrappers around a specific vendor's API. If the in-memory adapter can't implement the interface, the interface is probably too coupled to a specific technology.
Second, it makes testing genuinely fast and reliable. Every test runs against the in-memory adapter. No database containers to spin up. No connection strings to configure. No test data to seed and clean up. No flaky tests caused by network timeouts. Just pure logic executing against pure data structures. When tests are this fast, you actually run them. When you actually run them, you catch problems early. It's a virtuous cycle.
The in-memory adapter is not a mock. This distinction matters. A mock verifies that certain methods were called in a certain order. An in-memory adapter is a real implementation of the interface. It stores data, retrieves data, enforces constraints. The only difference between it and the Postgres adapter is where the data lives.
This means your tests are exercising real behavior. When you write a test that says "append three events, then query by tag and expect three results," that test runs real logic. The in-memory adapter actually stores the events and actually performs the query. If the test passes against the in-memory adapter, you have high confidence that the same operation will work against Postgres, because both adapters implement the same contract.
And here's the powerful part: you can run the exact same test suite against every adapter. Write the tests once against the interface. Execute them against the in-memory adapter in your CI pipeline (fast, reliable, no infrastructure). Execute them again against the Postgres adapter in an integration test environment (slower, but validates the real implementation). Execute them against any community-built adapter to verify it conforms to the contract. One test suite, unlimited implementations.
I've found this single practice does more for code quality than any other architectural decision I make. It forces clean interfaces, enables fast testing, and provides a built-in conformance suite for every adapter anyone ever writes.
The Decision Framework
Not everything deserves an abstraction. The goal is to have exactly as many as you need and not one more. Every interface should exist because there's a concrete reason for it, not because someone read a book about clean architecture and decided everything needs to be injectable.
Here's how I decide. Is this dependency swappable? If there's a realistic scenario where you'd swap it for a different implementation (different database, different cache, testing), it gets an interface.
Is it domain logic? If it's a business rule or a decision function, it doesn't need an abstraction. It needs to be a pure function. Facts go in, a decision comes out. No interfaces, no dependency injection.
Everything else? Concrete implementation. No interface. If you're never going to swap it and it's not domain logic, the abstraction would just be ceremony.
Here's the visual:
swappable?
├── yes → interface
└── no → is it domain logic?
         ├── yes → pure function
         └── no → concrete implementation
Let me give examples of each path.
Swappable, gets an interface. Your storage layer. Your cache layer. Your search engine. External APIs you depend on. Any third-party service that might change, go down, or get replaced. These all get interfaces because the implementation behind them is, by definition, not permanent. Today it's Postgres. Tomorrow it might be something else. Today it's Elasticsearch. Tomorrow it might be a different search engine. The interface is what your domain cares about. The vendor is an implementation detail.
Domain logic, gets a pure function. "Can this user place this order?" "Is this account in good standing?" "Has this student already registered for this course?" These are decisions. They take facts as input and return a verdict. They don't need interfaces because there's nothing to swap. The business rule is the business rule. Wrapping it in an abstraction layer adds complexity without adding flexibility, and I think this specific pattern (over-abstracting domain logic) is responsible for a lot of the valid frustration people have with abstractions in general.
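Here's a hedged sketch of that shape, with invented field names and an invented rule: facts in, verdict out, nothing to inject.

```typescript
// All names and the specific rule are illustrative, not from a real system.
interface OrderFacts {
  accountInGoodStanding: boolean;
  outstandingBalance: number;
  creditLimit: number;
  orderTotal: number;
}

// A business rule as a pure function: no interface, no dependency
// injection, nothing to swap. Trivial to test, trivial to read.
function canPlaceOrder(facts: OrderFacts): boolean {
  return (
    facts.accountInGoodStanding &&
    facts.outstandingBalance + facts.orderTotal <= facts.creditLimit
  );
}
```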
Everything else, stays concrete. Your logging configuration. Your startup bootstrap code. Your route definitions. Utility functions. Things that have one implementation and no realistic swap scenario. Abstracting these is ceremony. Skip it.
The common mistake I see is people applying the "interface everything" rule uniformly across all three categories. That produces the UserServiceFactoryProviderManager problem. The fix isn't fewer abstractions everywhere. It's abstractions in the right places and concrete implementations everywhere else.
Abstractions as the Product
In the sealed core architecture I've been building toward, the abstractions aren't just an internal convenience. They're the open surface that the community interacts with. They're the extension points, the composition layer, the thing that makes the sealed core useful to people who aren't me. The abstractions are the product.
When someone wants to connect LumineDB to their existing Kafka cluster, they don't need access to the query engine's internals. They need a well-defined adapter interface. When someone wants to build a custom projection on top of events, they need a projection trait that's clear, documented, and comes with a reference implementation they can study.
The abstractions are the invitation. They say: "Here's how to participate. Here's where your creativity lives. Here's the surface where you can build things I never imagined." A poorly designed abstraction is a bad experience for every developer who tries to extend the system. A well-designed one enables an ecosystem.
Consider the difference between these two experiences:
Weak open surface: "Here are the types. Here's the API. Good luck figuring out how to connect it to anything."
Strong open surface: "Here's the trait for storage adapters. Here's the in-memory reference implementation. Here's how events flow through the system. Here's exactly where your adapter plugs in. Here are three community-built adapters you can learn from."
The second one creates an ecosystem. Not because you gave away the engine, but because you gave the community everything they need to build the connective tissue between your engine and their world.
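One way to make the plug-in point literal is a registry of adapter factories at the composition root. This is a sketch with invented names; `BergcacheAdapter` is hypothetical, and the contract is deliberately tiny.

```typescript
// The published contract the community codes against.
interface Cache {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// The reference implementation shipped alongside the contract.
class InMemoryCache implements Cache {
  private data = new Map<string, string>();
  get(key: string): string | undefined {
    return this.data.get(key);
  }
  set(key: string, value: string): void {
    this.data.set(key, value);
  }
}

// "Here's exactly where your adapter plugs in": a community-built
// adapter implements Cache and registers a factory here.
const cacheAdapters: Record<string, () => Cache> = {
  memory: () => new InMemoryCache(),
  // bergcache: () => new BergcacheAdapter(config), // hypothetical community adapter
};

function buildCache(name: string): Cache {
  const factory = cacheAdapters[name];
  if (!factory) throw new Error(`unknown cache adapter: ${name}`);
  return factory();
}
```

The engine never changes when a new adapter appears; the registry grows, and everything behind the contract stays sealed.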
This is what I mean when I say abstractions are the product. In my previous post, I talked about the difference between sharing capability and surrendering cognition. Abstractions are how you share capability. They're the mechanism that makes it possible to give people full access to what your software does without giving them the internals of how it works.
The sealed core is the brain. The abstractions are the nervous system. They carry signals between the brain and the rest of the body. The brain doesn't need to know whether the hand is holding a pen or a wrench. It sends the same signals through the same pathways. The hand (the infrastructure adapter) translates those signals into action appropriate to whatever tool it's holding.
You can replace the tool without rewiring the nervous system. You can replace the hand without changing the brain. And you can extend the body with entirely new limbs (community-built integrations) as long as they connect through the same neural pathways.
That's what good abstractions give you: a stable nervous system that lets the rest of the organism evolve independently.
Principles I Keep Coming Back To
Five principles guide how I build abstractions: let the domain define the interfaces, build the memory adapter first, keep adapters self-contained, test at the interface boundary, and publish the interfaces rather than the implementations. Together they create systems that are both protected and extensible.
Domain defines interfaces, infrastructure implements them. Never let the database or the framework dictate what your domain logic looks like. The domain speaks in its own language. Infrastructure translates.
Build the memory adapter first. Before any real infrastructure. If it works in memory, your abstractions are clean. If it doesn't, you have a leak to fix.
Keep adapters self-contained. Each infrastructure adapter should be a module you can delete without breaking anything except the specific integration it provides. If removing the Postgres adapter causes compilation errors in your domain layer, something is coupled that shouldn't be.
Test at the interface boundary. Write tests against the abstract interface, then run those same tests against every adapter. One test suite, unlimited implementations.
Publish the interfaces, not the implementations. Your interfaces are the public API that the community codes against. Make them thoughtful. Document them well. Provide reference implementations. These are the most important lines of code in your project from the community's perspective.
I realize abstractions aren't the trendy thing to advocate for right now. The pendulum has swung toward simplicity, directness, and "just ship it." And I genuinely respect that instinct. Over-engineering is real. Premature abstraction is real. The UserServiceFactoryProviderManager is a cautionary tale worth remembering.
But the pendulum always swings too far. And right now I think we're at the point where teams are shipping systems that work great today and become unmaintainable in eighteen months because nobody invested in the connective tissue. The core logic is fine. The database works. The API responds. But the glue between all of it is hardcoded, brittle, and impossible to change without rewriting half the system.
That's the codebase I inherited early in my career. It's the codebase I've made it my mission never to build again.
Abstractions, done right, are how you prevent it. Not by abstracting everything. Not by wrapping every class in an interface. But by identifying the seams where change is likely, designing clean contracts at those seams, and building a nervous system that lets the rest of the organism evolve independently.
Build the nervous system. The rest of the organism will thank you.