SW engineering, engineering management and the business of software


Just Use Postgres

One of the more mind-blowing articles I’ve read in the past couple of years was Stephan Schmidt’s Just Use Postgres for Everything.

Earlier this year, I spun up a greenfield tech project and realized that as a solo developer / tech co-founder, my limitations were mostly around operational overhead, not code and features. Just Use Postgres let me simplify the infra and DevOps stack by relying more on code. This is a great trade-off for a move-fast, prototype-heavy, super-early-stage startup.

It worked. Really well.

I ended up building a simple web stack with Elixir, Phoenix, and LiveView on top of Postgres, and that’s just about it. I did some simple CI/CD with GitHub Actions. This setup will likely last me through my next two or three eng hires.

UPDATE 2026-01: I’m up to an eng team of 5, and we are just now moving from “Just Use Postgres” to “Still Mostly Using Postgres”: a few Docker sidecars for one-off work such as PDF generation, plus Oban (which is itself backed by Postgres, so the theme holds). No regrets; it’s been a very positive decision in hindsight.

The Core Idea

The core concept of “Just Use Postgres” is simple: shift complexity away from devops and into code.

Fewer moving parts means you move faster. More importantly, you can make architectural changes faster. When your infrastructure is basically “one Postgres instance,” new developers can get the full stack running on their laptop in minutes, not days.

What Postgres Replaced

Here’s what I didn’t need:

Elasticsearch/Typesense for search. Postgres has full-text search and trigram matching built in. I wrote about this in my Postgres Text Search 101 post. For an early-stage product, it’s more than good enough.
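As a minimal sketch of what that looks like in practice (the `articles` table and its columns are hypothetical, and the exact index choices depend on your data):

```sql
-- Trigram matching needs the pg_trgm extension; full-text search is built in.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Keep a tsvector in sync automatically with a generated column.
ALTER TABLE articles
  ADD COLUMN search_vec tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
  ) STORED;

CREATE INDEX articles_search_idx ON articles USING GIN (search_vec);
CREATE INDEX articles_title_trgm_idx ON articles USING GIN (title gin_trgm_ops);

-- Ranked full-text query:
SELECT id, title
FROM articles
WHERE search_vec @@ websearch_to_tsquery('english', 'postgres queue')
ORDER BY ts_rank(search_vec, websearch_to_tsquery('english', 'postgres queue')) DESC
LIMIT 10;

-- Trigram similarity catches typos that full-text search misses:
SELECT id, title
FROM articles
WHERE title % 'postgrs'
ORDER BY similarity(title, 'postgrs') DESC
LIMIT 10;
```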

Redis/RabbitMQ for job queues. Postgres can be a perfectly fine job queue. SELECT ... FOR UPDATE SKIP LOCKED gives you a reliable, transactional work queue without adding another service to your stack. Sequin has a great writeup on building your own SQS or Kafka with Postgres if you want to see how far you can push this.
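The SKIP LOCKED pattern in one sketch (the `jobs` table is hypothetical; real implementations add retries and timeouts):

```sql
CREATE TABLE jobs (
  id          bigserial PRIMARY KEY,
  payload     jsonb NOT NULL,
  status      text NOT NULL DEFAULT 'pending',
  inserted_at timestamptz NOT NULL DEFAULT now()
);

-- Each worker claims one job inside a transaction. SKIP LOCKED means
-- concurrent workers never block on, or double-claim, the same row.
BEGIN;

SELECT id, payload
FROM jobs
WHERE status = 'pending'
ORDER BY inserted_at
FOR UPDATE SKIP LOCKED
LIMIT 1;

-- ...do the work in application code, then mark the claimed id done:
UPDATE jobs SET status = 'done' WHERE id = $1;

COMMIT;
```

The transactional part is the point: if a worker crashes mid-job, its lock is released on rollback and the row becomes visible to the next worker, with no extra plumbing.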

Redis/Memcached for caching. JSONB columns and materialized views handle most of the caching patterns I needed. Unlogged tables work great for ephemeral data you’d normally throw in Redis. As an odd but positive side effect, when your data and your cache are on the same machine, you eliminate an entire class of cache invalidation headaches.
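A sketch of the unlogged-table-as-cache pattern (table and key names are made up; unlogged tables skip WAL writes, so they’re fast but get truncated on crash recovery, which is exactly the durability you’d expect from a cache):

```sql
CREATE UNLOGGED TABLE cache (
  key        text PRIMARY KEY,
  value      jsonb NOT NULL,
  expires_at timestamptz NOT NULL
);

-- Redis-style SET via upsert:
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value,
      expires_at = EXCLUDED.expires_at;

-- GET, honoring the TTL:
SELECT value
FROM cache
WHERE key = 'user:42:profile'
  AND expires_at > now();
```

Expired rows still need an occasional `DELETE FROM cache WHERE expires_at < now()` from a cron job or periodic task, since Postgres has no built-in TTL eviction.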

A separate key-value store. JSONB columns are a schemaless key-value store that also happens to support indexing and querying. It’s not as fast as Redis for hot-path lookups, but it’s more durable. It’s definitely simpler than adding Mongo.

UPDATE 2026-01: AI/LLMs are unreasonably good at querying JSONB columns. They are good at SQL in general, but if Postgres’s somewhat odd JSONB syntax bothered you, AI makes it easier.
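For reference, the JSONB-as-key-value-store pattern and its slightly odd operators look roughly like this (the `settings` table is hypothetical):

```sql
CREATE TABLE settings (
  user_id bigint PRIMARY KEY,
  prefs   jsonb NOT NULL DEFAULT '{}'
);

-- ->> extracts a field as text; -> extracts it as jsonb.
SELECT prefs ->> 'theme'
FROM settings
WHERE user_id = 42;

-- @> is containment; a GIN index makes these lookups fast.
CREATE INDEX settings_prefs_idx ON settings USING GIN (prefs);

SELECT user_id
FROM settings
WHERE prefs @> '{"notifications": {"email": true}}';

-- jsonb_set updates one path without rewriting the whole document:
UPDATE settings
SET prefs = jsonb_set(prefs, '{theme}', '"dark"')
WHERE user_id = 42;
```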

Why It Works

The conventional wisdom is that you should use the best tool for each job. This sounds reasonable until you realize that “best tool for the job” has a hidden cost: every tool you add is another thing that can break, another thing to deploy, another thing new hires need to learn, and another thing keeping you up at night.

The best tool for the job is often the tool you already have running, as long as it can do the job.

Postgres can do a surprising number of jobs. Not all of them optimally. But at early stage, optimal doesn’t matter. Speed of iteration matters. Simplicity of deployment matters. Being able to reason about your entire system matters.

This is the same principle I wrote about years ago in Principles of Scalable Architectures: simple as possible, as few components as possible. “Just Use Postgres” is that principle taken seriously.

When It Breaks Down

I’m not going to pretend this works forever. It doesn’t.

When your search queries start taking hundreds of milliseconds and you’ve exhausted your indexing options, it’s time for a dedicated search engine. When you’re processing tens of thousands of jobs per second, you probably want a real message broker. When you need sub-millisecond cache lookups at massive scale, Redis earns its keep.

But here’s the thing: you’ll know when you get there. And when you do, you’ll be migrating one well-understood component at a time, not untangling an operational ball of spaghetti.

Just Use Postgres

Are you early stage? Then start with Postgres. Use it for everything you can. Add complexity only when your scale forces your hand.

You’ll ship faster, sleep better, and (hopefully) spend your limited engineering bandwidth on stuff that moves the needle on your business value.



in lieu of comments, you should follow me on bluesky at @amattn.com and on twitch.tv at twitch.tv/amattn. I'm happy to chat about content here anytime.


the fine print:
© matt nunogawa 2010 - 2023 / all rights reserved