
Using PostgreSQL as a Dead Letter Queue for Event-Driven Systems

258 points | 13 days ago | diljitpr.net
TexanFeller 13 days ago

Ofc I wouldn't use it for extremely high-scale event processing, but it's a great default for a message/task queue for 90% of business apps. If you're processing under a few hundred million events/tasks per day, with fewer than ~10k concurrent processes dequeuing from it, it's what I'd default to.

I work on apps that use such a PG-based queue system, and it provides indispensable features we couldn't achieve easily/cleanly with a normal queue system, such as being able to dynamically adjust the priority/order of tasks being processed and easily query/report on the contents of the queue. We have many other interesting features built into it that are more specific to our needs, which I'm more hesitant to describe in detail here.
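Roughly the kind of thing this enables (a sketch, assuming a hypothetical tasks table with priority/status columns, not our actual schema):

    -- Bump the priority of one customer's pending work on the fly
    UPDATE tasks SET priority = 100
    WHERE customer_id = 42 AND status = 'pending';

    -- Report on queue depth and age per task type, in plain SQL
    SELECT task_type, status, count(*) AS depth, min(created_at) AS oldest
    FROM tasks
    GROUP BY task_type, status
    ORDER BY depth DESC;

Try doing either of those against an opaque broker without draining it.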

j45 12 days ago

Very few things need to start at extremely high-scale event processing.

There's also an order of magnitude more events when you move to event-based processing.

This seems like a perfectly reasonable starting point and gateway that keeps things organized for when the time comes.

Most things don’t scale that big.

gytisgreitai 12 days ago

So perhaps don't use Kafka at all? E.g. Adyen used PostgreSQL [1] as a queue until they outgrew it. In this case it seems there are a lot of things that can go south in the event of a major issue on the event pipeline. Unless the throughput is low... but then why Kafka?

[1] https://www.adyen.com/knowledge-hub/design-to-duty-adyen-arc...

tracker1 12 days ago

RDBMS are pretty well understood and very flexible, more still with the likes of JSONB where parts of your schema can be (de)normalized for convenience and reducing joins in practice. Modern hardware is MUCH more powerful today than even a decade and a half ago. You can scale vertically a LOT with an RDBMS like PostgreSQL, so it's a good fit for more use cases as a result.
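For instance, a sketch of that JSONB pattern (table and field names hypothetical):

    CREATE TABLE events (
        id         bigserial PRIMARY KEY,
        event_type text NOT NULL,
        payload    jsonb NOT NULL,   -- the flexible, denormalized part
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- A GIN index lets you filter on payload contents without extra joins
    CREATE INDEX idx_events_payload ON events USING gin (payload);

    SELECT * FROM events WHERE payload @> '{"customer_id": 42}';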

Personally, at this point, I'm more inclined to reach for fewer tools than to take on certain types of complexity. That said, I'm probably more inclined to introduce valkey/redis earlier on for some things, which I think may be better suited to MQ-type duties when you don't want an actual MQ or a more complex service bus over PG... but PG works.

Especially for systems where you aren't breaking out queues because of the number of jobs so much as for the benefit of logically separating the work from the requestor. Email (for most apps), report generation, etc... all types of work that an RDBMS is more than suitable for.

j45 12 days ago

Probably not worth using a sledgehammer (Kafka) for an ant.

Lots of people do resume-driven building, only to realize that tools like Kafka at the start vs. at scale can be very different.

It’s best to learn events from the ground up, including how, when, and where you may outgrow existing implementation approaches, let alone technologies.

rbranson 13 days ago

The biggest thing to watch out for with this approach is that you will inevitably have some failure or bug that will 10x, 100x, or 1000x the rate of dead messages, and that will overload your DLQ database. You need a circuit breaker or rate limit on it.
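One crude database-side version of that guard (a sketch only - table and column names are assumed, and a real circuit breaker belongs in the producer):

    -- Refuse new dead letters once the last minute is already hot
    INSERT INTO dlq_events (event_type, payload, last_error)
    SELECT 'payment.failed', '{"id": 1}'::jsonb, 'timeout'
    WHERE (SELECT count(*) FROM dlq_events
           WHERE created_at > now() - interval '1 minute') < 1000;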

rr808 13 days ago

I worked on an app that sent an internal email with stack trace whenever an unhandled exception occurred. Worked great until the day when there was an OOM in a tight loop on a box in Asia that sent a few hundred emails per second and saturated the company WAN backbone and mailboxes of the whole team. Good times.

with 12 days ago

This is the same risk with any DLQ.

The idea behind a DLQ is it will retry (with some backoff) eventually, and if it fails enough, it will stay there. You need monitoring to observe the messages that can't escape DLQ. Ideally, nothing should ever stay in DLQ, and if it does, it's something that should be fixed.
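A sketch of both halves on the Postgres variant (the retry_count column and the one-hour threshold are assumptions, not from the article):

    -- On a failed attempt: exponential backoff, park the row for good eventually
    UPDATE dlq_events
    SET retry_count = retry_count + 1,
        retry_after = now() + interval '1 minute' * power(2, retry_count),
        status      = CASE WHEN retry_count >= 9 THEN 'failed' ELSE 'pending' END
    WHERE id = $1;

    -- Monitoring: page if anything has been stuck in the DLQ too long
    SELECT count(*) FROM dlq_events
    WHERE status <> 'failed' AND created_at < now() - interval '1 hour';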

microlatency 12 days ago

What do you use for the monitoring of DLQs?

with 11 days ago

At my last workplace, we used pure AWS CloudWatch. At my new workplace, we use Grafana+Sentry.

shayonj 13 days ago

This! The only thing worse than your main queue backing up is dropping items on their way into the DLQ because it can’t stay up.

pletnes 13 days ago

If you can’t deliver to the DLQ, then what? Then you’re missing messages either way. Who cares if it’s down this way or the other?

xyzzy_plugh 13 days ago

Not necessarily. If you can't deliver the message somewhere you don't ACK it, and the sender can choose what to do (retry, backoff, etc.)

Sure, it's unavailability of course, but it's not data loss.

konart 13 days ago

If you are reading from Kafka (for example) and you can't do anything with a message (broken JSON, say) and you can't put it into a DLQ - you have no other option but to skip it or stop on it, no?

Misdicorl 12 days ago

Your place of last resort with kafka is simply to replay the message back to the same kafka topic since you know it's up. In a simple single consumer setup just throw a retry count on the message and increment it to get monitoring/alerting/etc. Multi consumer? Put an enqueue source tag on it and only process the messages tagged for you. This won't scale to infinity but it scales really really far for really really cheap

singron 13 days ago

Generally yes, but if you use e.g. the parallel consumer, you can potentially keep processing in that partition to avoid head-of-line blocking. There are some downsides to having a very old unprocessed record since it won't advance the consumer group's offset past that record, and it instead keeps track of the individual offsets it has completed beyond it, so you don't want to be in that state indefinitely, but you hope your DLQ eventually succeeds.

But if your DLQ is overloaded, you probably want to slow down or stop since sending a large fraction of your traffic to DLQ is counter productive. E.g. if you are sending 100% of messages to DLQ due to a bug, you should stop processing, fix the bug, and then resume from your normal queue.

RedShift1 13 days ago

The point is to not take the whole server down with it. Keeps the other applications working.

rbranson 13 days ago

Sure, but you still need to design around this problem. It’s going to be a happy accident that everything turns out fine if you don’t.

plaguuuuuu 12 days ago

Could one put the DLQ messages on a queue and have a consumer ingest them into PG?

(The queue probably isn't down if you've just pulled a message off it.)

j45 12 days ago

It will happen eventually in any system.

No need to look down on PG because it makes this more approachable and is no longer a specialized skill.

exabrial 13 days ago

> FOR UPDATE SKIP LOCKED

Learned something new today. I knew what FOR UPDATE did, but somehow I've never RTFM'd hard enough to know about the SKIP LOCKED directive. That's pretty cool.

scresswell 12 days ago

Yes, SKIP LOCKED is great. In practice you nearly always want LIMIT, which the article did not mention. Be careful if your selection spans multiple tables: only the relations you explicitly lock are protected (see SELECT … FOR UPDATE OF t1, t2). ORDER BY matters because it controls fairness and retry behaviour. Also watch ANALYZE: autoanalyze only runs once the dead-to-live tuple threshold is crossed, and on large or append-heavy tables with lots of old rows this can lag, leading to poor plans and bad SKIP LOCKED performance. Finally, think about deletion and lifecycle: deleting on success, scheduled cleanup (consider pg_cron), or partitioning old data all help keep it efficient.
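For reference, the shape of a batched claim with LIMIT and ORDER BY applied (using the article's dlq_events naming; the 'processing' status and batch size are assumptions):

    -- Claim a batch atomically; concurrent workers skip each other's rows
    UPDATE dlq_events
    SET status = 'processing'
    WHERE id IN (
        SELECT id FROM dlq_events
        WHERE status = 'pending' AND retry_after <= now()
        ORDER BY retry_after      -- fairness: oldest eligible rows first
        LIMIT 10
        FOR UPDATE SKIP LOCKED
    )
    RETURNING *;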

exabrial 12 days ago

I can see how that'd be extremely useful with LIMIT, especially with XA. Take a stride, complete it, or put it back for someone else.

Something I've still not mastered is how to prevent lock escalation into table-locks, which could torpedo all of this.

metanonsense 12 days ago

I only learned about SKIP LOCKED because ChatGPT suggested it to solve a concurrency problem I had. Great tool for learning such things.

indigo945 12 days ago

Great tool that wrote the blog post in the OP also, so it's quite versatile.

deepsun 1 day ago

> CREATE INDEX idx_dlq_status ON dlq_events (status);

> CREATE INDEX idx_dlq_status_retry_after ON dlq_events (status, retry_after);

You don't need two indices when one is a prefix of another. Just one `idx_dlq_status_retry_after` will do the job.
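i.e. the composite index from the article already serves both query shapes:

    CREATE INDEX idx_dlq_status_retry_after ON dlq_events (status, retry_after);

    -- Served by the leading column alone...
    SELECT * FROM dlq_events WHERE status = 'pending';
    -- ...and by the full prefix
    SELECT * FROM dlq_events WHERE status = 'pending' AND retry_after <= now();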

with 12 days ago

Great application of first principles. I think it's totally reasonable, even at most production loads. (Example: my last workplace had a service that constantly roared at 30k events per second, and our DLQs would at most have on the order of hundreds of messages in them.) We would get paged if a message's age in the queue exceeded an hour.

The idea is that if your DLQ has consistently high volume, there is something wrong with your upstream data, or data handling logic, not the architecture.

microlatency 12 days ago

What did you use for the DLQ monitoring? And how did you fix the issues?

with 11 days ago

We strictly used AWS for everything and always preferred AWS-managed, so we always used SQS (and their built-in DLQ functionality). They made it easy to configure throttling, alerting, buffering, concurrency, retries etc, and you could easily use the UI to inspect the messages in a pinch.

As far as fixing actual critical issues - usually the message inside the DLQ had a trace that was revealing enough, although it wasn't always so trivial.

The philosophy was either:

1. fix the issue, or
2. swallow the issue (more rare)

but make sure this message never comes back to the DLQ again.

renewiltord 13 days ago

Segment uses MySQL as a queue, not even just as a DLQ. It works at their scale. So there are many (not all) systems that can tolerate this as a queue.

I have a simple flow: tasks on the order of thousands an hour. I just use PostgreSQL. High visibility, easy requeue, durable store. With an appropriate index, it's perfectly fine. An LLM will write SKIP LOCKED code right the first time. Easy local dev. I always reach for Postgres as the event bus in low-volume systems.
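The requeue part is one statement (a sketch - the table and column names here are made up, not standard):

    -- Recover work from crashed workers: anything claimed too long ago
    UPDATE tasks SET status = 'pending'
    WHERE status = 'processing'
      AND claimed_at < now() - interval '10 minutes';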

kristov 12 days ago

Why use ShedLock and SELECT FOR UPDATE SKIP LOCKED together? ShedLock stops things running in parallel (sort of), but the other makes parallel processing possible.

jeeybee 12 days ago

I maintain a small Postgres-native job queue for Python called PGQueuer: https://github.com/janbjorge/pgqueuer

It uses the same core primitives people are discussing here (FOR UPDATE SKIP LOCKED for claiming work; LISTEN/NOTIFY to wake workers), plus priorities, scheduled jobs, retries, heartbeats/visibility timeouts, and SQL-friendly observability. If you’re already on Postgres and want a pragmatic “just use Postgres” queue, it might be a useful reference / drop-in.
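For anyone curious, the LISTEN/NOTIFY half is roughly this (a generic sketch with an assumed jobs table, not PGQueuer's actual schema):

    -- Wake idle workers on enqueue instead of making them poll
    CREATE FUNCTION notify_new_job() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('jobs', NEW.id::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER jobs_notify AFTER INSERT ON jobs
        FOR EACH ROW EXECUTE FUNCTION notify_new_job();

    -- Each worker runs this once, then blocks until a notification arrives
    LISTEN jobs;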

cpursley 12 days ago

nextaccountic 12 days ago

Indeed, pgmq is exactly the Postgres queueing system that you would build from scratch (for update skip locked and all that), except it's already built. Cloud providers should install this extension by default - it's in a really sweet spot for when you don't want or need a separate queue

shoo 12 days ago

re: SKIP LOCKED, introduced in Postgres 9.5 - here's an archived copy [†] of the excellent 2016 2ndQuadrant post discussing it

https://web.archive.org/web/20240309030618/https://www.2ndqu...

corresponding HN discussion thread from 2016 https://news.ycombinator.com/item?id=14676859

[†] it seems that all the old 2ndquadrant.com blog post links have been broken after their acquisition by enterprisedb

upmostly 12 days ago

We just published a detailed walkthrough of this exact pattern with concrete examples and failure modes:

PostgreSQL FOR UPDATE SKIP LOCKED: The One-Liner Job Queue https://www.dbpro.app/blog/postgresql-skip-locked

It covers the race condition, the atomic claim behaviour, worker crashes, and how priorities and retries are usually layered on top. Very much the same approach described in the old 2ndQuadrant post, but with a modern end-to-end example.

victor106 12 days ago

Love your product. Will you ever provide support for DuckDB/MotherDuck? Wish there were a generic way to add any database type.

upmostly 12 days ago

Thanks, glad you like it.

DuckDB is on our radar. In practice each database still needs some engine-specific work to feel good, so a fully generic plugin system is harder than it sounds. We are thinking about how to do this in a scalable way.

cmgriffing 12 days ago

Only slightly related, but I have been using Oban as a Postgres-native message queue in the Elixir ecosystem and loving it. For my use case, it’s so much simpler than spinning up another piece of infrastructure like Kafka or RabbitMQ.

nottorp 12 days ago

Hmm that raises a question for me.

I haven't done a project that uses a database (be it SQL or NoSQL) where the amount of deletes is comparable to the amount of inserts (and far larger than tens per day, of course).

How does your average DB server handle that, performance-wise? Intuitively I'd think it's optimized more for inserts than for deletes, but of course I may be wrong.

branko_d 12 days ago

Why use a string as the status, instead of a boolean? That just wastes space for no discernible benefit, especially since the status is indexed. Also, consider turning event_type into an integer if possible, for similar reasons.

Furthermore, why have two indexes with the same leading field (status)?

storystarling 12 days ago

Boolean is rarely enough for real production workloads. You need a 'processing' state to handle visibility timeouts and prevent double-execution, especially if tasks take more than a few milliseconds. I also find it crucial to distinguish between 'retrying' for transient errors and 'failed' for dead letters. Saving a few bytes on the index isn't worth losing that observability.

branko_d 12 days ago

> Boolean is rarely enough for real production workloads. You need a 'processing' ... 'retrying'... 'failed' ...

If you have more than 2 states, then just use an integer instead of a boolean.

> Saving a few bytes on the index isn't worth losing that observability.

Not sure why having a few well-known string values is more "observable" than having a few well-known integer values.

Also, it might be worth having better write performance. When PostgreSQL updates a row, it actually creates a new physical row version (for MVCC), so the less it has to copy the better.

throw_away_623 12 days ago

Postgres supports enums, which would fit this use case well. You get the readability of text and the storage efficiency of an integer. Adding new values used to require a bit of work, but version 9.1 introduced support for it.
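For example (type and value names made up):

    CREATE TYPE dlq_status AS ENUM ('pending', 'processing', 'retrying', 'failed');

    -- Since 9.1, new values can be added in place:
    ALTER TYPE dlq_status ADD VALUE 'archived';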

hans_castorp 12 days ago

Postgres does index de-duplication. So it's likely that even if you change the strings to enums, the index won't be that much smaller.

> Furthermore, why have two indexes with the same leading field (status)?

That indeed is a valid question.

awesome_dude 12 days ago

I think that using Postgres as the message/event broker is valid, and having a DLQ on that Postgres system is also valid, and usable.

Having SEPARATE DLQ and Event/Message broker systems is not (IMO) valid - because a new point of failure is being introduced into the architecture.

Andys 12 days ago

We did this at Chargify, but with MySQL. If Redis was unavailable, it would dump the job as a JSON blob to a mysql table. A cron job would periodically clean it out by re-enqueuing jobs, and it worked well.

nicoritschel 12 days ago

lol a FOR UPDATE SKIP LOCKED post hits the HN homepage every few months it feels like

whateveracct 12 days ago

and another CTO will use this meme as a reason to "just use Postgres" for far longer than they should lmao

throw_away_623 12 days ago

I’ll take “just use Postgres” over “prematurely add three new systems” any day. Complexity has a cost too.

Using Postgres too long is probably less harmful than adding unnecessary complexity too early.

whateveracct 12 days ago

It probably is, but I don't like to operate as if I will inevitably make giant mistakes. Sometimes there isn't a trade off - you can just be good lolol.

Both are pretty bad.

gytisgreitai 12 days ago

Would be interesting to see the numbers this system processes. My bet is that they are not that high.

tantalor 12 days ago

This is logging.

bdangubic 12 days ago

Care to elaborate? I do not understand how this is logging - it is quite the opposite of logging, as once the retry works the DLQ gets wiped out. I would assume you would want logging to be persistent, with at least a little bit of retention?

reactordev 13 days ago

Another day, another “Using PostgreSQL for…” thing it wasn’t designed for. This isn’t a good idea. What happens when the queue goes down and all messages are dead lettered? What happens when you end up with competing messages? This is not the way.

direwolf20 12 days ago

The other system you're using that isn't Postgres can also go down.

Many developers overcomplicate systems. In the pursuit of 100% uptime, if you're not extremely careful, you removed more 9s with complexity than you added with redundancy. And although hyperscalers pride themselves on their uptime (Amazon even achieved three nines last year!) in reality most customers of most businesses are fine if your system is down for ten minutes a month. It's not ideal and you should probably fix that, but it's not catastrophic either.

hinkley 12 days ago

What I’ve found is that, particularly with internal customers, they’re fine with an hour a month, possibly several, as long as not all of your eggs are in one basket.

The centralization pushes create a situation where, if I have a task that needs three tools to accomplish and one of them goes down, they're all down. So all I can do is go for coffee or an early lunch, because I can't sub another task into this time slot. They're all blocked by The System being down, instead of a system being down.

If CI is borked I can work on docs and catch up on emails. If the network is down or NAS is down and everything is on that NAS, then things are dire.

plaguuuuuu 12 days ago

good luck doing anything if kafka is down though

reactordev 12 days ago

>The other system you're using that isn't Postgres can also go down.

Only if the DC gets nuked.

Many developers overcomplicate systems and throw a database at the problem.

mwigdahl 12 days ago

Wow, TIL there was an atomic attack on the capitol in October!

direwolf20 12 days ago

Which system is immune to all downtime except the DC getting nuked?

fcarraldo 13 days ago

There are a ton of job/queue systems out there that are based on SQL DBs. GoodJob and Supabase Queues are two examples.

It’s not usable for high scale processing but most applications just need a simple queue with low depth and low complexity. If you’re already managing PSQL and don’t want to add more management to your stack (and managed services aren’t an option), this pattern works just fine. Go back 10-15yrs and it was more common, especially in Ruby shops, as teams willing to adopt Kafka/Cassandra/etc were more rare.

reactordev 12 days ago

And there are a ton that aren’t.

tlb 12 days ago

I think the PG designers would be surprised by the claim that it wasn't designed for this. Database designers try very hard to support the widest possible range of uses.

If all queue actions are failing instantly, you probably want a separate throttle so that you don't remove them from the Kafka queue: you'd rather keep them there and resume processing them normally, rather than from the DLQ, once processing is working again. In fact, the rate limit implicitly enforced by adding failure records to the DLQ helps with this.

hnguyen14 13 days ago

How so? There are queues that use SQL (or no-SQL) databases as the persistence layer. Your question is more specific to the implementation, not the database as persistence layer itself. And there are ways to address it.

senbrow 13 days ago

Criticism without a better solution is only so valuable.

How would you do this instead, and why?

reactordev 12 days ago

Watching a carpenter try to weld is equally only so valuable. I think the explanation is clear.

odie5533 13 days ago

You wouldn't ack the message if you're not in a position to process it.

trympet 12 days ago

I prefer using MS Exchange mailboxes for my message queue.

tonymet 12 days ago

Postgres is essentially a b-tree with a remote interface. Would you use a b-tree to store a dead letter queue? What is the big-O of insert and delete? What happens when it grows?

Postgres has a query interface, replication, backup and many other great utilities. And it’s well supported, so it will work for low-demand applications.

Regardless, you’re using the wrong data structure with the wrong performance profile, and at the margins you will spend a lot more money and time than necessary running it. And service will suffer.

quibono 12 days ago

What would you use?

tonymet 11 days ago

For parity of functionality and better performance: a Redis list.