Pessimistic Locking – When Optimism Just Won’t Cut It
Let’s talk about a locking strategy that isn’t as “hopeful” as its name suggests — pessimistic locking.
You probably remember I wrote about optimistic concurrency control recently, right? This post is kind of the “other side” of that story.
Today, we’ll look into how pessimistic locking works, when it actually makes sense to use it, and why it’s not the dinosaur some devs assume it is.
🤔 Why Lock Anything at All?
Let’s say you’re building a system that allows users to book concert tickets. You don’t want 100 people thinking they all booked the last 2 seats, right? That’s where concurrency control kicks in — keeping your data from going rogue in high-traffic situations.
🛡 What is Pessimistic Locking?
Pessimistic locking assumes conflicts will happen — and takes a “don’t let anyone else touch it while I’m using it” approach.
It works like this:
- You lock a resource before you read or write it.
- While the lock is held, no other transaction can take a conflicting lock on that resource: would-be writers (and other locking reads) have to wait until you're done.
- After you’re done, you release the lock and go on your way.
This is typically implemented via SELECT ... FOR UPDATE
in SQL databases, which locks the selected row until the transaction commits or rolls back.
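To make that concrete, here's a minimal sketch for the ticket-booking scenario from the intro. The seats table, its columns, and the IDs are made up for illustration, and the syntax is PostgreSQL-flavored, though most SQL databases support the same pattern:

```sql
BEGIN;

-- Lock the seat row; any other transaction asking for this lock now waits.
SELECT status
FROM seats
WHERE seat_id = 42
FOR UPDATE;

-- Safe to check and update: nobody else can change this row until we finish.
UPDATE seats
SET status = 'booked', booked_by = 1337
WHERE seat_id = 42
  AND status = 'free';

COMMIT;  -- the lock is released here (or on ROLLBACK)
```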
🛠 When Should You Actually Use It?
Let’s be honest — we don’t always need to be this cautious. But here are some times when pessimistic locking really shines:
- 🔄 High contention: Multiple users updating the same data frequently.
- 💳 Financial transactions: Where accuracy is more critical than speed.
- 🧾 Inventory updates: You don’t want people buying stock that doesn’t exist.
For example, if you're reserving hotel rooms or deducting balances, and your user experience or business model can't tolerate retries, pessimistic locking is your guy.
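As a sketch, a balance deduction with pessimistic locking might look something like this (again assuming a made-up accounts table and amounts):

```sql
BEGIN;

-- Lock the account row first, so the balance we read can't change under us.
SELECT balance
FROM accounts
WHERE account_id = 7
FOR UPDATE;

-- The application checks that the returned balance covers the amount, then:
UPDATE accounts
SET balance = balance - 50
WHERE account_id = 7;

COMMIT;
```

Because the row is locked before the read, two concurrent deductions can't both see the same starting balance; the second one simply waits for the first to commit.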
⚠ But... There Are Trade-offs
Yes, like anything good in life, it comes with baggage:
- ❌ Performance Hit: It slows things down because locks can block other transactions.
- 🔁 Deadlocks: If two transactions end up waiting on each other's locks, the database either hangs until a timeout or detects the cycle and kills one of them, forcing a retry anyway (a couple of mitigations are sketched below).
- 🧯 Extra Cleanup: If a process crashes mid-transaction, locks might linger unless properly handled.
So, while it helps avoid retries (unlike optimistic locking), you could end up locking the DB so tight that nothing moves — classic over-correction.
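That said, you're not defenseless. Here's a rough sketch of the usual mitigations, using PostgreSQL-flavored syntax and the same made-up tables as above; other databases expose equivalent knobs under different names:

```sql
-- 1. Don't wait forever: cap how long a statement may wait for a lock.
SET lock_timeout = '2s';

-- 2. Or fail fast instead of queuing behind someone else's lock.
SELECT status FROM seats WHERE seat_id = 42 FOR UPDATE NOWAIT;

-- 3. Avoid deadlocks by always locking rows in the same order,
--    e.g. when touching two accounts, lock the lower id first.
BEGIN;
SELECT balance
FROM accounts
WHERE account_id IN (3, 7)
ORDER BY account_id
FOR UPDATE;
-- ... move money between the two accounts ...
COMMIT;
```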
Conclusion
Pessimistic locking is not a villain. It's a tool — and like any tool, it’s about using it in the right context.
Don’t use a chainsaw when a scalpel will do — but don’t hesitate to bring out the big guns when things get chaotic.
In many systems I’ve seen (and sometimes inherited 😅), folks jump to the “let’s just retry everything” approach without even considering whether simple row-level locks might solve their issue cleanly and reliably.
And if you’re thinking: "Wait, so when should I pick one over the other?"
Well, that sounds like a good idea for the next post 😉