Postgres defaults to "read committed" isolation, rather than repeatable read or serializable isolation. I have mostly noticed the difference when updating HTTP session objects in a transaction; with "read committed" isolation, concurrent requests can overwrite newly-written data. With serializable isolation, the transaction that tries to overwrite new data with older data (read at the beginning of the request) is aborted.

(The fix for this is difficult, but I basically don't use the session for much transient data anymore, so this became less of an issue. Before setting up the database to be strict, I never even thought of this as a problem. But obviously it is.)
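
A minimal sketch of the race, using psycopg2 and a hypothetical sessions table with a jsonb data column (the table, column, and connection string are made up for illustration):

    import psycopg2
    import psycopg2.extras

    # Two connections simulate two concurrent HTTP requests.
    # The default isolation level is READ COMMITTED.
    a = psycopg2.connect("dbname=app")
    b = psycopg2.connect("dbname=app")

    # Both requests read the same session row at the start.
    with a.cursor() as cur:
        cur.execute("SELECT data FROM sessions WHERE id = %s", ("s1",))
        data_a = cur.fetchone()[0]
    with b.cursor() as cur:
        cur.execute("SELECT data FROM sessions WHERE id = %s", ("s1",))
        data_b = cur.fetchone()[0]

    # Request A writes new data and commits.
    data_a["cart"] = ["item-42"]
    with a.cursor() as cur:
        cur.execute("UPDATE sessions SET data = %s WHERE id = %s",
                    (psycopg2.extras.Json(data_a), "s1"))
    a.commit()

    # Request B now writes back its stale copy, silently clobbering
    # A's update. Under SERIALIZABLE (or REPEATABLE READ), this UPDATE
    # would raise psycopg2.errors.SerializationFailure instead.
    with b.cursor() as cur:
        cur.execute("UPDATE sessions SET data = %s WHERE id = %s",
                    (psycopg2.extras.Json(data_b), "s1"))
    b.commit()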




I wouldn't say it is "obviously" a problem: READ COMMITTED has perfectly reasonable semantics, and is preferable for many applications (e.g. many apps aren't written to retry transactions that abort due to serialization conflicts). Which isolation level ought to be the default is a matter of debate, but I don't think using RC by default is "obviously" wrong -- if an application depends on a particular isolation level for correctness, it should set it explicitly in any case.
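
Setting it explicitly is a one-liner either way; a sketch with psycopg2 (the connection string is illustrative):

    import psycopg2

    conn = psycopg2.connect("dbname=app")

    # Per-session: every transaction on this connection runs SERIALIZABLE.
    conn.set_session(isolation_level="SERIALIZABLE")

    # Or per-transaction, as the first statement of the transaction:
    with conn.cursor() as cur:
        cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
        cur.execute("UPDATE sessions SET data = %s WHERE id = %s",
                    ('{"cart": []}', "s1"))
    conn.commit()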


Just remember that with read committed isolation, certain access patterns -- read-modify-write cycles in particular -- can silently trash your data.

People are afraid of transactions aborting because they think it's an error. It's not: you just retry the transaction, which will now see the current data. "Read committed" opens you up to race conditions in exchange for transactions rarely (if ever) aborting on their own.
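
The retry loop is short. A sketch with psycopg2, where run_with_retry and fn are placeholder names (serialization failures are SQLSTATE 40001, and may surface at any statement or at commit):

    import psycopg2
    import psycopg2.errors

    def run_with_retry(conn, fn, max_attempts=5):
        """Run fn(cursor) in a SERIALIZABLE transaction, retrying on
        serialization failures."""
        for attempt in range(max_attempts):
            try:
                with conn.cursor() as cur:
                    cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
                    result = fn(cur)
                conn.commit()
                return result
            except psycopg2.errors.SerializationFailure:
                # Abort the failed attempt; the retry sees current data.
                conn.rollback()
        raise RuntimeError("gave up after %d serialization failures" % max_attempts)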

I think the default should be "never trash my data without telling me".



