PostgreSQL Serializable Isolation - false positives
-- (This code was run in PostgreSQL 9.6.1)
-- Demonstration of how serializable isolation for PostgreSQL, which detects possible
-- interference between concurrent transactions, can produce false positives
-- in psql, create the following table (the original definition was omitted
-- from this page; any text-typed username column works for the demo):
CREATE TABLE users (username varchar);
-- open a second psql session
-- in session 1, type this
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM users WHERE username = 'alice';
INSERT INTO users ("username") VALUES ('alice');
-- don't type `END;` just yet; we want a concurrent transaction
-- in session 2, type this
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM users WHERE username = 'bob';
INSERT INTO users ("username") VALUES ('bob');
-- now type `END;` in each of the sessions.
-- whichever one you end second will give you
-- this error:
-- ERROR: 40001: could not serialize access due to read/write dependencies among transactions
-- DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
-- HINT: The transaction might succeed if retried.
-- LOCATION: PreCommit_CheckForSerializationFailure, predicate.c:4659
-- PostgreSQL is saying "I think the second transaction to end would have gotten
-- a different result if the first one had completed before it started."
-- In this case, it wouldn't have; serializable isolation is giving us a
-- false positive for interference between concurrent transactions.
felixge commented Jul 22, 2017

I just came across this after making a similar observation.

Myon on IRC gave me the hint to add a primary key to my val column, and it did indeed fix the false positive. It turns out Postgres relies on indexes to implement granular SSI; i.e., in your case you should be able to fix the false positive by adding an index on the username column.
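For the schema in this gist, that suggestion would look something like the following (the index name is illustrative, and note that the index only helps if the planner actually uses it):

```sql
-- With a usable index, SSI can take predicate locks on index pages/tuples
-- rather than a relation-level lock on the whole table.
CREATE UNIQUE INDEX users_username_idx ON users (username);
```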

A sequential scan will always necessitate a relation-level predicate lock. This can result in an increased rate of serialization failures. It may be helpful to encourage the use of index scans by reducing random_page_cost and/or increasing cpu_tuple_cost. Be sure to weigh any decrease in transaction rollbacks and restarts against any overall change in query execution time.
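The cost settings mentioned above can be changed per session (or in postgresql.conf); the values below are illustrative only, not tuned recommendations:

```sql
SET random_page_cost = 1.1;  -- default is 4.0; lower values favor index scans
SET cpu_tuple_cost = 0.02;   -- default is 0.01; higher values penalize scanning many tuples
```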


llooz commented Aug 29, 2017

I have the exact same issue even if I add a unique index on username.
@nathanl, have you solved this issue?


I know this is two years later, but I wanted to chime in. I've been tracking down a similar issue, and this information helped a lot. One thing to note is that when your SELECT statements aren't selective enough to use an index, the predicate lock appears to cover the entire table because of the seq scan that happens.

This means that when you use the examples on the postgres SSI wiki page, I assume (based on some casual observation) that it will have to table-scan for them every time due to:

  1. Missing indexes on the columns used, making it seq_scan, causing the whole table to be considered the "range".
  2. Not enough rows, causing the query plan to ignore the indexes even if they existed because of data loading performance when everything is in the same page.
  3. Too large of a percentage of the table match, making the query not selective enough to avoid a seq_scan even though indexes exist.
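A quick way to check which of these cases applies is to look at the query plan; if it reports a Seq Scan, SSI falls back to a relation-level predicate lock no matter what indexes exist:

```sql
EXPLAIN SELECT * FROM users WHERE username = 'alice';
-- "Seq Scan on users"    -> relation-level predicate lock (coarse, more false positives)
-- "Index Scan using ..." -> page/tuple-level predicate locks (granular)
```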
