@arvenil
Last active August 15, 2023 14:14
SKIP LOCKED with partitioned table

Create the data table and partition it by state

CREATE TABLE data (
   id bigserial not null,
   state smallint not null DEFAULT 1,
   updated_at timestamp without time zone default now()
) partition by list(state);
create table data_pending partition of data for values in (1);
create table data_processing partition of data for values in (2);
create table data_done partition of data for values in (3);

Generate test data

INSERT INTO data
SELECT generate_series(1,1000) AS id, 1 AS state, NOW();
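
As a quick sanity check (my addition, not part of the original gist), tableoid shows which partition each row was routed to; all 1000 rows should land in data_pending.

-- each row's tableoid tells you which partition it is stored in
SELECT tableoid::regclass AS partition, count(*)
FROM data
GROUP BY 1;
-- expected: a single row for data_pending with count 1000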

Move data from pending state to processing in batches

UPDATE data
SET state = 2, updated_at = NOW()
WHERE id IN (
    SELECT id
    FROM data
    WHERE state = 1
    LIMIT 10 FOR UPDATE SKIP LOCKED
)
RETURNING id;

You can now process them in the application and move them to the done state when finished.
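
For completeness, a sketch of that final step (my addition, using the same pattern as above): once the application has finished a batch, move those rows on to the done partition.

-- ids 1, 2 and 3 stand in for whatever ids the previous batch returned
UPDATE data
SET state = 3, updated_at = NOW()
WHERE state = 2
  AND id IN (1, 2, 3);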

However, this doesn't work on the partitioned table: the UPDATE changes the partition key (state), so each updated row physically moves from data_pending to data_processing, and a concurrent FOR UPDATE SKIP LOCKED that runs into such a row fails with the following error

ERROR: tuple to be locked was already moved to another partition due to concurrent update
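
The mechanism is easy to see on a single row (a small illustration I've added): changing state physically relocates the row to another partition, so a concurrent transaction that was about to lock the old tuple version has nothing local to follow.

-- updating the partition key moves the row between partitions
UPDATE data SET state = 2, updated_at = NOW() WHERE id = 1;

SELECT count(*) FROM data_pending    WHERE id = 1;  -- 0
SELECT count(*) FROM data_processing WHERE id = 1;  -- 1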

Here is the full script to test this

cat > test-skip-locked-with-partitions << 'SCRIPT'
PSQL_CMD="psql -q -U postgres"

# (re)create the partitioned table and seed it with 1000 pending rows
eval $PSQL_CMD > /dev/null << EOF
CREATE TABLE IF NOT EXISTS data (
   id bigserial not null,
   state smallint not null DEFAULT 1,
   updated_at timestamp without time zone default now()
) partition by list(state);
CREATE TABLE IF NOT EXISTS data_pending partition of data for values in (1);
CREATE TABLE IF NOT EXISTS data_processing partition of data for values in (2);
CREATE TABLE IF NOT EXISTS data_done partition of data for values in (3);

INSERT INTO data
SELECT generate_series(1,1000) AS id, 1 AS state, NOW();
EOF

# claim a batch of 10 pending rows and move them to the processing partition
function run {
eval $PSQL_CMD > /dev/null << EOF
    UPDATE data
    SET state = 2, updated_at = NOW()
    WHERE id IN (
        SELECT id
        FROM data
        WHERE state = 1
        LIMIT 10 FOR UPDATE SKIP LOCKED
    )
    RETURNING id;
EOF
}

# fire 100 concurrent batch updates to trigger the race
for i in {1..100}; do (run &) done
SCRIPT
bash test-skip-locked-with-partitions

Which results in

root@e92e3fa7fecf:/# bash test-skip-locked-with-partitions
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update
ERROR:  tuple to be locked was already moved to another partition due to concurrent update

This works well if the table is not partitioned. However, the query with WHERE state = 1 struggles when there are millions of records, and adding an index doesn't help because the query planner tends to ignore it and do a sequential scan anyway. The workaround for now is to create a separate queue table, which is effectively custom partitioning.
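
A minimal sketch of that workaround (table and column names are mine, not from the gist): keep only the ids that still need work in a small queue table and claim a batch by deleting from it with SKIP LOCKED, so rows are removed rather than moved between partitions.

-- small side table holding only unprocessed ids
CREATE TABLE data_queue (
    id bigint primary key
);

-- claim up to 10 ids: locked rows are skipped, claimed rows are removed
WITH batch AS (
    SELECT id
    FROM data_queue
    LIMIT 10
    FOR UPDATE SKIP LOCKED
)
DELETE FROM data_queue
WHERE id IN (SELECT id FROM batch)
RETURNING id;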

INSERT SAD CAT PICTURE

nkravi commented Aug 15, 2023

I was wondering if you have found a solution for this?
We are running into the same issue.
