Talking about Isolation in ACID

Playing with MySQL

Read Committed

  • prevent dirty writes

    • row-level locks
  • prevent dirty reads

    • row-level locks could be used, but they are not performant
    • instead, write the new value while remembering the old one; return the old value to other readers until the transaction commits (see the sketch below)
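
A minimal sketch of that second approach in MySQL (assuming an account table like the one used below, with id 3 holding 3000, and the default InnoDB engine):

-- session A --
mysql> set session transaction isolation level read committed;
mysql> start transaction;
mysql> update account set amount = 9999 where id = 3;
-- not yet committed --

-- session B --
mysql> set session transaction isolation level read committed;
mysql> start transaction;
mysql> select amount from account where id = 3;
-- returns the old value 3000, not the uncommitted 9999: no dirty read --

-- session A --
mysql> rollback;
-- the uncommitted value was never visible to other transactions --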

Snapshot isolation (or repeatable-read)

  • prevent nonrepeatable reads

nonrepeatable read

-- isolation level: READ COMMITTED --

-- session A starts --
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from account;
+----+--------+
| id | amount |
+----+--------+
|  3 |   3000 |
+----+--------+
-- not yet committed --

-- session B starts --
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> update account set amount = 6000 where id = 3;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> commit;
Query OK, 0 rows affected (0.01 sec)
-- session B ends --

-- session A continues --
mysql> select * from account;
+----+--------+
| id | amount |
+----+--------+
|  3 |   6000 |
+----+--------+
1 row in set (0.00 sec)

-- nonrepeatable read: the same query returns a different result within the same transaction --

This is not good for

  • backups
  • table scans

Both take a long time to complete, and nonrepeatable reads during that time can lead to inconsistent data.

How? By giving each transaction a consistent snapshot to read from (this is what MySQL's REPEATABLE READ does).
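
A minimal sketch rerunning the experiment above at that level (assuming the row has been reset to 3000): session A's snapshot does not move, so the second read returns the same value.

-- session A --
mysql> set session transaction isolation level repeatable read;
mysql> start transaction;
mysql> select amount from account where id = 3;
-- returns 3000 --

-- session B --
mysql> start transaction;
mysql> update account set amount = 6000 where id = 3;
mysql> commit;

-- session A --
mysql> select amount from account where id = 3;
-- still returns 3000: the read is repeatable --
mysql> commit;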

Serializable

  • prevent lost updates

lost update

mysql> -- session A starts --
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from account where id = 1;
+----+--------+
| id | amount |
+----+--------+
|  1 |   1000 |
+----+--------+
1 row in set (0.00 sec)

mysql> -- application logic: total = 1000 + 1 = 1001


mysql> -- session B starts --
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from account where id = 1;
+----+--------+
| id | amount |
+----+--------+
|  1 |   1000 |
+----+--------+
1 row in set (0.00 sec)

mysql> -- application logic: total = 1000 + 1 = 1001

mysql> -- session A returns --
mysql> update account set amount = 1001 where id = 1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> commit;
Query OK, 0 rows affected (0.01 sec)

-- session B returns --
mysql> update account set amount = 1001 where id = 1;  -- this should be 1002 if these two are executed serially
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

(select ... for update can also be used to prevent this, e.g. select * from account where id = 1 for update; see the sketch below.)
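
A minimal sketch of that locking variant (assuming the same two sessions and the same account table): the explicit row lock makes the second transaction wait, so the two increments are applied one after the other.

-- session A --
mysql> start transaction;
mysql> select * from account where id = 1 for update;
-- acquires an exclusive lock on the row --

-- session B --
mysql> start transaction;
mysql> select * from account where id = 1 for update;
-- blocks here until session A commits or rolls back --

-- session A --
mysql> update account set amount = 1001 where id = 1;
mysql> commit;

-- session B is unblocked and now reads 1001 --
mysql> update account set amount = 1002 where id = 1;
mysql> commit;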

How can we achieve serializability?

  • execute transactions serially
    • single thread (e.g. Redis)
    • partitioning can be used to leverage multi-core CPUs
  • two-phase locking
    • writers block not only other writers (as in READ COMMITTED) but also readers, and vice versa
    • exclusive lock for writes, shared lock for reads (see the sketch below)
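
MySQL's SERIALIZABLE level behaves this way: with autocommit disabled, plain SELECTs are implicitly turned into shared-lock reads, so readers and writers block each other. A minimal sketch, assuming the same account table:

-- session A --
mysql> set session transaction isolation level serializable;
mysql> start transaction;
mysql> select * from account where id = 1;
-- the plain read implicitly takes a shared lock --

-- session B --
mysql> start transaction;
mysql> update account set amount = 2000 where id = 1;
-- blocks: the write needs an exclusive lock, which conflicts with A's shared lock,
-- and stays blocked until session A commits or rolls back --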