PostgreSQL corruption: unexpected zero pages in an index
https://www.postgresql.org/message-id/1184245756.24101.178.camel@coppola.muc.ecircle.de
https://www.cybertec-postgresql.com/en/how-to-corrupt-your-postgresql-database/
The reported error:
contains unexpected zero page at block 35651600
adjust=# select pg_relation_filepath('indexname');
pg_relation_filepath
--------------------------------------------------
pg_tblspc/5392705/PG_12_201909212/114783/5412650
ls -lrt pg_tblspc/5392705/PG_12_201909212/114783/5412650\.* | wc -l
473
There are 473 1 GB segment files ("chunks") for the index.
To locate the block, first work out how many 8k blocks fit in each 1 GB chunk:
blocks_per_chunk = 1 GB / 8 kB = (1024 * 1024) / 8 = 131072
The chunk number is then:
chunk = block_id / blocks_per_chunk = 35651600 / 131072 = 272
and the block id has to be made relative to that chunk file's start:
chunk_block_id = block_id % blocks_per_chunk = 35651600 % 131072 = 16
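The same arithmetic as a small shell sketch (the 8k block size and 1 GB segment size are the PostgreSQL defaults assumed here; the block id is the one from the error above):

block_id=35651600
blocks_per_chunk=$((1024 * 1024 * 1024 / 8192))   # 131072 blocks of 8k per 1 GB chunk
echo "chunk: $((block_id / blocks_per_chunk)), block within chunk: $((block_id % blocks_per_chunk))"
# chunk: 272, block within chunk: 16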
So, dump that block and hexdump it:
dd if=5412650.272 skip=16 bs=8k count=1 conv=noerror iflag=direct | hexdump
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.000152094 s, 53.9 MB/s
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0002000
All zeros. Check the adjacent blocks (count the non-zero bytes in blocks 16..20 of the same chunk):
for i in {16..20}; do dd if=5412650.272 skip=$i bs=8k count=1 conv=noerror iflag=direct | tr -d '\0' | wc -c; done
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.000295382 s, 27.7 MB/s
0
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.000193171 s, 42.4 MB/s
5782
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.000142676 s, 57.4 MB/s
3926
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.000134201 s, 61.0 MB/s
3803
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.000107582 s, 76.1 MB/s
3864
"indexname" btree (col1, col2)
adjust=# select 1 from table where index predicates;
ERROR: index "indexname" contains unexpected zero page at block 209560
HINT: Please REINDEX it.
Going with the same logic as above: chunk = 209560 / 131072 = 1 (file 5412650.1), chunk_block_id = 209560 % 131072 = 78488.
dd if=5412650.1 skip=78488 bs=8k count=1 conv=noerror iflag=direct | tr -d '\0' | wc -c
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 8.1793e-05 s, 100 MB/s
0
adjust=# set zero_damaged_pages TO on;
SET
adjust=# select 1 from trackers where mobile_app_id in (-1, 1000000000) and ltree2text(name) = 'xxxxxxx';
?column?
----------
(0 rows)
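zero_damaged_pages only makes the session treat the damaged page as empty instead of erroring out; the HINT already names the real fix, rebuilding the index. A minimal sketch, assuming the database and index names from the example above (REINDEX ... CONCURRENTLY is available from PostgreSQL 12):

# rebuild the damaged index without blocking writes
psql -d adjust -c 'REINDEX INDEX CONCURRENTLY indexname;'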
# reproduce the detection on a scratch file: write random data, zero one block, then find it
root@zfstest:~# dd if=/dev/urandom of=/tmp/random bs=8k count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB, 78 MiB) copied, 0.290177 s, 282 MB/s
# count the non-zero bytes in the whole file
root@zfstest:~# dd if=/tmp/random bs=8k | tr -d '\0' | wc -c
10000+0 records in
10000+0 records out
81920000 bytes (82 MB, 78 MiB) copied, 0.133213 s, 615 MB/s
81600812
# overwrite block 10 with 8k of zeros
root@zfstest:~# dd if=/dev/zero seek=10 bs=8k of=/tmp/random conv=notrunc,noerror count=1
1+0 records in
1+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.00037571 s, 21.8 MB/s
# confirm the drop in the count of non-zero bytes
root@zfstest:~# dd if=/tmp/random bs=8k | tr -d '\0' | wc -c
10000+0 records in
10000+0 records out
81920000 bytes (82 MB, 78 MiB) copied, 0.129599 s, 632 MB/s
81592651
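The drop in the count matches the non-zero bytes that happened to live in block 10 before it was zeroed:

echo $((81600812 - 81592651))   # 8161, i.e. 8161 of the 8192 bytes in that block were non-zero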
# then use ddrescue in generate mode to map out the zero-filled regions and find the largest one
root@zfstest:~# ddrescue -b 1 --generate-mode /dev/zero /tmp/random /tmp/mapfile
GNU ddrescue 1.26
Press Ctrl-C to interrupt
Initial status (read from mapfile)
rescued: 0 B, generated: 0 B
Current status
rescued: 81592 kB, generated: 81920 kB, current rate: 79626 kB/s
opos: 81920 kB, run time: 1s, average rate: 79626 kB/s
Finished
root@zfstest:~# grep '0x.*0x.*[?]' /tmp/mapfile | awk -F ' ' '{print $2}' | sort -ru | head -n 1
0x00002000   # = 8192 bytes: the largest all-zero region is exactly the one 8k block we zeroed
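Without ddrescue, the same dd/tr/wc trick from earlier can scan every 8k block of the file and report the ones that are entirely zero (a sketch against the /tmp/random file above):

blocks=$(( $(stat -c %s /tmp/random) / 8192 ))
for ((i = 0; i < blocks; i++)); do
    n=$(dd if=/tmp/random skip=$i bs=8k count=1 status=none | tr -d '\0' | wc -c)
    [ "$n" -eq 0 ] && echo "block $i is all zeros"
done
# should report block 10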