@frostschutz
Created February 24, 2020 23:16
LVM RAID5 generates I/O failures and wild capacity change events
[44747.505134] device-mapper: raid: Loading target version 1.15.0
[44772.918083] device-mapper: raid: Device 1 specified for rebuild; clearing superblock
[44772.918086] device-mapper: raid: Superblocks created for new raid set
[44772.922200] md/raid1:mdX: active with 1 out of 2 mirrors
[44775.150317] mdX: bitmap file is out of date, doing full recovery
[44775.203299] md: recovery of RAID array mdX
[44785.658799] md: mdX: recovery done.
[44796.572343] md/raid:mdX: device dm-97 operational as raid disk 0
[44796.572345] md/raid:mdX: device dm-99 operational as raid disk 1
[44796.573124] md/raid:mdX: raid level 5 active with 2 out of 2 devices, algorithm 2
[44845.070481] device-mapper: raid: Device 2 specified for rebuild; clearing superblock
[44845.072436] md/raid:mdX: device dm-97 operational as raid disk 0
[44845.072438] md/raid:mdX: device dm-99 operational as raid disk 1
[44845.073271] md/raid:mdX: raid level 5 active with 2 out of 2 devices, algorithm 2
[44847.621879] Buffer I/O error on dev dm-95, logical block 65520, async page read
[44852.487365] md/raid:mdX: device dm-97 operational as raid disk 0
[44852.487369] md/raid:mdX: device dm-99 operational as raid disk 1
[44852.488578] md/raid:mdX: raid level 5 active with 2 out of 2 devices, algorithm 2
[44855.132817] Buffer I/O error on dev dm-95, logical block 65520, async page read
[44859.123541] md/raid:mdX: device dm-97 operational as raid disk 0
[44859.123545] md/raid:mdX: device dm-99 operational as raid disk 1
[44859.123548] md/raid:mdX: device dm-101 operational as raid disk 2
[44859.125109] md/raid:mdX: raid level 5 active with 3 out of 3 devices, algorithm 2
[44860.787631] Buffer I/O error on dev dm-95, logical block 65520, async page read
[44861.075380] md: reshape of RAID array mdX
[44870.261368] md: mdX: reshape done.
[44870.595122] dm-95: detected capacity change from 268435456 to 134217728
[45102.524496] md/raid:mdX: device dm-97 operational as raid disk 0
[45102.524500] md/raid:mdX: device dm-99 operational as raid disk 1
[45102.524502] md/raid:mdX: device dm-101 operational as raid disk 2
[45102.525630] md/raid:mdX: raid level 5 active with 3 out of 3 devices, algorithm 2
[45106.106515] Buffer I/O error on dev dm-95, logical block 98288, async page read
[45113.953785] md/raid:mdX: device dm-97 operational as raid disk 0
[45113.953790] md/raid:mdX: device dm-99 operational as raid disk 1
[45113.953792] md/raid:mdX: device dm-101 operational as raid disk 2
[45113.953794] md/raid:mdX: device dm-103 operational as raid disk 3
[45113.955677] md/raid:mdX: raid level 5 active with 4 out of 4 devices, algorithm 2
[45117.061737] Buffer I/O error on dev dm-95, logical block 98288, async page read
[45117.680692] md: reshape of RAID array mdX
[45136.388545] md: mdX: reshape done.
[45137.160666] dm-95: detected capacity change from 402653184 to 268435456
[49284.209178] perf: interrupt took too long (5034 > 5018), lowering kernel.perf_event_max_sample_rate to 39600
[53052.763923] XFS (dm-95): Mounting V5 Filesystem
[53052.858008] XFS (dm-95): Ending clean mount
[53052.862524] xfs filesystem being mounted at /mnt/tmp supports timestamps until 2038 (0x7fffffff)
[53131.450184] XFS (dm-95): Unmounting Filesystem
[53438.660606] md/raid:mdX: device dm-97 operational as raid disk 0
[53438.660607] md/raid:mdX: device dm-99 operational as raid disk 1
[53438.660608] md/raid:mdX: device dm-101 operational as raid disk 2
[53438.660608] md/raid:mdX: device dm-103 operational as raid disk 3
[53438.661150] md/raid:mdX: raid level 5 active with 4 out of 4 devices, algorithm 2
[53441.207212] Buffer I/O error on dev dm-95, logical block 163824, async page read
[53444.941753] md/raid:mdX: device dm-97 operational as raid disk 0
[53444.941756] md/raid:mdX: device dm-99 operational as raid disk 1
[53444.941758] md/raid:mdX: device dm-101 operational as raid disk 2
[53444.941760] md/raid:mdX: device dm-103 operational as raid disk 3
[53444.941761] md/raid:mdX: device dm-105 operational as raid disk 4
[53444.941763] md/raid:mdX: device dm-107 operational as raid disk 5
[53444.944090] md/raid:mdX: raid level 5 active with 6 out of 6 devices, algorithm 2
[53446.734477] Buffer I/O error on dev dm-95, logical block 163824, async page read
[53447.275191] md: reshape of RAID array mdX
[53477.039485] md: mdX: reshape done.
[53477.400607] dm-95: detected capacity change from 671088640 to 402653184
[53926.200154] md/raid:mdX: device dm-96 operational as raid disk 0
[53926.200156] md/raid:mdX: device dm-98 operational as raid disk 1
[53926.200157] md/raid:mdX: device dm-100 operational as raid disk 2
[53926.200157] md/raid:mdX: device dm-102 operational as raid disk 3
[53926.200158] md/raid:mdX: device dm-104 operational as raid disk 4
[53926.200159] md/raid:mdX: device dm-106 operational as raid disk 5
[53926.201044] md/raid:mdX: raid level 5 active with 6 out of 6 devices, algorithm 2
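Each "detected capacity change" line above fires right after a reshape completes, and the size moves backwards: dm-95 drops from the freshly grown size back to the pre-reshape size, and only reports the correct size again after the deactivate/reactivate shown further down. Converting the logged byte values to MiB with plain shell arithmetic (a reading aid added here, not part of the original log):
# byte values from the capacity-change lines, in MiB
for b in 134217728 268435456 402653184 671088640; do
    echo "$b bytes = $(( b / 1024 / 1024 )) MiB"
done
# 268435456 -> 134217728 : 256 MiB -> 128 MiB (after the reshape to 3 devices, 2 data stripes)
# 402653184 -> 268435456 : 384 MiB -> 256 MiB (after the reshape to 4 devices, 3 data stripes)
# 671088640 -> 402653184 : 640 MiB -> 384 MiB (after the reshape to 6 devices, 5 data stripes)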
# create 128M linear test LV
lvcreate -L128M -n raidtest HDD
# convert to raid5 (the first conversion only gets as far as raid1)
lvconvert --type raid5 HDD/raidtest --stripes 2
# convert to raid5 again (repeat until the LV is actually raid5)
lvconvert --type raid5 HDD/raidtest --stripes 2
# grow to 3 stripes
lvconvert --stripes 3 HDD/raidtest
# grow to 5 stripes
lvconvert --stripes 5 HDD/raidtest
# lvconvert --stripes 5 HDD/raidtest
Using default stripesize 64.00 KiB.
WARNING: Adding stripes to active logical volume HDD/raidtest will grow it from 6 to 10 extents!
Run "lvresize -l6 HDD/raidtest" to shrink it or use the additional capacity.
Are you sure you want to add 2 images to raid5 LV HDD/raidtest? [y/n]: y
Logical volume HDD/raidtest successfully converted.
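### sanity check (sketch added for context, not part of the original session): the warning above reports
### 6 -> 10 extents, i.e. each data stripe carries the original 128 MiB, so 2 stripes -> 256 MiB,
### 3 stripes -> 384 MiB, 5 stripes -> 640 MiB. Expected vs. reported size for the 5-stripe layout:
expected=$(( 5 * 128 * 1024 * 1024 ))              # 671088640 bytes = 640 MiB
actual=$(blockdev --getsize64 /dev/HDD/raidtest)   # should match, but see below
echo "expected=$expected actual=$actual"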
# shred -v -n 1 /dev/HDD/raidtest
shred: /dev/HDD/raidtest: pass 1/1 (random)...
shred: /dev/HDD/raidtest: pass 1/1 (random)...64KiB/640MiB 0%
shred: /dev/HDD/raidtest: pass 1/1 (random)...15MiB/640MiB 2%
shred: /dev/HDD/raidtest: pass 1/1 (random)...28MiB/640MiB 4%
shred: /dev/HDD/raidtest: pass 1/1 (random)...38MiB/640MiB 6%
shred: /dev/HDD/raidtest: pass 1/1 (random)...51MiB/640MiB 7%
shred: /dev/HDD/raidtest: pass 1/1 (random)...109MiB/640MiB 17%
shred: /dev/HDD/raidtest: pass 1/1 (random)...175MiB/640MiB 27%
shred: /dev/HDD/raidtest: pass 1/1 (random)...240MiB/640MiB 37%
shred: /dev/HDD/raidtest: pass 1/1 (random)...305MiB/640MiB 47%
shred: /dev/HDD/raidtest: pass 1/1 (random)...369MiB/640MiB 57%
shred: /dev/HDD/raidtest: error writing at offset 402653184: No space left on device
### why??? ###
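### (note added for context) 402653184 bytes is exactly 384 MiB, the old 3-stripe size, and it matches the
### "detected capacity change from 671088640 to 402653184" logged at [53477.400607] after the last reshape,
### so the device appears to have been shrunk back under the 640 MiB that shred started writing into:
echo $(( 402653184 / 1024 / 1024 ))   # 384 (MiB) - where shred hit ENOSPC
echo $(( 671088640 / 1024 / 1024 ))   # 640 (MiB) - what 5 data stripes should give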
# shred -v -n 1 /dev/HDD/raidtest
shred: /dev/HDD/raidtest: pass 1/1 (random)...
shred: /dev/HDD/raidtest: pass 1/1 (random)...52MiB/384MiB 13%
### 640 -> 384 for no reason ###
# blockdev --getsize64 /dev/HDD/raidtest
402653184
# lvchange -a n HDD/raidtest
# lvchange -a y HDD/raidtest
### back to normal: ###
# shred -v -n 1 raidtest
shred: raidtest: pass 1/1 (random)...
shred: raidtest: pass 1/1 (random)...57MiB/640MiB 8%
shred: raidtest: pass 1/1 (random)...127MiB/640MiB 19%
shred: raidtest: pass 1/1 (random)...190MiB/640MiB 29%
shred: raidtest: pass 1/1 (random)...254MiB/640MiB 39%
shred: raidtest: pass 1/1 (random)...320MiB/640MiB 50%
shred: raidtest: pass 1/1 (random)...380MiB/640MiB 59%
shred: raidtest: pass 1/1 (random)...447MiB/640MiB 69%
shred: raidtest: pass 1/1 (random)...511MiB/640MiB 79%
shred: raidtest: pass 1/1 (random)...575MiB/640MiB 89%
shred: raidtest: pass 1/1 (random)...640MiB/640MiB 100%