@brianredbeard
Created January 6, 2022 06:28
cache disk setup - a shitty history dump of my logs from DIY tiered cache disk setups
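In outline: both the spinning-disk md array (/dev/md0) and an NVMe partition (/dev/nvme0n1p2) become LVM physical volumes in one volume group; two small LVs on the NVMe are converted into a dm-cache pool, which is then attached in writethrough mode to a large data LV on the array; XFS goes on top and a 4k random read/write fio run sanity-checks the result.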
[bharrington@host10 media]$ pvcreate /dev/md0
WARNING: Running as a non-root user. Functionality may be unavailable.
/run/lvm/lvmetad.socket: access failed: Permission denied
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
/run/lock/lvm/P_orphans:aux: open failed: Permission denied
Can't get lock for orphan PVs.
[bharrington@host10 media]$ sudo pvcreate /dev/md0
Physical volume "/dev/md0" successfully created.
[bharrington@host10 media]$ sudo pvcreate /dev/nvme0n1p2
Physical volume "/dev/nvme0n1p2" successfully created.
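Both tiers are now LVM physical volumes: the md array for bulk capacity, the NVMe partition for the cache. (The first pvcreate failed only because it ran without root; the sudo retries succeeded.) A quick sanity check might look like:

  # confirm both PVs exist and are not yet assigned to a VG
  sudo pvs -o pv_name,vg_name,pv_size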
[bharrington@host10 media]$ sudo lvcreate -L 60G -n cache_block bulk_vol /dev/nvme0n1p2
Volume group "bulk_vol" not found
Cannot process volume group bulk_vol
[bharrington@host10 media]$ sudo vgcreate bulk_vol /dev/md0 /dev/nvme0n1p2
Volume group "bulk_vol" successfully created
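A single volume group spanning both devices is what makes the tiering possible: the trailing PV argument on each lvcreate below pins that LV's extents to a specific device. To inspect the group, something like:

  # one VG, two member PVs
  sudo vgs -o vg_name,pv_count,vg_size,vg_free bulk_vol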
[bharrington@host10 media]$ sudo lvcreate -L 60G -n cache_block bulk_vol /dev/nvme0n1p2
Logical volume "cache_block" created.
[bharrington@host10 media]$ sudo lvcreate -L 2G -n cache_meta bulk_vol /dev/nvme0n1p2
Logical volume "cache_meta" created.
[bharrington@host10 media]$ sudo lvcreate --extents=100%FREE --name data bulk_vol /dev/md0
Logical volume "data" created.
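cache_block (the future cache data) and cache_meta (its metadata) both land on the NVMe, while data takes every remaining extent on the spinning array. lvmcache(7) suggests metadata of roughly 1/1000th of the cache data size with an 8MiB floor, so 2G for a 60G pool is comfortably oversized. Placement can be verified with, for instance:

  # the devices column shows which PV backs each LV
  sudo lvs -a -o lv_name,lv_size,devices bulk_vol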
[bharrington@host10 media]$ sudo lvconvert --type cache-pool --poolmetadata bulk_vol/cache_meta bulk_vol/cache_block
WARNING: Converting logical volume bulk_vol/cache_block and bulk_vol/cache_meta to cache pool's data and metadata volumes with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert bulk_vol/cache_block and bulk_vol/cache_meta? [y/n]: y
Converted bulk_vol/cache_block_cdata to cache pool.
[bharrington@host10 media]$ sudo lvconvert --type cache --cachepool bulk_vol/cache_block --cachemode writethrough bulk_vol/data
Do you want wipe existing metadata of cache pool bulk_vol/cache_block? [y/n]: y
Logical volume bulk_vol/data is now cached.
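writethrough sends every write to both the NVMe and the origin array before acknowledging it, so losing the cache device cannot lose data; the trade-off is that writes see no cache speedup. If the NVMe is trusted, the mode can be flipped later, e.g.:

  # riskier: dirty blocks live only on the cache until flushed back
  sudo lvchange --cachemode writeback bulk_vol/data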
[bharrington@host10 media]$ sudo lvs
  LV   VG       Attr       LSize    Pool          Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data bulk_vol Cwi-a-C--- <931.04g [cache_block] [data_corig]  0.00   0.39            0.00
  home rhel     -wi-ao----   56.31g
  root rhel     -wi-ao----   50.00g
  swap rhel     -wi-ao----   <6.00g
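In this output the leading C in Cwi-a-C--- marks data as a cached volume, and the bracketed [cache_block] and [data_corig] are the hidden cache pool and origin. Data% is cache occupancy and Meta% is metadata usage, both near zero on a cold cache; they should climb as I/O warms it, which can be watched with something like:

  # cache fill over time (Data% rises as blocks are promoted)
  sudo watch -n 5 lvs bulk_vol/data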
[bharrington@host10 media]$ sudo mkfs.xfs /dev/mapper/bulk_vol-
bulk_vol-cache_block_cdata bulk_vol-cache_block_cmeta bulk_vol-data bulk_vol-data_corig
[bharrington@host10 media]$ sudo mkfs.xfs /dev/mapper/bulk_vol-data
meta-data=/dev/mapper/bulk_vol-data isize=512    agcount=32, agsize=7627136 blks
         =                          sectsz=512   attr=2, projid32bit=1
         =                          crc=1        finobt=0, sparse=0
data     =                          bsize=4096   blocks=244065280, imaxpct=25
         =                          sunit=128    swidth=1024 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
log      =internal log              bsize=4096   blocks=119176, version=2
         =                          sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0
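mkfs.xfs picked the stripe geometry up from the underlying md array: sunit=128 and swidth=1024 are in 4KiB blocks here, i.e. a 512KiB chunk and an eight-chunk-wide stripe, matching the eight member disks visible in the fio stats below. Were auto-detection to fail (it can on stacked layers), the equivalent explicit form would be roughly:

  # hypothetical explicit geometry matching the detected values above
  sudo mkfs.xfs -d su=512k,sw=8 /dev/mapper/bulk_vol-data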
[bharrington@host10 media]$ sudo mount /dev/mapper/bulk_vol-data bulk/
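The mount is ad hoc; to survive a reboot it would need an fstab entry, keyed by UUID so device renumbering cannot break it. A sketch (the mount point /media/bulk is an assumption based on the prompt's working directory):

  # find the filesystem UUID, then append a matching fstab line
  sudo blkid /dev/mapper/bulk_vol-data
  echo 'UUID=<uuid-from-blkid>  /media/bulk  xfs  defaults  0 0' | sudo tee -a /etc/fstab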
[bharrington@host10 media]$ sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=bulk/random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=117MiB/s,w=38.5MiB/s][r=30.0k,w=9853 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6855: Tue Apr 10 01:26:10 2018
   read: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(3070MiB/42997msec)
   bw (  KiB/s): min=32296, max=180945, per=100.00%, avg=73337.81, stdev=25488.92, samples=85
   iops        : min= 8074, max=45238, avg=18334.26, stdev=6372.31, samples=85
  write: IOPS=6108, BW=23.9MiB/s (25.0MB/s)(1026MiB/42997msec)
   bw (  KiB/s): min=11134, max=60665, per=100.00%, avg=24506.34, stdev=8388.08, samples=85
   iops        : min= 2783, max=15166, avg=6126.35, stdev=2097.05, samples=85
  cpu          : usr=8.99%, sys=38.02%, ctx=170926, majf=0, minf=27
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwt: total=785920,262656,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
   READ: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=3070MiB (3219MB), run=42997-42997msec
  WRITE: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=1026MiB (1076MB), run=42997-42997msec
Disk stats (read/write):
  dm-5: ios=785271/262462, merge=0/0, ticks=194959/266394, in_queue=461600, util=99.01%, aggrios=283517/191344, aggrmerge=0/0, aggrticks=86869/217742, aggrin_queue=304726, aggrutil=72.64%
    dm-3: ios=737216/311204, merge=0/0, ticks=170991/613992, in_queue=785169, util=70.92%, aggrios=737216/303339, aggrmerge=0/8024, aggrticks=171195/564191, aggrin_queue=735348, aggrutil=70.95%
      nvme0n1: ios=737216/303339, merge=0/8024, ticks=171195/564191, in_queue=735348, util=70.95%
    dm-4: ios=0/159, merge=0/0, ticks=0/151, in_queue=151, util=0.11%
    dm-6: ios=113335/262671, merge=0/0, ticks=89616/39084, in_queue=128858, util=72.64%, aggrios=113335/262671, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
      md0: ios=113335/262671, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=24694/65699, aggrmerge=18202/1, aggrticks=48590/5825, aggrin_queue=53993, aggrutil=30.45%
        sda: ios=24640/65857, merge=18251/4, ticks=39049/5876, in_queue=44258, util=26.44%
        sdb: ios=24883/66064, merge=18051/4, ticks=46344/5894, in_queue=52386, util=28.28%
        sdc: ios=24606/66002, merge=18300/0, ticks=72853/6053, in_queue=76529, util=30.29%
        sdd: ios=24636/65499, merge=18266/0, ticks=39362/5736, in_queue=44741, util=26.18%
        sde: ios=24867/65390, merge=18016/0, ticks=35503/5820, in_queue=41496, util=26.56%
        sdf: ios=24803/65504, merge=18085/0, ticks=46833/5526, in_queue=52593, util=27.97%
        sdg: ios=24682/65550, merge=18228/0, ticks=37363/5612, in_queue=43124, util=26.46%
        sdh: ios=24439/65732, merge=18419/0, ticks=71418/6085, in_queue=76817, util=30.45%
[bharrington@host10 media]$
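Reading the disk stats, the cache appears to be doing its job: of the roughly 786k reads issued to the cached volume (dm-5), about 737k were served by nvme0n1 and only ~113k fell through to md0, while writethrough duplicated the writes to both tiers (md0 shows ~263k writes, matching the fio write count; dm-4, the cache metadata LV, saw almost no traffic). The raw dm-cache hit/miss counters, and a clean way to detach the cache later, look roughly like:

  # dm-cache status: metadata usage, cache block usage, then
  # read hits/misses, write hits/misses, demotions, promotions, dirty count
  sudo dmsetup status bulk_vol-data
  # detach the cache, flushing dirty blocks back and keeping the data LV intact
  # (on recent lvm2; --splitcache instead keeps the pool around for reuse)
  sudo lvconvert --uncache bulk_vol/data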