Use a slow hard disk (local or network-attached) for persistent storage and a fast SSD (preferably local) as a cache in front of it. The combined device can then serve as the data directory for Apache Solr.
In this example, the hard disk is at /dev/sdb and the SSD at /dev/nvme0n1. The entire hard disk (/dev/sdb) could be used for persistent storage, but here we use just a single partition (/dev/sdb1) as the persistent store.
root@asrock-new:/home/ishan# fdisk -l /dev/sdb /dev/nvme0n1
Output
Disk /dev/sdb: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Hitachi HDS72105
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2bca438b
Device    Boot      Start       End   Sectors   Size Id Type
/dev/sdb1            2048 243271679 243269632   116G 83 Linux
/dev/sdb2       243271680 486541311 243269632   116G 83 Linux
/dev/sdb3       486541312 729810943 243269632   116G 83 Linux
/dev/sdb4       729810944 976773167 246962224 117.8G 83 Linux
Disk /dev/nvme0n1: 119.24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: SAMSUNG MZVPV128HDGM-00000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb39a7154
Device         Boot      Start       End   Sectors Size Id Type
/dev/nvme0n1p1            2048 104859647 104857600  50G 83 Linux
/dev/nvme0n1p2       104859648 167774207  62914560  30G 83 Linux
/dev/nvme0n1p3       167774208 209717247  41943040  20G 83 Linux
/dev/nvme0n1p4       209717248 230688767  20971520  10G 83 Linux
Here are the various parameters for this exercise:
- Slow drive: /dev/sdb1
- Fast drive: /dev/nvme0n1p1
- Size of slow drive: 116GB (according to fdisk -l)
- Size of fast drive: 50GB (according to fdisk -l)
- Name of volume group: solrvg1
- Name of cached drive (HD + SSD cache): solrdata1
- Size of cache volume: 49GB (slightly under the 50GB partition, leaving room for the metadata volume)
- Size of metadata for caching: 100MB (this is sufficient for a 49GB cache)
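As a sanity check on the metadata size: per the lvmcache(7) man page, the cache-pool metadata should be roughly 1/1000 of the cache size, with an 8MiB minimum. A quick back-of-the-envelope calculation for our 49GB cache:

```shell
# Rough metadata sizing for a 49GiB cache (1/1000 rule of thumb)
CACHE_MIB=$((49 * 1024))        # cache size expressed in MiB
META_MIB=$((CACHE_MIB / 1000))  # suggested metadata size in MiB
echo "${META_MIB}MiB"           # well under the 100MiB we allocate
```

So the 100MB metadata volume leaves comfortable headroom.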
Here are the steps:
# Variables to edit
SLOWDEVICE=sda3
CACHEDEVICE=nvme0n1p3
VGGROUP=solrvg3
LVNAME=solrdata3
SLOWDEVSIZE=115G
CACHESIZE=19.8G
METADATASIZE=100m
# Set up the LVM and cache
pvcreate /dev/$SLOWDEVICE
vgcreate $VGGROUP /dev/$SLOWDEVICE
pvcreate /dev/$CACHEDEVICE
vgextend $VGGROUP /dev/$CACHEDEVICE
lvcreate -n $LVNAME -L $SLOWDEVSIZE $VGGROUP /dev/$SLOWDEVICE
lvcreate -n cac0 -L $CACHESIZE $VGGROUP /dev/$CACHEDEVICE
lvcreate -n met0 -L $METADATASIZE $VGGROUP /dev/$CACHEDEVICE
lvconvert -y --type cache-pool --poolmetadata $VGGROUP/met0 $VGGROUP/cac0
lvconvert -y --type cache --cachepool $VGGROUP/cac0 $VGGROUP/$LVNAME
# Create the filesystem and mount it
mkfs.btrfs /dev/mapper/$VGGROUP-$LVNAME
mkdir /mnt/$LVNAME
mount /dev/mapper/$VGGROUP-$LVNAME /mnt/$LVNAME
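Once the conversion completes, the cached LV and its hidden cache-pool and origin sub-volumes (names in brackets, such as [cac0_cpool] and [solrdata1_corig]) can be inspected with `lvs -a`. A small guarded sketch, safe to paste on any machine since it only queries when the volume group actually exists:

```shell
# Inspect the cached LV and its hidden sub-volumes
VGGROUP=solrvg1
if vgs "$VGGROUP" >/dev/null 2>&1; then
    lvs -a -o lv_name,lv_size,pool_lv,origin "$VGGROUP"
else
    echo "volume group $VGGROUP not found; run the setup steps first"
fi
```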
Example run using the above steps:
Output
root@asrock-new:/home/ishan# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
root@asrock-new:/home/ishan# vgcreate solrvg1 /dev/sdb1
Volume group "solrvg1" successfully created
root@asrock-new:/home/ishan# pvcreate /dev/nvme0n1p1
Physical volume "/dev/nvme0n1p1" successfully created.
root@asrock-new:/home/ishan# vgextend solrvg1 /dev/nvme0n1p1
Volume group "solrvg1" successfully extended
root@asrock-new:/home/ishan# lvcreate -n solrdata1 -L 115G solrvg1 /dev/sdb1
Logical volume "solrdata1" created.
root@asrock-new:/home/ishan# lvcreate -n cac0 -L 49G solrvg1 /dev/nvme0n1p1
Logical volume "cac0" created.
root@asrock-new:/home/ishan# lvcreate -n met0 -L 100m solrvg1 /dev/nvme0n1p1
Logical volume "met0" created.
root@asrock-new:/home/ishan# lvconvert -y --type cache-pool --poolmetadata solrvg1/met0 solrvg1/cac0
Converted solrvg1/cac0 and solrvg1/met0 to cache pool.
root@asrock-new:/home/ishan# lvconvert -y --type cache --cachepool solrvg1/cac0 solrvg1/solrdata1
Logical volume solrvg1/solrdata1 is now cached.
After this, the new cached volume should show up in fdisk -l:
root@asrock-new:/home/ishan# fdisk -l
....
Disk /dev/mapper/solrvg1-solrdata1: 115 GiB, 123480309760 bytes, 241172480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Now it is time to create a filesystem on this volume and mount it:
root@asrock-new:/home/ishan# mkfs.btrfs /dev/mapper/solrvg1-solrdata1
btrfs-progs v6.2
See http://btrfs.wiki.kernel.org for more information.
Performing full device TRIM /dev/mapper/solrvg1-solrdata1 (115.00GiB) ...
NOTE: several default settings have changed in version 5.15, please make sure
this does not affect your deployments:
- DUP for metadata (-m dup)
- enabled no-holes (-O no-holes)
- enabled free-space-tree (-R free-space-tree)
Label: (null)
UUID: 5eb0d96d-0a19-477e-9c43-6c7e443063dd
Node size: 16384
Sector size: 4096
Filesystem size: 115.00GiB
Block group profiles:
Data: single 8.00MiB
Metadata: DUP 1.00GiB
System: DUP 8.00MiB
SSD detected: no
Zoned device: no
Incompat features: extref, skinny-metadata, no-holes
Runtime features: free-space-tree
Checksum: crc32c
Number of devices: 1
Devices:
ID SIZE PATH
1 115.00GiB /dev/mapper/solrvg1-solrdata1
root@asrock-new:/home/ishan# mkdir /mnt/solrdata1
root@asrock-new:/home/ishan# mount /dev/mapper/solrvg1-solrdata1 /mnt/solrdata1
root@asrock-new:/home/ishan# df -h
Filesystem                     Size  Used Avail Use% Mounted on
udev                           7.8G     0  7.8G   0% /dev
tmpfs                          1.6G  1.9M  1.6G   1% /run
/dev/sda1                      182G  139G   35G  81% /
tmpfs                          7.8G  232K  7.8G   1% /dev/shm
tmpfs                          5.0M   12K  5.0M   1% /run/lock
tmpfs                          7.8G  124K  7.8G   1% /tmp
/dev/sda3                      2.0G  5.9M  2.0G   1% /boot/efi
tmpfs                          1.6G   92K  1.6G   1% /run/user/1000
/dev/mapper/solrvg1-solrdata1  115G  3.8M  113G   1% /mnt/solrdata1
The mount point is now ready to be used as Solr's data directory.
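To have the mount survive reboots, an entry can be added to /etc/fstab. A sketch (the device-mapper path matches the mount command above; the options shown are ordinary defaults):

```
# /etc/fstab entry (one line) — mount the cached LV at boot
/dev/mapper/solrvg1-solrdata1  /mnt/solrdata1  btrfs  defaults  0  0
```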
NOTE: Btrfs supports transparent compression; the mount command above can be extended with a compression option. Reference: https://fedoramagazine.org/working-with-btrfs-compression/
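For example, a remount with zstd compression enabled might look like the sketch below (zstd level 3 is the common default; the block is guarded so it is a no-op on machines that do not have the cached LV):

```shell
# Remount the cached LV with transparent zstd compression
DEV=/dev/mapper/solrvg1-solrdata1
MNT=/mnt/solrdata1
if [ -e "$DEV" ]; then
    umount "$MNT"
    mount -o compress=zstd:3 "$DEV" "$MNT"   # compress new writes with zstd level 3
else
    echo "$DEV not present; skipping remount"
fi
```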
Here are some commands to inspect the cache, the volume group, and the logical volumes:
Cache usage and write mode:
root@asrock-new:/home/ishan# lvs -o+cache_mode solrvg1/solrdata1
LV        VG      Attr       LSize   Pool         Origin            Data%  Meta%  Move Log Cpy%Sync Convert CacheMode
solrdata1 solrvg1 Cwi-aoC--- 115.00g [cac0_cpool] [solrdata1_corig] 0.01   6.32            0.00             writethrough
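The CacheMode column shows writethrough, which is LVM's default: every write lands on both the SSD and the hard disk before it is acknowledged. Switching to writeback makes writes faster, at the cost that dirty blocks live only on the SSD until flushed, so an SSD failure can lose recent writes. A guarded sketch of the switch:

```shell
# Switch the cache from writethrough (the default) to writeback
LV=solrvg1/solrdata1
if lvs "$LV" >/dev/null 2>&1; then
    lvchange --cachemode writeback "$LV"
    lvs -o+cache_mode "$LV"   # confirm the new mode
else
    echo "$LV not present; would run: lvchange --cachemode writeback $LV"
fi
```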
LV display:
root@asrock-new:/home/ishan# lvdisplay
--- Logical volume ---
LV Path /dev/solrvg1/solrdata1
LV Name solrdata1
VG Name solrvg1
LV UUID zH63L3-G3sa-vea5-JQXg-TpY7-6BDX-H6I0mN
LV Write Access read/write
LV Creation host, time asrock-new, 2023-07-07 12:44:17 +0530
LV Cache pool name cac0_cpool
LV Cache origin name solrdata1_corig
LV Status available
# open 1
LV Size 115.00 GiB
Cache used blocks 0.01%
Cache metadata blocks 6.32%
Cache dirty blocks 0.00%
Cache read hits/misses 253 / 65
Cache wrt hits/misses 690 / 564
Cache demotions 0
Cache promotions 77
Current LE 29440
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
PV display:
root@asrock-new:/home/ishan# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name solrvg1
PV Size 116.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 29695
Free PE 230
Allocated PE 29465
PV UUID VgW253-Kt1M-pYWO-7BFA-TM4V-J7lx-sN2Og2
--- Physical volume ---
PV Name /dev/nvme0n1p1
VG Name solrvg1
PV Size 50.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 12799
Free PE 230
Allocated PE 12569
PV UUID ZbmCxa-lttC-22nG-1lpn-1S4i-WItJ-y1cOMS
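If the setup ever needs to be undone (for example, to redo the steps with different sizes), the cache can be detached and the volumes removed. A hedged sketch, guarded so it only acts when the volume group actually exists:

```shell
# Teardown sketch: flush and detach the cache, then remove LVs, VG, and PV labels
VGGROUP=solrvg1
LVNAME=solrdata1
if vgs "$VGGROUP" >/dev/null 2>&1; then
    umount "/mnt/$LVNAME" 2>/dev/null || true
    lvconvert --splitcache "$VGGROUP/$LVNAME"   # flushes dirty blocks, detaches the cache pool
    lvremove -y "$VGGROUP"                      # removes all LVs in the volume group
    vgremove -y "$VGGROUP"
    pvremove /dev/sdb1 /dev/nvme0n1p1
else
    echo "volume group $VGGROUP not found; nothing to tear down"
fi
```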