---
title: "Using LVM; My Setup"
date: August 11, 2017
author: Kyle Barron
---

TODO: Create a personal_backup folder and logical volume on bulk_hdd, for non-current tasks that I don't want to keep on my NVMe drive.

My Personal LVM Setup

With my current setup, as of August 11, 2017, I have three drives. First, an extremely fast 500 GB Samsung 960 EVO M.2 SSD: the manufacturer rates it at up to 3.5 GB/s sequential reads, and I've measured about 2,800 MB/s in testing. Second, a slower 500 GB Samsung 850 EVO SATA SSD, with read/write speeds of roughly 500 MB/s. Lastly, a 4 TB HDD with read speeds of about 200 MB/s (faster than I had expected, actually).

With this setup, I currently expect to use the fast drive to hold current data projects, the medium speed drive to hold system files and personal files in general, and the large, slow drive to hold large datasets.

At this point, I've installed Ubuntu 16.04 LTS on the mid-speed drive, and am about to set up the fast and slow drives.

First, make sure you know the identifiers of each drive. You don't want to accidentally overwrite something.

sudo fdisk -l
Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 06E76C87-CCE1-4B11-9028-BF8C4637D4ED


Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 91FC42F1-3BD4-48DB-92EC-420C95599D4A

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   1050623   1048576   512M EFI System
/dev/sda2  1050624   2050047    999424   488M Linux filesystem
/dev/sda3  2050048 976771071 974721024 464.8G Linux LVM


Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 80F6EC57-7C07-49A2-BF79-4193176E41D9

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 7814035455 7814033408  3.7T Linux filesystem


Disk /dev/mapper/ubuntu--vg-root: 432.9 GiB, 464758243328 bytes, 907730944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/ubuntu--vg-swap_1: 32 GiB, 34296823808 bytes, 66985984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/cryptswap1: 32 GiB, 34296299520 bytes, 66984960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The disks are /dev/nvme0n1, /dev/sda, and /dev/sdb for the fast, medium, and slow drives, respectively. I can see that /dev/sda holds my boot and root partitions (/dev/sda1, /dev/sda2, and /dev/sda3), so I won't touch it at all; doing so would delete my files and likely render the system unbootable. The fast and slow drives hold no data yet, so it's fine to overwrite them.

I first need to create physical volumes on the fast and slow drives, in order to layer other LVM components on top.

sudo pvcreate /dev/nvme0n1
sudo pvcreate /dev/sdb

When I tried to do the latter, I got an error:

> sudo pvcreate /dev/sdb
  Device /dev/sdb not found (or ignored by filtering)

To fix this, I completely wiped the /dev/sdb drive (don't copy-paste this line — it destroys everything on the target device):

sudo dd if=/dev/zero of=/dev/sdb bs=1M status=progress

(Note: status=progress is a huge help for tracking how far into the 4 TB drive you've erased.)

Edit: You can probably just unmount the drive and restart instead.
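A gentler alternative I'd suggest (a sketch, not something I tested here): wipefs clears only the filesystem and partition-table signatures rather than zeroing the whole drive, which takes seconds instead of hours.

```
# Remove all filesystem/partition-table signatures from the drive.
# Much faster than zeroing 4 TB; it does NOT erase the data itself.
sudo wipefs -a /dev/sdb
```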

Check that the physical volumes were created by running sudo pvdisplay.

Making Volume Groups

Now we can make volume groups. It's possible to have a single volume group span multiple drives. However, this isn't ideal for me since I have three different speed drives and I want to have more control over what goes where. Thus, I'm going to create a volume group for each drive. If I ever have multiple drives at the same speed, I'd create a single volume group over both of them.
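For illustration, if I later added a second NVMe drive, a single volume group could span both — the device name /dev/nvme1n1 here is hypothetical:

```
# One VG backed by two physical volumes; logical volumes can then
# draw extents from either drive.
sudo vgcreate fast_pool /dev/nvme0n1 /dev/nvme1n1
```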

To check your current volume groups, run sudo vgdisplay.

> sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               464.78 GiB
  PE Size               4.00 MiB
  Total PE              118984
  Alloc PE / Size       118984 / 464.78 GiB
  Free  PE / Size       0 / 0   
  VG UUID               ob2nZ2-isBh-agBN-Jl89-ozB8-muMZ-1Jfjs2

I already have a volume group on my boot drive that was created during the Ubuntu installation (created when you check "Use LVM"). I need to create two more for my other drives. I'll name them bulk_hdd for the slow, hard disk drive, and fast_nvme for the fast NVME drive.

sudo vgcreate "bulk_hdd" /dev/sdb
sudo vgcreate "fast_nvme" /dev/nvme0n1

Now run sudo vgdisplay to see the new Volume Groups.

> sudo vgdisplay
  --- Volume group ---
  VG Name               bulk_hdd
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       0 / 0   
  Free  PE / Size       953861 / 3.64 TiB
  VG UUID               ylYDlc-ejpV-zJ6x-auuD-Smr3-nsKX-qVqj4w
   
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               464.78 GiB
  PE Size               4.00 MiB
  Total PE              118984
  Alloc PE / Size       118984 / 464.78 GiB
  Free  PE / Size       0 / 0   
  VG UUID               ob2nZ2-isBh-agBN-Jl89-ozB8-muMZ-1Jfjs2
   
  --- Volume group ---
  VG Name               fast_nvme
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               465.76 GiB
  PE Size               4.00 MiB
  Total PE              119234
  Alloc PE / Size       0 / 0   
  Free  PE / Size       119234 / 465.76 GiB
  VG UUID               W1eRnn-Usp4-qLfy-ZhDl-UdDj-3JTl-LCILV5

Creating Logical Volumes

Now on to creating the logical volumes. These are analogous to partitions on a traditional disk.

LVM makes enlarging a logical volume easy. Shrinking one doesn't look too difficult either, but it requires more care than growing because of the potential for data loss. For that reason, I don't feel the need to allocate the whole drive up front.
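Growing a volume later is a one-liner. A sketch (the -r flag asks lvextend to also resize the ext4 filesystem in the same step, so no separate resize2fs is needed):

```
# Add 50 GiB to the projects LV and grow its filesystem to match.
sudo lvextend -r -L +50G /dev/fast_nvme/projects
```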

I'm going to create the following logical volumes:

  • 300GB on the fast drive for project file storage
  • 2TB on the slow drive for data storage

sudo lvcreate -L 300G -n projects fast_nvme
sudo lvcreate -L 2T   -n data     bulk_hdd
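A quick sanity check on the sizes: with the 4 MiB extent size that vgdisplay reported, a 300 GiB volume should occupy 76,800 logical extents and a 2 TiB volume 524,288, which matches the "Current LE" figures in the lvdisplay output.

```shell
# Extents are 4 MiB each (the PE Size reported by vgdisplay).
pe_mib=4

# 300 GiB -> MiB -> extents
projects_le=$(( 300 * 1024 / pe_mib ))
echo "projects: ${projects_le} extents"   # 76800

# 2 TiB -> MiB -> extents
data_le=$(( 2 * 1024 * 1024 / pe_mib ))
echo "data: ${data_le} extents"           # 524288
```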

You can now run sudo lvdisplay to check that those have been created.

> sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/bulk_hdd/data
  LV Name                data
  VG Name                bulk_hdd
  LV UUID                HdPG5D-ldvV-8Vaz-Vtyt-3DUV-HVX3-N20gqA
  LV Write Access        read/write
  LV Creation host, time desktop, 2017-08-11 14:43:49 -0400
  LV Status              available
  # open                 0
  LV Size                2.00 TiB
  Current LE             524288
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/root
  LV Name                root
  VG Name                ubuntu-vg
  LV UUID                0iRqXs-H4WP-wQc3-StcP-Luvy-Fe8X-xYYIRz
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2017-08-09 10:27:17 -0400
  LV Status              available
  # open                 1
  LV Size                432.84 GiB
  Current LE             110807
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/swap_1
  LV Name                swap_1
  VG Name                ubuntu-vg
  LV UUID                bUckKQ-ay8V-re4a-y67s-JAhf-vjVm-gmZQIM
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2017-08-09 10:27:17 -0400
  LV Status              available
  # open                 1
  LV Size                31.94 GiB
  Current LE             8177
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/fast_nvme/projects
  LV Name                projects
  VG Name                fast_nvme
  LV UUID                Vom1Hu-tfAv-jbN4-3uop-5SGn-upFj-g3FLiM
  LV Write Access        read/write
  LV Creation host, time desktop, 2017-08-11 14:43:07 -0400
  LV Status              available
  # open                 0
  LV Size                300.00 GiB
  Current LE             76800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

Format and Mount

Format with ext4:

sudo mkfs.ext4 /dev/mapper/fast_nvme-projects
sudo mkfs.ext4 /dev/mapper/bulk_hdd-data
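One optional tweak I'd consider: ext4 reserves 5% of blocks for root by default, which is about 100 GB on the 2 TB volume. Since these are pure data volumes rather than the root filesystem, that can be reduced; a sketch:

```
# Reserve only 1% of blocks for root instead of the default 5%.
sudo tune2fs -m 1 /dev/mapper/bulk_hdd-data
```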

Create mount points:

mkdir -p ~/Documents/research/data
mkdir -p ~/Documents/research/personal

Then mount the logical volumes:

sudo mount /dev/mapper/fast_nvme-projects ~/Documents/research/personal
sudo mount /dev/mapper/bulk_hdd-data      ~/Documents/research/data
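Freshly formatted ext4 volumes are owned by root, so an extra step I'd suggest (adjust the user and group to your own) is to take ownership of the mount points:

```
# Make the mounted volumes writable by my user account.
sudo chown kyle:kyle ~/Documents/research/personal ~/Documents/research/data
```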

Make this permanent by editing /etc/fstab. It should look like this:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/ubuntu--vg-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda2 during installation
UUID=cac79909-2fde-4321-9fa5-f955a47b266e /boot           ext2    defaults        0       2
# /boot/efi was on /dev/sda1 during installation
UUID=2A8E-8CC4  /boot/efi       vfat    umask=0077      0       1
#/dev/mapper/ubuntu--vg-swap_1 none            swap    sw              0       0
/dev/mapper/cryptswap1 none swap sw 0 0
/dev/mapper/fast_nvme-projects /home/kyle/Documents/research/personal ext4 defaults,nofail 0 0
/dev/mapper/bulk_hdd-data /home/kyle/Documents/research/data ext4 defaults,nofail 0 0 
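The /dev/mapper names are stable for LVM volumes, but if you prefer the UUID= form the fstab comments mention, blkid will print the identifiers to use:

```
# Print each filesystem's UUID; these can replace the /dev/mapper
# paths in /etc/fstab if preferred.
sudo blkid /dev/mapper/fast_nvme-projects /dev/mapper/bulk_hdd-data
```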