Install & Configure GlusterFS

Assumes you installed Debian, Docker, etc. as per the list here

Assumptions

  • I will have one gluster volume I will call gluster-vol1
  • I will install glusterfs on my docker nodes (best practice is to have separate dedicated VMs)
  • I have 3 nodes in my cluster (docker01, docker02, docker03)
  • I will have one brick per node (brick1, brick2, brick3)
  • the volume will be dispersed - more on volume types

Prepare Disks

Prepare disks on hypervisor

Add a VHD to each of your docker host VMs - for example a 100GB volume. If using Hyper-V this can be done without rebooting (add it as a new SCSI VHD and the VM OS will detect it instantly).

Partition, format and mount the underlying storage (be careful)

Perform these steps on every node.

sudo lsblk to confirm the device node (should be sdb but could be different if you diverged from any of the gists)

sudo fdisk /dev/sdb (then g, then n and accept defaults and lastly w to write out changes)

sudo mkfs.xfs /dev/sdb1 (this formats the new partition with XFS)

sudo mkdir /mnt/glusterfs (this is where you will mount the new partition)

sudo mount /dev/sdb1 /mnt/glusterfs (this mounts the partition; it is used in the next steps, so don't skip it).
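If you would rather script the partitioning than answer fdisk's prompts, the whole disk prep can be done non-interactively. This is only a sketch and it assumes the new, empty disk really is /dev/sdb - confirm with lsblk first.

# sketch: non-interactive equivalent of the fdisk/mkfs/mount steps above
# ASSUMES /dev/sdb is the new empty disk - verify with lsblk before running
sudo parted --script /dev/sdb mklabel gpt mkpart primary xfs 0% 100%   # GPT label plus one partition
sudo mkfs.xfs /dev/sdb1                                                # format the new partition with XFS
sudo mkdir -p /mnt/glusterfs                                           # where the brick storage will be mounted
sudo mount /dev/sdb1 /mnt/glusterfs                                    # mount it for the following steps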

on docker01

sudo mkdir /mnt/glusterfs/vol1-brick1 this creates the folder where brick1 will be stored

sudo ls -al /dev/disk/by-uuid/ this gets you the UUID for the partition you created earlier

edit fstab (be careful) with sudo nano /etc/fstab and add the following line as the last line in fstab, using the UUID you got in the last step, not the one here: UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0

sudo findmnt --verify you should see no errors related to /dev/sdb or /dev/sdb1 (you may see errors about the CD-ROM; those can be ignored). If you get ANY errors, do not proceed until you have checked your previous work.
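The UUID lookup and fstab edit can also be done in one pass with blkid and tee. A minimal sketch, assuming the brick partition is /dev/sdb1 as above; on docker02 and docker03 only the brick folder name (vol1-brick2 / vol1-brick3) changes.

# sketch for docker01 - ASSUMES the brick partition is /dev/sdb1
sudo mkdir /mnt/glusterfs/vol1-brick1                                        # folder where brick1 will be stored
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)                                # read the partition UUID
echo "UUID=$UUID /mnt/glusterfs xfs defaults 0 0" | sudo tee -a /etc/fstab   # append the mount to fstab
sudo findmnt --verify                                                        # sanity-check fstab before proceeding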

on docker02

sudo mkdir /mnt/glusterfs/vol1-brick2 this creates the folder where brick2 will be stored

sudo ls -al /dev/disk/by-uuid/ this gets you the UUID for the partition you created earlier

edit fstab (be careful) with sudo nano /etc/fstab and add the following line as the last line in fstab, using the UUID you got in the last step, not the one here: UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0

sudo findmnt --verify you should see no errors related to /dev/sdb or /dev/sdb1 (you may see errors about the CD-ROM; those can be ignored). If you get ANY errors, do not proceed until you have checked your previous work.

on docker03

sudo mkdir /mnt/glusterfs/vol1-brick3 this creates the folder where brick3 will be stored

sudo ls -al /dev/disk/by-uuid/ this gets you the UUID for the partition you created earlier

edit fstab (be careful) with sudo nano /etc/fstab and add the following line as the last line in fstab, using the UUID you got in the last step, not the one here: UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0

sudo findmnt --verify you should see no errors related to /dev/sdb or /dev/sdb1 (you may see errors about the CD-ROM; those can be ignored). If you get ANY errors, do not proceed until you have checked your previous work.

Install & Configure GlusterFS

On all nodes:

sudo apt-get install glusterfs-server
sudo systemctl start glusterd
sudo systemctl enable glusterd
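Optionally, confirm the daemon is up on each node before creating the volume (a quick sanity check, not part of the original steps):

gluster --version                     # confirms the CLI and packages are installed
sudo systemctl is-active glusterd     # should print "active"
sudo systemctl is-enabled glusterd    # should print "enabled"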

Create the glusterfs volume

On the master node (docker01) - note you must run sudo -s first rather than prefixing each command with sudo

sudo -s
gluster peer probe docker02.yourdomain.com; gluster peer probe docker03.yourdomain.com;
gluster pool list
gluster volume create gluster-vol1 disperse 3 redundancy 1 docker01.yourdomain.com:/mnt/glusterfs/vol1-brick1 docker02.yourdomain.com:/mnt/glusterfs/vol1-brick2 docker03.yourdomain.com:/mnt/glusterfs/vol1-brick3
gluster volume start gluster-vol1
gluster volume info gluster-vol1
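Two optional checks after creating the volume (still inside the sudo -s shell); both are standard gluster CLI commands:

gluster peer status                   # both probed peers should show "Peer in Cluster (Connected)"
gluster volume status gluster-vol1    # all three bricks should be listed as online

Note that with disperse 3 redundancy 1 the usable capacity is the size of two bricks, so three 100GB bricks give roughly 200GB of usable space.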

on all docker hosts

sudo mkdir /mnt/gluster-vol1 this makes the mount point

sudo nano /etc/fstab edit the fstab

add the following as the last line in the fstab: localhost:/gluster-vol1 /mnt/gluster-vol1 glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0 then exit and save

sudo mount -a should mount with no errors

sudo df /mnt/gluster-vol1/ should return details about the gluster file system (size etc)

To test, create a file using touch: sudo touch /mnt/gluster-vol1/hello-world-txt and then check for that file in the same path on the other 2 nodes. If you did everything correctly you now have a redundant, distributed file system!
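If the nodes can SSH to each other, the cross-node check can be scripted from one host. A sketch only - it assumes the hostnames below resolve and that your user has SSH access to them:

# sketch: confirm the test file is visible on every node (hostnames are assumptions)
for host in docker01 docker02 docker03; do
    ssh "$host" ls -l /mnt/gluster-vol1/hello-world-txt
done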

Note: findmnt --verify can't be used for this mount as it doesn't support checking glusterfs

@scyto (Author) commented Aug 7, 2023

added ,noauto,x-systemd.automount guidance to the fstab section to fix mounting sometimes not working correctly because things were not set up when the mount needed to start. This may require more x-systemd mount directives if it doesn't fix the issue (but it seems to have fixed it so far).
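If you want to confirm the automount unit that systemd generated from that fstab line is active, listing the automount units is a quick check (plain systemctl, nothing extra assumed):

systemctl list-units --type=automount   # the /mnt/gluster-vol1 automount should appear in the list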

@scyto (Author) commented Sep 5, 2023

Looking at this a few years later, some things I don't like:

  1. why didn't I create LVM volumes to put the gluster bricks on (it is needed for gluster snapshots)
  2. why the heck did I store the bricks in /mnt - that seems silly, they should have been in /srv or somesuch!
  3. I know why I mounted the volume - I needed a consistent access path on each node as containers moved - but why did I do that via a mount? Couldn't I have done it with a symlink to, say, the root or off of some other location? I am really not sure and need to think about this one more - especially as mount issues from other devices seem to knock gluster offline... plus now that I use the gluster volume plugin this changes, does it not? Oh, also it is not best practice to access the bricks directly - this makes sense; unless you know everything is converged you don't want to touch those files....

@OlivierMary commented:

Hi,

noauto is ignored by mount -a, check with mount -av

mount -av
/                        : ignored
/boot                    : already mounted
/boot/efi                : already mounted
/glusterfs               : already mounted
/mnt/gluster-vol1        : ignored

I used systemd instead, with /etc/systemd/system/srv.mount:

[Unit]
Description=GlusterFS Mount
After=glusterd.service
Requires=glusterd.service

[Mount]
What=localhost:/gluster-vol1
Where=/srv
Type=glusterfs
Options=defaults,_netdev

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload 
sudo systemctl enable srv.mount 
sudo systemctl start srv.mount
sudo systemctl status srv.mount

and I deleted the line that was previously added to /etc/fstab
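Note that systemd requires a mount unit's file name to match its mount path, which is why the unit above is srv.mount for /srv. If you wanted to keep the original /mnt/gluster-vol1 mount point instead, systemd-escape prints the name the unit file would need:

systemd-escape --path --suffix=mount /mnt/gluster-vol1   # prints the required unit file name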

@OlivierMary commented:

and for more security I added:

sudo gluster volume set gluster-vol1 auth.allow <node-1-ip>,<node-2-ip>,<node-3-ip>,127.0.*.*

I still have to check how to set up SSL/TLS correctly.
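For reference, the value that took effect can be read back with the standard volume get command (using the volume name from the guide above):

sudo gluster volume get gluster-vol1 auth.allow   # shows the current allow list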

@scyto (Author) commented Dec 11, 2023

nice!

can't say I needed to do that as the service always starts for me. The only issue I have had is that if DNS isn't present when gluster and docker start it seems to get freaky - haven't decided if I will fix that or offboard my DNS from the cluster to actual Pis yet, but maybe what you did would solve that issue in some way....

I haven't got to SSL/TLS - using certbot on every node is on my todo list. :-)

@OlivierMary commented:

if your nodes are on a public network and you don't set auth.allow, anyone can connect to your glusterfs without any authentication ;)
