Gluster Cheat Sheet
Brick -> the basic unit of storage: a directory on a server in the trusted storage pool.
Volume -> a logical collection of bricks.
Cluster -> a group of linked computers working together as a single system.
Distributed File System -> a filesystem in which data is spread across multiple storage nodes and accessed by clients over a network.
Client -> a machine that mounts the volume.
Server -> a machine that hosts the actual file system in which the data is stored.
Replicate -> making multiple copies of data to achieve high redundancy.
FUSE -> a loadable kernel module that lets non-privileged users create their own file systems without editing kernel code.
glusterd -> the daemon that runs on all servers in the trusted storage pool.
RAID -> Redundant Array of Inexpensive Disks, a technology that provides increased storage reliability through redundancy.
TCP ports 111, 24007, 24008 on all Gluster servers
TCP ports 24009 to (24009 + number of bricks across all volumes) on all Gluster servers
e.g. TCP ports 24009 to 24014 for a deployment with 5 bricks
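A minimal sketch of opening these ports with iptables, assuming a deployment with 10 bricks (adjust the upper bound to 24009 + your brick count):

sudo iptables -I INPUT -p tcp --dport 111 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 24009:24019 -j ACCEPT # brick ports for 10 bricks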
glusterfs -V -> Check the version of installed glusterfs
gluster -> Gluster Console Manager in interactive mode
sudo vi /etc/hosts -> modify the /etc/hosts file if DNS is not available
192.168.13.16 gluster1.storage.local gluster1
192.168.13.17 gluster2.storage.local gluster2
192.168.13.20 client.storage.local client
gluster peer status -> Verify the status of the trusted storage pool
gluster peer probe gluster2-server -> Add servers to the trusted storage pool
gluster peer detach gluster2-server -> Remove a server in storage pool
gluster pool list -> List the storage pool.
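A minimal session sketch of building the pool, run from gluster1 (hostnames assume the /etc/hosts entries above):

gluster peer probe gluster2.storage.local # add gluster2 to the trusted pool
gluster peer status # expect "Number of Peers: 1" once the probe succeeds
gluster pool list # UUID, hostname and connection state of every node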
mkdir -p /data/gluster/gvol0 -> Create a brick (directory) called “gvol0” in the mounted file system on both nodes
gluster volume create gvol0 replica 2 gluster1.storage.local:/data/gluster/gvol0 gluster2.storage.local:/data/gluster/gvol0 -> Create the volume named “gvol0” with two replicas
gluster volume start gvol0 -> Start volume
gluster volume info -> Show the volume information
gluster volume info gvol0 -> Show the volume information of volume gvol0
gluster volume start test-volume -> Start volume
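The cheat sheet covers create/start; the reverse path uses the standard stop and delete subcommands, shown here as a sketch for gvol0 (deleting a volume leaves the brick data on disk):

gluster volume stop gvol0 # stop the volume before deleting it
gluster volume delete gvol0 # remove the volume definition; brick directories remain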
mkfs.ext4 /dev/sdb1 -> Format partition
mkdir -p /data/gluster -> Create directory called /data/gluster
mount /dev/sdb1 /data/gluster -> Mount the disk on a directory called /data/gluster
mount -t glusterfs gluster1-server:/test-volume /mnt/glusterfs -> Mount a Gluster volume on all Gluster servers
cat /proc/mounts | grep glusterfs -> Verify the mounted Gluster volumes
#/etc/fstab
storage.example.lan:/test-volume /mnt glusterfs defaults,_netdev 0 0
gluster1-server:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0 -> Edit the /etc/fstab file on all Gluster servers
echo "/dev/sdb1 /data/gluster ext4 defaults 0 0" | sudo tee --append /etc/fstab ->Add an entry to /etc/fstab
sudo iptables -I INPUT -p all -s <ip-address> -j ACCEPT -> Configure the firewall to allow all connections within a cluster
Red Hat-based systems
chkconfig glusterd on -> Start the glusterd daemon every time the system boots
Debian-based systems
sudo service glusterfs-server start -> Start the glusterfs-server service on all Gluster nodes
Clients
dmesg | grep -i fuse -> Verify the FUSE kernel module is loaded
mkdir -p /mnt/glusterfs -> Create a directory to mount the GlusterFS filesystem
mount -t glusterfs gluster1.storage.local:/gvol0 /mnt/glusterfs -> Mount the GlusterFS filesystem to /mnt/glusterfs
df -hP /mnt/glusterfs -> Verify the mounted GlusterFS filesystem
gluster1.storage.local:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev 0 0 -> Add to /etc/fstab for automatically mounting
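To confirm the fstab entry works without a reboot, a quick sketch:

sudo umount /mnt/glusterfs # drop the manual mount first
sudo mount -a # mount everything listed in /etc/fstab
df -hP /mnt/glusterfs # the volume should be mounted again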
Benchmarking && Testing
Servers
mount -t glusterfs gluster1.storage.local:/gvol0 /mnt -> Mount GlusterFS volume on the same storage node
/mnt directory -> Data inside the /mnt directory of both nodes will always be the same (replication).
ls -l /mnt/ -> Verify the created files
poweroff -> Shut down a gluster node to test HA on the client
Clients
touch /mnt/glusterfs/file1 -> Create some files on the mounted filesystem
ls -l /mnt/glusterfs/ -> Verify the created files
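Putting these pieces together, a sketch of a failover test, assuming the replica-2 gvol0 volume above:

touch /mnt/glusterfs/ha-test # client: write a file through the mount
poweroff # gluster1: take one replica down
ls -l /mnt/glusterfs/ # client: after network.ping-timeout expires, I/O continues via gluster2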
Tuning
gluster volume set gvol0 network.ping-timeout "5" -> set the network ping timeout for the volume to 5 seconds (default 42); volume options apply cluster-wide, so setting it once on any node is enough
gluster volume get gvol0 network.ping-timeout -> Verify network ping timeout
network.ping-timeout (default 42 secs) -> The duration the client waits to check whether the server is responsive. When a ping timeout occurs, there is a network disconnect between client and server, and all resources held by the server on behalf of the client are cleaned up. On reconnection, all resources must be re-acquired before the client can resume operations on the server, including re-acquiring locks and updating the lock tables. This reconnect is a very expensive operation and should be avoided.
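To return to the default, the gluster CLI has a reset subcommand (sketch):

gluster volume reset gvol0 network.ping-timeout # restore the 42-second default
gluster volume get gvol0 network.ping-timeout # verify the value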
RDMA
The glusterd process listens on both TCP and RDMA if an RDMA device is found. The port used for RDMA is 24008.
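The transport is chosen at volume-creation time; a sketch of a volume that accepts both TCP and RDMA clients, reusing the bricks from above:

gluster volume create gvol0 replica 2 transport tcp,rdma gluster1.storage.local:/data/gluster/gvol0 gluster2.storage.local:/data/gluster/gvol0
mount -t glusterfs -o transport=rdma gluster1.storage.local:/gvol0 /mnt/glusterfs # client side, over RDMA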
Troubleshooting
sudo glusterd --debug -> Run glusterd in the foreground with debug logging
sudo netstat -ntlp | grep gluster -> List the TCP ports Gluster processes are listening on
netstat -tlpn | grep 24007 -> Check that the glusterd management port is open
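The log files under /var/log/glusterfs/ are usually the fastest way in (sketch; the client log is named after the mount point, with slashes turned into dashes):

sudo tail -f /var/log/glusterfs/glusterd.log # management daemon log
sudo tail -f /var/log/glusterfs/mnt-glusterfs.log # client log for the /mnt/glusterfs mount
gluster volume status gvol0 # per-brick process and port overview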
#https://docs.gluster.org/en/v3/Administrator%20Guide/Setting%20Up%20Clients/
#https://docs.gluster.org/en/v3/Install-Guide/Install/
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:gluster/glusterfs-3.13 -y
sudo apt-get update
sudo apt-get install glusterfs-server=3.13.2-1build1
sudo service glusterfs-server start
sudo service glusterd status
sudo service glusterd restart
#Gluster 3.10 (Stable)
#https://www.gluster.org/install/
sudo systemctl disable ufw
sudo systemctl stop ufw
sudo systemctl status ufw
hostnamectl set-hostname gluster2
sudo vi /etc/hosts
ping -c2 gluster1
ping -c2 gluster2
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:gluster/glusterfs-3.10
sudo apt-get update -y
sudo apt-get install glusterfs-server -y
glusterfs --version
gluster peer probe gluster1
sudo systemctl start glusterd
sudo systemctl enable glusterd
sudo gluster volume create gvol0 replica 2 gluster1.example.lan:/data/gluster/gvol0 gluster2.example.lan:/data/gluster/gvol0
sudo gluster volume start gvol0
sudo gluster volume info gvol0
sudo gluster volume set gvol0 network.ping-timeout 3
# glusterfs client
sudo apt-get install -y glusterfs-client
mkdir -p /mnt/glusterfs
mount -t glusterfs gluster1.example.lan:/gvol0 /mnt/glusterfs
echo 'gluster1.example.lan:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev 0 0' >> /etc/fstab
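A quick end-to-end check of the walkthrough, as a sketch: write through the client mount and confirm the file landed on both bricks:

touch /mnt/glusterfs/replication-test # on the client
ls -l /data/gluster/gvol0/ # on gluster1: the file should be present
ls -l /data/gluster/gvol0/ # on gluster2: same file (replica 2)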