What is GlusterFS? Read [here](https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Introduction/) :)
The main purpose of this documentation is to make it easy to set up GlusterFS, a network-attached storage system, across multiple servers without going through a complicated implementation.
This guide is meant for audiences with minimal knowledge of Linux filesystems and GlusterFS.
- https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
- https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
- http://www.n2ws.com/how-to-guides/how-to-create-an-lvm-volume-on-aws.html
- https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
All of the following was tested on CentOS 7 using Amazon EC2.
```
yum -y install lvm2
yum -y install centos-release-gluster
yum -y --enablerepo=centos-gluster*-test install glusterfs-server
service glusterd start
```
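CentOS 7 uses systemd, so you may also want to enable the daemon at boot. A small optional step, not part of the original walkthrough:

```bash
# Start glusterd on every boot, then confirm it is active right now
systemctl enable glusterd
systemctl status glusterd
```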
Create the physical volume using the `pvcreate` command.

```
pvcreate --dataalignment 1280K /dev/xvdb
```
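If you want to verify that the `--dataalignment` setting took effect (an optional check; `pe_start` reports where the data area begins on the PV):

```bash
# pe_start should come out aligned to the 1280K boundary requested above
pvs -o +pe_start /dev/xvdb
```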
Check physical volume creation using `pvdisplay`.

```
  "/dev/xvdb" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/xvdb
  VG Name
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               dKeOnr-dCNp-Iml5-wrpK-SDDw-HLb3-I1tQd2
```
Create the volume group.

```
vgcreate --physicalextentsize 128K gfs-vg /dev/xvdb
```
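A quick way to confirm the volume group and its extent size (an optional check, not in the original output):

```bash
# "PE Size" should read 128.00 KiB and "VG Size" roughly 10 GiB
vgdisplay gfs-vg
```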
Create the logical volume that will serve as the thin pool's data device. The following command consumes 95% of the `gfs-vg` volume group, i.e. most of the `/dev/xvdb` device.

```
lvcreate --wipesignatures y -l 95%VG -n gfs-thinpool gfs-vg
```
Create the logical volume that will serve as the thin pool's metadata device, consuming 1% of the volume group.

```
lvcreate --wipesignatures y -l 1%VG -n gfs-thinpool-meta gfs-vg
```
Check logical volume creation using `lvdisplay`.

```
  --- Logical volume ---
  LV Path                /dev/gfs-vg/gfs-thinpool
  LV Name                gfs-thinpool
  VG Name                gfs-vg
  LV UUID                Oz35Vo-I8Xe-GFHL-a5MA-2zpR-ZnTB-7iNfPp
  LV Write Access        read/write
  LV Creation host, time ip-172-31-18-232.us-west-2.compute.internal, 2016-08-07 09:38:14 +0000
  LV Status              available
  # open                 0
  LV Size                9.50 GiB
  Current LE             77814
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/gfs-vg/gfs-thinpool-meta
  LV Name                gfs-thinpool-meta
  VG Name                gfs-vg
  LV UUID                a4Te3N-JRyd-VMDd-n7VO-cMyy-TBrf-Yv8nCz
  LV Write Access        read/write
  LV Creation host, time ip-172-31-18-232.us-west-2.compute.internal, 2016-08-07 09:38:25 +0000
  LV Status              available
  # open                 0
  LV Size                102.38 MiB
  Current LE             819
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
```
Create a thin pool from the data LV and the metadata LV.

```
lvconvert -y --zero n --chunksize 1280K --thinpool gfs-vg/gfs-thinpool --poolmetadata gfs-vg/gfs-thinpool-meta
```
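After the conversion, the data and metadata LVs become hidden components of the pool. You can see them with `lvs -a` (an optional check):

```bash
# The pool appears as gfs-thinpool; its components show in brackets,
# e.g. [gfs-thinpool_tdata] and [gfs-thinpool_tmeta]
lvs -a gfs-vg
```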
Check logical volume creation using `lvdisplay`.

```
  --- Logical volume ---
  LV Name                gfs-thinpool
  VG Name                gfs-vg
  LV UUID                E4k9ut-gnk8-Q0nk-Gemc-RfaJ-4jJn-mFFt9v
  LV Write Access        read/write
  LV Creation host, time ip-172-31-18-231.us-west-2.compute.internal, 2016-08-07 10:22:10 +0000
  LV Pool metadata       gfs-thinpool_tmeta
  LV Pool data           gfs-thinpool_tdata
  LV Status              available
  # open                 0
  LV Size                9.50 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.04%
  Current LE             77814
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
```
Create a thinly provisioned volume from the previously created pool. Replace `1G` with your desired initial LV size.

```
lvcreate -V 1G -T gfs-vg/gfs-thinpool -n gfs-lv
```
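Because thin volumes can be over-provisioned, it's worth keeping an eye on how much real space the pool is using (an optional check):

```bash
# Data% on gfs-thinpool shows actual pool usage backing the thin LVs
lvs gfs-vg
```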
Format the logical volume device, which will later serve as the so-called Gluster "brick".

```
mkfs.ext4 /dev/gfs-vg/gfs-lv
```
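Note that the Red Hat guide linked at the top recommends XFS with a 512-byte inode size for Gluster bricks; ext4 works fine for this walkthrough, but the XFS equivalent would be roughly:

```bash
# Alternative brick filesystem per the Red Hat guide: XFS with 512-byte inodes
mkfs.xfs -i size=512 /dev/gfs-vg/gfs-lv
```

If you choose XFS, use `xfs_growfs` instead of `resize2fs` in the resize step near the end, and adjust the fstab entry accordingly.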
Mount it on a local directory.

```
mkdir -p /gfs
mount /dev/gfs-vg/gfs-lv /gfs
```
```
# Observe
df -h

Filesystem                   Size  Used Avail Use% Mounted on
/dev/xvda1                    10G  1.4G  8.7G  14% /
devtmpfs                     902M     0  902M   0% /dev
tmpfs                        920M     0  920M   0% /dev/shm
tmpfs                        920M   17M  904M   2% /run
tmpfs                        920M     0  920M   0% /sys/fs/cgroup
tmpfs                        184M     0  184M   0% /run/user/1000
tmpfs                        184M     0  184M   0% /run/user/0
/dev/mapper/gfs--vg-gfs--lv  976M  2.6M  907M   1% /gfs
```
Add an entry in fstab to persist the configuration.

```
echo "/dev/gfs-vg/gfs-lv /gfs ext4 defaults 0 0" >> /etc/fstab
```

Reboot the server to ensure that the configuration persists. If by any chance you got the fstab entry wrong... good luck.
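A gentler check before rebooting (standard practice, not in the original steps): remount everything from fstab so syntax errors show up now instead of at boot:

```bash
# Unmount, then let fstab drive the remount; errors surface immediately
umount /gfs
mount -a
df -h /gfs
```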
This example uses the distributed Gluster volume type, where files are spread across all the bricks; each file is stored whole on exactly one brick.
- For a single node, run this from the `server1.com` host:

```
gluster volume create gfs-vol1 server1.com:/gfs/gfs-vol1
gluster volume start gfs-vol1
```
- For clustered nodes, run this from any of the servers. Make sure that you've completed [Step 1] on all the nodes. Note that the volume must be started here as well:

```
gluster peer probe server1.com
gluster peer probe server2.com
gluster volume create gfs-vol1 server1.com:/gfs/gfs-vol1 server2.com:/gfs/gfs-vol1
gluster volume start gfs-vol1
```
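To confirm the cluster and the volume came up (a quick sanity check, not part of the original walkthrough):

```bash
# Peers should show "State: Peer in Cluster (Connected)";
# the volume should report "Status: Started" with both bricks listed
gluster peer status
gluster volume info gfs-vol1
```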
Install Gluster Client
For CentOS/RHEL:

```
yum -y install centos-release-gluster
```
For Amazon Linux, enable the GlusterFS repo in `/etc/yum.repos.d`:
```
cat > /etc/yum.repos.d/glusterfs-epel.repo <<-'EOF'
[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/epel-6/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/pub.key

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/epel-6/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/pub.key

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/epel-6/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/pub.key
EOF
```

(The heredoc delimiter is quoted, `'EOF'`, so that `$basearch` reaches the repo file literally instead of being expanded by the shell.)
```
# Install GlusterFS
yum install -y glusterfs-fuse
```
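To confirm the client bits are in place before mounting (optional):

```bash
# Print the installed GlusterFS client version
glusterfs --version
```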
Mount the Gluster volume that we created on the GlusterFS server `server1.com` to the `/mnt` directory of any remote host.

```
mount -t glusterfs server1.com:/gfs-vol1 /mnt
```
Add an entry in fstab to persist the configuration (the `_netdev` option makes the mount wait for networking at boot).

```
echo "server1.com:/gfs-vol1 /mnt glusterfs defaults,_netdev 0 0" >> /etc/fstab
```
Fill the `/mnt` directory with some files.

```
for i in $(seq -w 1 100); do cp -rp /var/log/messages /mnt/copy-test-$i; done
```
From `server1.com`, observe the brick directory. (In a two-node distributed volume, expect each brick to hold roughly half of the files.)

```
ls -lrt /gfs/gfs-vol1/

-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-089
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-088
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-087
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-086
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-085
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-084
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-083
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-082
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-081
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-080
-rw-------. 2 root root 1574469 Sep 25 10:39 copy-test-079
```
Let's say the `/gfs` directory configured above with its 1GB limit is no longer enough; this is where we need to extend the logical volume's data capacity. Assuming the volume group has reached its limit, I'll need to attach another device volume, `/dev/xvdc`, to the server and extend the volume group.

```
pvcreate /dev/xvdc
vgextend gfs-vg /dev/xvdc
```
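Because `gfs-lv` is carved out of a thin pool, growing its virtual size alone does not add real capacity; the pool itself may also need the new space. A minimal sketch, assuming you want to hand the pool the extra gigabyte just added to the VG:

```bash
# Grow the thin pool's data device; without this the thin LV can run
# out of real blocks even after its own lvextend succeeds
lvextend -L +1G gfs-vg/gfs-thinpool
```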
Extend the logical volume by an additional 1GB.

```
lvextend -L +1G /dev/gfs-vg/gfs-lv
```

Output:

```
Size of logical volume gfs-vg/gfs-lv changed from 1.00 GiB (8192 extents) to 2.00 GiB (16384 extents).
Logical volume gfs-lv successfully resized.
```
Notice that the change is reflected in `lvdisplay`.

```
  --- Logical volume ---
  LV Path                /dev/gfs-vg/gfs-lv
  LV Name                gfs-lv
  VG Name                gfs-vg
  LV UUID                CBK7eA-wVCk-t3xT-Peaf-uVSN-4Ghz-7XDMCJ
  LV Write Access        read/write
  LV Creation host, time ip-10-10-1-104, 2016-09-25 08:55:50 +0000
  LV Pool name           gfs-thinpool
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            2.81%
  Current LE             16384
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
```
However, if we run `df -h`, we will see that the available size of `/gfs` has not changed. This is where we need to resize the filesystem.

```
resize2fs /dev/gfs-vg/gfs-lv
```

Output:

```
Filesystem at /dev/gfs-vg/gfs-lv is mounted on /gfs; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/gfs-vg/gfs-lv is now 524288 blocks long.
```
Run `df -h` again.

```
Filesystem                   Size  Used Avail Use% Mounted on
/dev/xvda1                    10G  1.4G  8.7G  14% /
devtmpfs                     902M     0  902M   0% /dev
tmpfs                        920M     0  920M   0% /dev/shm
tmpfs                        920M   17M  904M   2% /run
tmpfs                        920M     0  920M   0% /sys/fs/cgroup
tmpfs                        184M     0  184M   0% /run/user/1000
tmpfs                        184M     0  184M   0% /run/user/0
/dev/mapper/gfs--vg-gfs--lv  2.0G  3.0M  1.9G   1% /gfs
```