Gluster (Glusterfs) setup and install
#http://www.gluster.org
#nathan's notes https://gist.github.com/f58f3aa963f2165a0caa
#install gluster repo
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
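#optional sanity check - confirm yum can now see the repo (a sketch; the exact repo id may vary by release)
yum repolist | grep -i gluster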
#create brick directory
[root@re-sw-cen-web1 ~]# mkdir -p /gluster/var-www-html
#install client & server software
[root@re-sw-cen-web1 ~]# yum install glusterfs-{client,server} -y
#create /etc/rc.modules if it doesn't already exist
#put "modprobe fuse" into rc.modules so the fuse module is loaded on every boot
[root@re-sw-cen-web1 ~]# vim /etc/rc.modules
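#non-interactive alternative (a sketch; appends the line only if it isn't already there)
grep -qx 'modprobe fuse' /etc/rc.modules 2>/dev/null || echo 'modprobe fuse' >> /etc/rc.modules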
#make executable
[root@re-sw-cen-web1 ~]# chmod +x /etc/rc.modules
#load fuse for the first time
[root@re-sw-cen-web1 ~]# modprobe fuse
#check fuse is loaded
[root@re-sw-cen-web1 ~]# lsmod | grep fuse
fuse 66891 0
#set services to load on boot
[root@re-sw-cen-web1 ~]# chkconfig glusterd on
[root@re-sw-cen-web1 ~]# chkconfig glusterfsd on
[root@re-sw-cen-web1 ~]# chkconfig
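#to check just the gluster services and their runlevels (assumes the stock chkconfig on CentOS/RHEL):
chkconfig --list | grep gluster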
#start services (on both servers)
[root@re-sw-cen-web1 www]# service glusterd start
Starting glusterd: [ OK ]
#this next one will probably not produce output - glusterfsd may not start straight away and can start on demand when a volume needs it
[root@re-sw-cen-web1 www]# service glusterfsd start
#create mount point on all nodes
mkdir -p /var/www/html
#add rules to IP tables and restart iptables service
#Ensure that TCP ports 111, 24007, 24008, and 24009-(24009 + number of bricks across all volumes) are open on all Gluster servers. If you will be using NFS, also open ports 38465-(38465 + number of Gluster servers).
#You need one port per Gluster storage server, starting at 38465 and incrementing sequentially, and one port per brick, starting at 24009. The example below opens enough ports for 5 storage servers and three bricks.
-A INPUT -s 10.0.100.0/24 -m state --state NEW -m tcp -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -s 10.0.100.0/24 -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -s 10.0.100.0/24 -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -s 10.0.100.0/24 -m state --state NEW -m tcp -p tcp --dport 38465:38470 -j ACCEPT
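#on CentOS/RHEL these rules belong in /etc/sysconfig/iptables, above any final REJECT rule; then reload them
#(assumes the stock iptables init script)
service iptables restart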
##
#make sure both servers are now at the same point: installed, services started, fuse loaded, mount points available, gluster brick directory created
##
#probe the secondary machine from the first (if more than one peer, probe each in turn to add to the trusted pool)
[root@re-sw-cen-web1 www]# gluster peer probe 10.0.100.194
Probe successful
#probe the primary machine from the secondary - it should return that it is already in the pool
[root@re-sw-cen-web2 www]# gluster peer probe 10.0.100.193
Probe on host 10.0.100.193 port 24007 already in peer list
#Check the status of the peers (if there are only 2 machines in the cluster you'll only see one peer)
[root@re-sw-cen-web1 www]# gluster peer status
Number of Peers: 1
Hostname: 10.0.100.194
Uuid: 1b15d7e1-9540-45a7-9fd1-26cd44cc9a9d
State: Peer in Cluster (Connected)
#check peers from second machine
[root@re-sw-cen-web2 www]# gluster peer status
Number of Peers: 1
Hostname: 10.0.100.193
Uuid: cae3ff75-fb37-4263-b93c-85021824a0c7
State: Peer in Cluster (Connected)
#create a new gluster volume named "wwwroot" as a replicated volume (every file stored on all nodes), listing each node and its brick path
[root@re-sw-cen-web1 www]# gluster volume create wwwroot replica 2 transport tcp 10.0.100.193:/gluster/var-www-html 10.0.100.194:/gluster/var-www-html
Creation of volume wwwroot has been successful. Please start the volume to access data.
#start the volume from the first machine
[root@re-sw-cen-web1 www]# gluster volume start wwwroot
Starting volume wwwroot has been successful
#check the volume information (can be done from any node)
[root@re-sw-cen-web1 www]# gluster volume info
Volume Name: wwwroot
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.0.100.193:/gluster/var-www-html
Brick2: 10.0.100.194:/gluster/var-www-html
#Set limited access to the volume
[root@re-sw-cen-web1 mysql]# gluster volume set wwwroot auth.allow 10.0.100.*
Set volume successful
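#the restriction can be confirmed from the volume info, where it should now appear under "Options Reconfigured":
gluster volume info wwwroot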
#mount the volume on each node; the IP should be that of the node you are mounting on (each server mounts from itself)
[root@re-sw-cen-web1 www]# mount -t glusterfs 10.0.100.193:/wwwroot /var/www/html
[root@re-sw-cen-web2 www]# mount -t glusterfs 10.0.100.194:/wwwroot /var/www/html
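#verify the mounts took; the filesystem type should show as fuse.glusterfs, e.g.:
[root@re-sw-cen-web1 www]# mount | grep gluster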
#copy the content in if you staged it somewhere temporarily; it should replicate to the peers
cp -a /var/www/html_temp/* /var/www/html/
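#to confirm replication, the same files should also be visible in the brick directory on the other node
[root@re-sw-cen-web2 www]# ls /gluster/var-www-html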
#edit the /etc/fstab file so the volume is auto mounted on reboot
#add the following line at the bottom of the file, using the local node's IP as in the mount step above
10.0.100.194:/wwwroot /var/www/html glusterfs defaults 0 0
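#if the boot-time mount races the network, adding the _netdev option defers it until networking is up, e.g.:
10.0.100.194:/wwwroot /var/www/html glusterfs defaults,_netdev 0 0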