@bsormagec
Forked from aitoroses/3_nodes.sh
Created October 10, 2017 17:33
Configuring GlusterFS on 3 nodes
# Assuming we have 3 nodes
NODE1=10.0.1.1
NODE2=10.0.1.2
NODE3=10.0.1.3
# Configure the servers
ssh root@$NODE1 apt-get install -y glusterfs-server
ssh root@$NODE2 apt-get install -y glusterfs-server
ssh root@$NODE3 apt-get install -y glusterfs-server
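# Optional (a hedged extra step): make sure the gluster daemon is enabled and
# running before probing peers. The unit name is usually "glusterd" on recent
# Debian/Ubuntu packages (older releases shipped "glusterfs-server"); adjust
# to whatever your distribution actually provides.
ssh root@$NODE1 systemctl enable --now glusterd
ssh root@$NODE2 systemctl enable --now glusterd
ssh root@$NODE3 systemctl enable --now glusterd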
ssh root@$NODE1 gluster peer probe $NODE2
ssh root@$NODE1 gluster peer probe $NODE3
ssh root@$NODE2 gluster peer status
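# Optional sanity check: all three peers should show up as Connected before
# the volume is created.
ssh root@$NODE1 gluster pool list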
# Create the volume
# "force" is needed here because the bricks live on the root filesystem,
# which gluster otherwise refuses.
ssh root@$NODE1 \
  gluster volume create k8 replica 3 transport tcp \
  $NODE1:/gluster-storage \
  $NODE2:/gluster-storage \
  $NODE3:/gluster-storage \
  force
# Start the volume
ssh root@$NODE1 gluster volume start k8
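# Optional check: confirm the volume was created with three bricks and is
# actually started before mounting anything.
ssh root@$NODE1 gluster volume info k8
ssh root@$NODE1 gluster volume status k8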
# Prepare the clients
ssh root@$NODE1 apt-get install -y glusterfs-client
ssh root@$NODE2 apt-get install -y glusterfs-client
ssh root@$NODE3 apt-get install -y glusterfs-client
ssh root@$NODE1 mkdir -p /storage-pool
ssh root@$NODE2 mkdir -p /storage-pool
ssh root@$NODE3 mkdir -p /storage-pool
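# Optional: quick check that the client package landed and provides the fuse
# helper used by "mount -t glusterfs".
ssh root@$NODE1 glusterfs --version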
# Mount the volume
ssh root@$NODE1 mount -t glusterfs $NODE1:/k8 /storage-pool
ssh root@$NODE2 mount -t glusterfs $NODE1:/k8 /storage-pool
ssh root@$NODE3 mount -t glusterfs $NODE1:/k8 /storage-pool
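# Rough smoke test (optional, assumes the paths above): a file written through
# one node's mount should appear on the other mounts and in each brick.
ssh root@$NODE1 df -hT /storage-pool
ssh root@$NODE1 touch /storage-pool/replication-test
ssh root@$NODE2 ls -l /storage-pool/replication-test
ssh root@$NODE3 ls -l /gluster-storage/replication-test
# Note: every client fetches the volfile from $NODE1 at mount time; if you
# want to avoid depending on a single node for that, mount.glusterfs has a
# backupvolfile-server= option you can add to the mount/fstab options.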
# Allow automount on reboot
# Quote the remote command so the >> redirection runs on the remote host,
# not locally; fstab options must not contain spaces.
ssh root@$NODE1 "echo '$NODE1:/k8 /storage-pool glusterfs defaults,_netdev 0 0' >> /etc/fstab"
ssh root@$NODE2 "echo '$NODE1:/k8 /storage-pool glusterfs defaults,_netdev 0 0' >> /etc/fstab"
ssh root@$NODE3 "echo '$NODE1:/k8 /storage-pool glusterfs defaults,_netdev 0 0' >> /etc/fstab"
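# Optional: verify the fstab entry mounts cleanly without waiting for a reboot
# (unmount first, then let "mount -a" pick it up again).
ssh root@$NODE1 "umount /storage-pool && mount -a"
ssh root@$NODE1 df -hT /storage-pool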