# GlusterFS prep procedure

  • clone the disk from 01
  • on 01 and 02, add the gluster::server_install recipe
  • on 02, apt-get install glusterfs-server
  • on 01, service glusterfs-server stop
  • on 01, killall glusterfsd (this moves the brick port from 24009 in 3.1.x to 49152 in 3.4.x; see the check after this list)
  • test and verify the connection to 02 (if it fails, start the service on 01 again)
  • on 01, apt-get install glusterfs-server
  • test and verify the connection by writing a temp file (see the sketch after this list)
  • on 02, apt-get install glusterfs-server
  • on 02, killall glusterfsd (again moving the brick port from 24009 to 49152)
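
Two quick checks for the steps above, sketched under assumptions: that the brick process is glusterfsd, and that the volume is mounted on a client at /mnt/gluster (a hypothetical mount point; substitute your own).

    # confirm the brick process is listening on the 3.4.x port (49152), not 24009
    netstat -ntlp | grep glusterfsd

    # write, read back, and remove a temp file through a client mount of the volume
    # (/mnt/gluster is a hypothetical mount point)
    echo "gluster write test $(date)" > /mnt/gluster/tmp-write-test
    cat /mnt/gluster/tmp-write-test
    rm /mnt/gluster/tmp-write-test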

# GlusterFS expansion procedure

  • create two new servers and attach a separate 500 GB disk to each, to be used for GlusterFS
  • bootstrap them in Chef
    knife bootstrap <ip_address> -N <fqdn> -E production -r 'recipe[apt::default],recipe[gluster::server_install]' -x ubuntu --sudo
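
For illustration, the same bootstrap with hypothetical values filled in (the IP and FQDN are made up; substitute your own):

    knife bootstrap 10.0.0.13 -N gluster03.example.com -E production -r 'recipe[apt::default],recipe[gluster::server_install]' -x ubuntu --sudo
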
  • configure the disk on each new host
    # create a single primary partition spanning /dev/vdb, non-interactively
    (echo n; echo p; echo 1; echo; echo; echo w) | fdisk /dev/vdb
    # format with XFS using 512-byte inodes, as recommended for GlusterFS bricks
    mkfs.xfs -i size=512 /dev/vdb1
    # create and mount the brick directory
    mkdir -p /GlusterFS/brick1
    mount -t xfs /dev/vdb1 /GlusterFS/brick1/
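
The mount above does not survive a reboot; a natural follow-up (an addition to the original notes, assuming the same device and mount point) is to persist it in /etc/fstab:

    # persist the brick mount across reboots
    echo '/dev/vdb1 /GlusterFS/brick1 xfs defaults 0 0' >> /etc/fstab
    mount -a    # sanity-check that the new fstab entry mounts cleanly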
  • verify connectivity to the new hosts
    ping -c 3 <host03>
    ping -c 3 <host04>
  • verify the current connected peer count is 1
    gluster peer status

  • add the new host(s) to the trusted storage pool
    gluster peer probe <host03>
    gluster peer probe <host04>
  • verify the connected peer count is now 3
    gluster peer status

  • if a peer reports a "Peer Rejected" error (consolidated in the sketch below):
    on 01: gluster peer detach <node>
    on 03 and 04: service glusterfs-server stop
    on 03 and 04: rm -rf /var/lib/glusterd/*
    on 03 and 04: service glusterfs-server start
    on 01: gluster volume reset <volume>
    retry the peer probe
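
A consolidated sketch of that recovery, run from 01 and assuming root ssh access to the rejected peers (a hypothetical convenience; the commands can equally be run on each box by hand):

    # hypothetical hostnames for the two rejected peers; substitute your own
    for peer in gluster03 gluster04; do
        gluster peer detach "$peer"
        # wipe glusterd state on the peer and restart it
        ssh "root@$peer" 'service glusterfs-server stop &&
                          rm -rf /var/lib/glusterd/* &&
                          service glusterfs-server start'
    done
    # reset volume options, then probe the peers again
    gluster volume reset vol01
    for peer in gluster03 gluster04; do gluster peer probe "$peer"; done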

  • verify the current volume info
    gluster volume info
  • add the new bricks to the volume
    gluster volume add-brick vol01 <host03>:/GlusterFS/brick1 <host04>:/GlusterFS/brick1

  • verify the new bricks on the volume (when going from 2 bricks to 4, the type should also change from replicated to distributed-replicated)
    gluster volume info

  • rebalance the volume
    gluster volume rebalance vol01 start
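
The rebalance runs in the background; its progress can be watched with the standard status subcommand (same volume name as above):

    gluster volume rebalance vol01 status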
