Gluster operations
Converting the distributed volume bdml into a 1 x 2 replicate volume: the brick on labsvmh02 is removed, wiped, and added back with replica 2.
[root@labsvmh02 ~]# gluster volume remove-brick bdml labsvmh02:/gluster/bdml/brick start
volume remove-brick start: success
ID: f5857145-d2ca-463b-9217-9f780a608445
[root@labsvmh02 ~]# gluster volume remove-brick bdml labsvmh02:/gluster/bdml/brick status
Node         Rebalanced-files   size      scanned   failures   skipped   status      run time in secs
----------   ----------------   ------    -------   --------   -------   ---------   ----------------
localhost    0                  0Bytes    31        0          0         completed   0.00
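Committing a remove-brick before the migration shows completed on every node risks losing data, so the status command is normally re-run until it does; one way to poll it, using the same volume and brick as above:

watch -n 10 gluster volume remove-brick bdml labsvmh02:/gluster/bdml/brick status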
[root@labsvmh02 ~]# gluster volume info bdml
Volume Name: bdml
Type: Distribute
Volume ID: 8fc735fe-d18b-482b-a7b9-741b0dce3a84
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: labsvmh01:/gluster/bdml/brick
Brick2: labsvmh02:/gluster/bdml/brick
Options Reconfigured:
cluster.quorum-count: 1
cluster.server-quorum-type: none
cluster.quorum-type: fixed
[root@labsvmh02 ~]# gluster volume remove-brick bdml labsvmh02:/gluster/bdml/brick commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
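Per the warning above, the removed brick can be inspected for leftover data files before it is wiped and reused below; a sketch that skips GlusterFS's internal .glusterfs metadata:

find /gluster/bdml/brick -name .glusterfs -prune -o -type f -print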
[root@labsvmh02 ~]# gluster volume status
Status of volume: bdml
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick labsvmh01:/gluster/bdml/brick         49156     0          Y       3325
NFS Server on localhost                     2049      0          Y       29314
NFS Server on labsvmh01                     2049      0          Y       27565
Task Status of Volume bdml
------------------------------------------------------------------------------
There are no active volume tasks
[root@labsvmh02 ~]# rm -rf /gluster/bdml/brick/
[root@labsvmh02 ~]# mkdir /gluster/bdml/brick
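Deleting and recreating the directory also clears the extended attributes that mark it as part of a volume, which would otherwise make add-brick reject the path. If the brick's contents were worth keeping, an alternative (a sketch of the commonly documented brick-reuse steps) would be to strip only those markers:

setfattr -x trusted.glusterfs.volume-id /gluster/bdml/brick
setfattr -x trusted.gfid /gluster/bdml/brick
rm -rf /gluster/bdml/brick/.glusterfs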
[root@labsvmh02 ~]# gluster volume add-brick bdml replica 2 labsvmh02:/gluster/bdml/brick
volume add-brick: success
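Raising the replica count does not by itself copy the existing files onto the new, empty brick; that is left to the self-heal daemon. A full heal could be triggered explicitly once the daemon is up:

gluster volume heal bdml full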
[root@labsvmh02 ~]# gluster volume status bdml
Status of volume: bdml
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick labsvmh01:/gluster/bdml/brick         49156     0          Y       3325
Brick labsvmh02:/gluster/bdml/brick         49158     0          Y       41132
NFS Server on localhost                     2049      0          Y       41154
Self-heal Daemon on localhost               N/A       N/A        Y       41168
NFS Server on labsvmh01                     2049      0          Y       929
Self-heal Daemon on labsvmh01               N/A       N/A        Y       940
Task Status of Volume bdml
------------------------------------------------------------------------------
There are no active volume tasks
[root@labsvmh02 ~]# gluster volume info bdml
Volume Name: bdml
Type: Replicate
Volume ID: 8fc735fe-d18b-482b-a7b9-741b0dce3a84
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: labsvmh01:/gluster/bdml/brick
Brick2: labsvmh02:/gluster/bdml/brick
Options Reconfigured:
cluster.quorum-count: 1
cluster.server-quorum-type: none
cluster.quorum-type: fixed
./spark
./spark/spark-1.4.1-bin-hadoop2.6.tgz
./original-logs
......... All the files
./raw2json
......... All the files
[root@labsvmh02 brick]# find
./spark
./spark/spark-1.4.1-bin-hadoop2.6.tgz
./original-logs
./raw2json
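At this point the re-added brick on labsvmh02 holds only the directory skeleton and the spark tarball, so replication of the existing data is still in progress. Heal progress could be checked with:

gluster volume heal bdml info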