
@jag3773
Last active December 3, 2018 06:06
Gluster Volume Checksum Mismatch
[root@ip-172-26-177-115 ~]# gluster volume info
Volume Name: supportgfs
Type: Distributed-Replicate
Volume ID: 695f6857-de4a-441f-bbf1-a57ec047eea6
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.26.178.4:/media/ephemeral0/supportgfs-readonly
Brick2: 172.26.177.115:/media/ephemeral0/supportgfs-readonly
Brick3: 172.26.178.254:/media/ephemeral0/supportgfs-readonly # <--- Bad node
Brick4: 172.26.177.116:/media/ephemeral0/supportgfs-readonly
Options Reconfigured:
nfs.disable: on
performance.cache-size: 128MB
auth.allow: 172.26.*
[root@ip-172-26-177-115 ~]# gluster volume status
Status of volume: supportgfs
Gluster process                                             Port    Online  Pid
------------------------------------------------------------------------------
Brick 172.26.178.4:/media/ephemeral0/supportgfs-readonly    49152   Y       1913
Brick 172.26.177.115:/media/ephemeral0/supportgfs-readonly  49152   Y       1972
Brick 172.26.177.116:/media/ephemeral0/supportgfs-readonly  49152   Y       1917   # <--- 172.26.178.254 (bad node) is missing
Self-heal Daemon on localhost                               N/A     Y       1986
Self-heal Daemon on 172.26.177.116                          N/A     Y       1936
Self-heal Daemon on 172.26.178.4                            N/A     Y       1932
Task Status of Volume supportgfs
------------------------------------------------------------------------------
There are no active volume tasks
The glusterd logs on the peers showed the checksum mismatch:
[2014-06-17 04:21:11.275495] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: e15a8b47-259c-4bd1-a7df-aafe2ac0f9aa
[2014-06-17 04:21:11.275757] E [glusterd-utils.c:2373:glusterd_compare_friend_volume] 0-management: Cksums of volume supportgfs differ. local cksum = 2201279699, remote cksum = 52468988 on peer 172.26.177.115
[2014-06-17 04:21:11.275958] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 172.26.177.115 (0), ret: 0
[2014-06-17 04:21:11.266398] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 81857e74-a726-4f48-8d1b-c2a4bdbc094f
[2014-06-17 04:21:11.266485] E [glusterd-utils.c:2373:glusterd_compare_friend_volume] 0-management: Cksums of volume supportgfs differ. local cksum = 52468988, remote cksum = 2201279699 on peer 172.26.178.254
[2014-06-17 04:21:11.266542] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 172.26.178.254 (0), ret: 0
[2014-06-17 04:21:11.272206] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received RJT from uuid: 81857e74-a726-4f48-8d1b-c2a4bdbc094f, host: 172.26.178.254, port: 0
The solution to this problem was to run the following on the BAD node:
/etc/init.d/glusterd stop
rsync -havP --delete 172.26.177.115:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
/etc/init.d/glusterd start
After that, `gluster volume status` reported correctly on all nodes.
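
For reference, a minimal way to confirm the fix is to compare the stored volume checksum on every peer. This is a sketch, assuming the usual glusterd layout (each peer keeps the checksum in /var/lib/glusterd/vols/VOLNAME/cksum, the value the "Cksums of volume ... differ" log lines refer to) and working ssh between the nodes:

    # Print the stored checksum for the supportgfs volume on each peer;
    # after the rsync all four should report the same value.
    for host in 172.26.178.4 172.26.177.115 172.26.178.254 172.26.177.116; do
        echo -n "$host: "
        ssh "$host" cat /var/lib/glusterd/vols/supportgfs/cksum
    done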
bkunal commented Oct 13, 2015

Observed a similar issue while probing a new node into the cluster.

The newly probed node goes into the **rejected** state.

Here is the workaround:

  1. Unmount all the clients associated with all the volumes
  2. Detach the newly added node from the cluster

    gluster peer detach HOSTNAME

  3. Set/unset volume option

    gluster volume set VOLNAME features.barrier on

    gluster volume set VOLNAME features.barrier off

    This should be done for all the volumes available
  4. On the new node

    rm -rf /var/lib/glusterd/*

  5. Now try probing the new node into the cluster

    gluster peer probe HOSTNAME

  6. Verify the peer status

    gluster peer status

This works. If the peer still shows as rejected, check the logs for cksum errors.
If you still see cksum errors, you may need to repeat the above steps for that volume; a consolidated sketch of the steps is below.
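
Here is a consolidated sketch of the workaround above, run from a healthy cluster member. NEW_NODE is a placeholder for the rejected node's hostname, and the loop assumes `gluster volume list` is available to enumerate the volumes; unmounting the clients (step 1) still has to be done on each client machine first:

    NEW_NODE=rejected-node.example.com   # placeholder, replace with the real hostname

    # Step 2: detach the rejected node from the cluster
    gluster peer detach $NEW_NODE

    # Step 3: toggle the barrier option on every volume
    for vol in $(gluster volume list); do
        gluster volume set $vol features.barrier on
        gluster volume set $vol features.barrier off
    done

    # Step 4: clear the stale state on the new node (glusterd there may need
    # a restart after this before it will accept the probe)
    ssh $NEW_NODE 'rm -rf /var/lib/glusterd/*'

    # Steps 5-6: probe the node again and verify
    gluster peer probe $NEW_NODE
    gluster peer status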

luther7 commented Dec 3, 2018

Thank you.
