@skpy
Last active April 13, 2017 12:16

On the server

gluster1# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)

gluster1# cat /etc/yum.repos.d/glusterfs-x86_64.repo
[glusterfs-x86_64]
name=GlusterFS x86_64
baseurl=https://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-gluster.pub

gluster1# rpm -qa | grep gluster
glusterfs-3.5.2-1.el7.x86_64
glusterfs-fuse-3.5.2-1.el7.x86_64
glusterfs-libs-3.5.2-1.el7.x86_64
glusterfs-api-3.5.2-1.el7.x86_64
glusterfs-cli-3.5.2-1.el7.x86_64
glusterfs-server-3.5.2-1.el7.x86_64

gluster1# glusterfsd --version
glusterfs 3.5.2 built on Jul 31 2014 18:41:16

gluster1# gluster volume info test

Volume Name: test
Type: Replicate
Volume ID: a37c5f6e-6dac-48dc-9e82-935a76b2ca20
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick1/brick
Brick2: gluster2:/bricks/brick1/brick
Options Reconfigured:
nfs.disable: true
server.allow-insecure: on

gluster1# gluster volume status test
Status of volume: test
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick gluster1:/bricks/brick1/brick	49153	Y	35881
Brick gluster2:/bricks/brick1/brick	49153	Y	28208
Self-heal Daemon on localhost				N/A	Y	40992
Self-heal Daemon on gluster2    		N/A	Y	33189

Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks
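Before digging into the client-side logs below, it can help to rule out plain connectivity problems. A minimal sketch, taking the hostname gluster1 and glusterd's default management port 24007 from the transcript above (adjust both for your environment):

```shell
# Can we resolve and reach the server at all?
ping -c 1 gluster1

# Bash's /dev/tcp pseudo-device opens a TCP connection; success means
# something (presumably glusterd) is listening on the management port.
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/gluster1/24007' \
  && echo "glusterd reachable on gluster1:24007" \
  || echo "cannot reach glusterd on gluster1:24007"
```

If this check passes but the mount still fails, the problem is at the RPC/handshake layer rather than the network, which is the case in the logs that follow.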

On the client

client# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)

client# cat /etc/yum.repos.d/glusterfs-x86_64.repo
[glusterfs-x86_64]
name=GlusterFS x86_64
baseurl=https://download.gluster.org/pub/gluster/glusterfs//LATEST/RHEL/epel-6.5/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-gluster.pub

client# rpm -qa | grep gluster
glusterfs-3.5.2-1.el6.x86_64
glusterfs-libs-3.5.2-1.el6.x86_64
glusterfs-fuse-3.5.2-1.el6.x86_64

client# glusterfsd --version
glusterfs 3.5.2 built on Jul 31 2014 18:47:52

client# mount -t glusterfs gluster1:test /mnt
Mount failed. Please check the log file for more details.

client# tail /var/log/gluster/mnt.log
[2014-08-01 19:43:34.838329] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs --volfile-server=gluster1 --volfile-id=test /mnt)
[2014-08-01 19:43:34.841098] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
[2014-08-01 19:43:34.841190] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
[2014-08-01 19:43:34.849622] W [socket.c:522:__socket_rwv] 0-glusterfs: readv on 192.168.30.107:24007 failed (No data available)
[2014-08-01 19:43:34.850009] E [rpc-clnt.c:369:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15d) [0x7f3daf9f4ced] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3) [0x7f3daf9f4833] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f3daf9f474e]))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2014-08-01 19:43:34.846671 (xid=0x1)
[2014-08-01 19:43:34.850056] E [glusterfsd-mgmt.c:1398:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:test)
[2014-08-01 19:43:34.850083] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f3daf9f474e] (-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x21b) [0x7f3daf9f465b] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3fe) [0x40be2e]))) 0-: received signum (0), shutting down
[2014-08-01 19:43:34.850095] I [fuse-bridge.c:5475:fini] 0-fuse: Unmounting '/mnt'.
[2014-08-01 19:43:34.854973] W [glusterfsd.c:1095:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x7f3dae2edb5d] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3dae9809d1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x4053e5]))) 0-: received signum (15), shutting down

client# mount -t glusterfs -o log-level=DEBUG gluster1:/test /mnt
Mount failed. Please check the log file for more details.

client# tail /var/log/gluster/mnt.log
[2014-08-01 19:46:13.092161] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs --log-level=DEBUG --volfile-server=gluster1 --volfile-id=/test /mnt)
[2014-08-01 19:46:13.092510] D [glusterfsd.c:410:set_fuse_mount_options] 0-glusterfsd: fopen-keep-cache mode 2
[2014-08-01 19:46:13.092726] D [glusterfsd.c:466:set_fuse_mount_options] 0-: fuse direct io type 2
[2014-08-01 19:46:13.092783] D [options.c:1152:xlator_option_init_double] 0-fuse: option negative-timeout using set value 0.000000
[2014-08-01 19:46:13.093805] D [rpc-clnt.c:975:rpc_clnt_connection_init] 0-glusterfs: defaulting frame-timeout to 30mins
[2014-08-01 19:46:13.093881] D [rpc-transport.c:262:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.5.2/rpc-transport/socket.so
[2014-08-01 19:46:13.095507] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
[2014-08-01 19:46:13.095553] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
[2014-08-01 19:46:13.095573] D [rpc-clnt.c:1427:rpcclnt_cbk_program_register] 0-glusterfs: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1
[2014-08-01 19:46:13.098868] D [common-utils.c:248:gf_resolve_ip6] 0-resolver: returning ip-192.168.30.107 (port-24007) for hostname: gluster1 and port: 24007
[2014-08-01 19:46:13.116606] D [socket.c:492:__socket_rwv] 0-glusterfs: EOF on socket
[2014-08-01 19:46:13.116709] W [socket.c:522:__socket_rwv] 0-glusterfs: readv on 192.168.30.107:24007 failed (No data available)
[2014-08-01 19:46:13.116728] D [socket.c:2238:socket_event_handler] 0-transport: disconnecting now
[2014-08-01 19:46:13.117116] E [rpc-clnt.c:369:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15d) [0x7fd560740ced] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3) [0x7fd560740833] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7fd56074074e]))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2014-08-01 19:46:13.107743 (xid=0x1)
[2014-08-01 19:46:13.117182] E [glusterfsd-mgmt.c:1398:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/test)
[2014-08-01 19:46:13.117217] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7fd56074074e] (-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x21b) [0x7fd56074065b] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3fe) [0x40be2e]))) 0-: received signum (0), shutting down
[2014-08-01 19:46:13.117284] D [glusterfsd-mgmt.c:2025:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout arguments not given
[2014-08-01 19:46:13.117303] I [fuse-bridge.c:5475:fini] 0-fuse: Unmounting '/mnt'.
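The client-side GETSPEC failure above is only half the picture: glusterd on the server usually logs why it refused the handshake. A quick check on the server (the log path shown is the default for RPM installs of this era; adjust if yours differs):

```shell
# On gluster1: look for handshake rejections in glusterd's own log.
# Rejections caused by clients connecting from unprivileged (>1024)
# source ports typically mention "privileged" or "insecure".
tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
  | grep -iE 'privileged|insecure'
```

A match here points at the rpc-auth-allow-insecure / server.allow-insecure fix described in the comments below.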
skpy commented Aug 1, 2014

Solution: add option rpc-auth-allow-insecure on to /etc/glusterfs/glusterd.vol on each server and restart the glusterd service.
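For reference, the edited glusterd.vol would look roughly like this. This is a sketch based on the stock 3.5-era volfile; the exact options shipped with your package may differ slightly, and only the rpc-auth-allow-insecure line is the addition:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option rpc-auth-allow-insecure on
end-volume
```

glusterd only reads this file at startup, hence the need to restart the service after editing it.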

hazertyck commented Apr 12, 2017

Hi!
I have exactly the same problem with glusterfs-server 3.5.2-2+deb8u3,
but adding option rpc-auth-allow-insecure to /etc/glusterfs/glusterd.vol makes glusterd fail to start:

Apr 12 11:30:10 gfs01 glusterfs-server[30198]: Starting glusterd service: glusterd failed!
Apr 12 11:30:10 gfs01 systemd[1]: glusterfs-server.service: control process exited, code=exited status=1
Apr 12 11:30:10 gfs01 systemd[1]: Failed to start LSB: GlusterFS server.

EDIT / SOLUTION:

  1. Edit /etc/glusterfs/glusterd.vol as @skpy said, adding option rpc-auth-allow-insecure on.
  2. Run gluster volume set <volname> server.allow-insecure on in your shell.
  3. Don't forget to stop and start the volume again with gluster volume stop <volname> and gluster volume start <volname>.
  4. Finally, restart glusterd: service glusterfs-server restart.
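Put together, the steps above look like this, using the volume name test from this gist and the Debian glusterfs-server init script name (substitute your own volume name and service manager):

```shell
# Run on each server.

# 1. Add the option inside the "volume management" block of glusterd.vol:
#        option rpc-auth-allow-insecure on
sudo vi /etc/glusterfs/glusterd.vol

# 2. Allow clients connecting from unprivileged ports to talk to the bricks:
gluster volume set test server.allow-insecure on

# 3. Stop and start the volume so the brick volfiles pick up the option:
gluster volume stop test
gluster volume start test

# 4. Restart glusterd so it re-reads glusterd.vol:
sudo service glusterfs-server restart
```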

Source: Release Notes for GlusterFS 3.6.3
