@bend
Created August 31, 2018 07:49
[cmdexec] ERROR 2018/08/31 07:33:38 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:154: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-fpfcl: volume delete: heketidbstorage: failed: Some of the peers are down
[negroni] Started GET /queue/0b4176488951330f734045483ffee1ee
[negroni] Completed 200 OK in 136.453µs
[kubeexec] DEBUG 2018/08/31 07:33:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vps01 Pod: glusterfs-vhkdb Command: gluster --mode=script volume stop heketidbstorage force
Result: volume stop: heketidbstorage: success
[heketi] WARNING 2018/08/31 07:33:39 failed to delete volume f0a524cf1265ff8fb27405ac42ef93af via vps01: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-vhkdb: volume delete: heketidbstorage: failed: Some of the peers are down
[kubeexec] ERROR 2018/08/31 07:33:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete heketidbstorage] on glusterfs-vhkdb: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: heketidbstorage: failed: Some of the peers are down
]
[cmdexec] ERROR 2018/08/31 07:33:39 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:154: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-vhkdb: volume delete: heketidbstorage: failed: Some of the peers are down
[negroni] Started GET /queue/0b4176488951330f734045483ffee1ee
[negroni] Completed 200 OK in 164.642µs
[kubeexec] DEBUG 2018/08/31 07:33:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vps02 Pod: glusterfs-lr5hc Command: gluster --mode=script volume stop heketidbstorage force
Result: volume stop: heketidbstorage: success
[heketi] WARNING 2018/08/31 07:33:40 failed to delete volume f0a524cf1265ff8fb27405ac42ef93af via vps02: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-lr5hc: volume delete: heketidbstorage: failed: Some of the peers are down
[asynchttp] INFO 2018/08/31 07:33:40 asynchttp.go:292: Completed job 0b4176488951330f734045483ffee1ee in 18.536315607s
[kubeexec] ERROR 2018/08/31 07:33:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete heketidbstorage] on glusterfs-lr5hc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: heketidbstorage: failed: Some of the peers are down
]
[cmdexec] ERROR 2018/08/31 07:33:40 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:154: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs-lr5hc: volume delete: heketidbstorage: failed: Some of the peers are down
[heketi] ERROR 2018/08/31 07:33:40 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:415: failed to delete volume in cleanup: no hosts available (3 total)
[heketi] ERROR 2018/08/31 07:33:40 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:218: Error on create volume rollback: failed to clean up volume: f0a524cf1265ff8fb27405ac42ef93af
[heketi] ERROR 2018/08/31 07:33:40 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:172: Create Volume Rollback error: failed to clean up volume: f0a524cf1265ff8fb27405ac42ef93af
[negroni] Started GET /queue/0b4176488951330f734045483ffee1ee
[negroni] Completed 500 Internal Server Error in 439.569µs
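
Note (added for context, not part of the original log): every volume delete above fails with "Some of the peers are down" even though the preceding volume stop succeeds, because gluster refuses to delete a volume while any peer in the pool is disconnected. A minimal diagnostic sketch, assuming the glusterfs pods run in the default namespace (pod name taken from the log above):

    # Check how glusterd sees its peers; each peer should report
    # "State: Peer in Cluster (Connected)" before a delete can succeed.
    kubectl exec glusterfs-vhkdb -- gluster peer status

    # Compact view of the whole pool: Connected vs. Disconnected.
    kubectl exec glusterfs-vhkdb -- gluster pool list
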
[negroni] Started GET /clusters
[negroni] Completed 200 OK in 151.402µs
[negroni] Started GET /clusters/07d0a6d37eb03d98081776ecba94ee27
[negroni] Completed 200 OK in 637.467µs
[negroni] Started GET /nodes/5502b48c704c3cd3ca0bd44b45793ad1
[negroni] Completed 200 OK in 877.411µs
[negroni] Started GET /nodes/91f513210187420b8746d6f4bc05d855
[negroni] Completed 200 OK in 636.601µs
[negroni] Started GET /nodes/ca245feedc741e2b1706aecc628e0661
[negroni] Completed 200 OK in 794.798µs
[heketi] INFO 2018/08/31 07:35:00 Starting Node Health Status refresh
[cmdexec] INFO 2018/08/31 07:35:00 Check Glusterd service status in node vps04
[kubeexec] DEBUG 2018/08/31 07:35:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vps04 Pod: glusterfs-fpfcl Command: systemctl status glusterd
Result: ● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-08-31 07:32:04 UTC; 2min 56s ago
  Process: 92 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 93 (glusterd)
   CGroup: /kubepods/besteffort/podef48cd71-acef-11e8-bbc2-fa163eec9a70/58cf197009eedd9abfd3a1663df0af3d6414a3bdec3ff5e5187dcf118ac2cf38/system.slice/glusterd.service
           └─93 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Aug 31 07:32:02 vps04 systemd[1]: Starting GlusterFS, a clustered file-system server...
Aug 31 07:32:04 vps04 systemd[1]: Started GlusterFS, a clustered file-system server.
[heketi] INFO 2018/08/31 07:35:01 Periodic health check status: node 5502b48c704c3cd3ca0bd44b45793ad1 up=true
[cmdexec] INFO 2018/08/31 07:35:01 Check Glusterd service status in node vps01
[kubeexec] DEBUG 2018/08/31 07:35:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vps01 Pod: glusterfs-vhkdb Command: systemctl status glusterd
Result: ● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-08-31 07:32:44 UTC; 2min 16s ago
  Process: 99 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 100 (glusterd)
   CGroup: /kubepods/besteffort/podef4299d0-acef-11e8-bbc2-fa163eec9a70/b58905ec78b37050da9a4860bb18cbafe47d8f8affdb1c1229af044dbea8806f/system.slice/glusterd.service
           └─100 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Aug 31 07:32:42 vps01 systemd[1]: Starting GlusterFS, a clustered file-system server...
Aug 31 07:32:44 vps01 systemd[1]: Started GlusterFS, a clustered file-system server.
[heketi] INFO 2018/08/31 07:35:01 Periodic health check status: node 91f513210187420b8746d6f4bc05d855 up=true
[cmdexec] INFO 2018/08/31 07:35:01 Check Glusterd service status in node vps02
[kubeexec] DEBUG 2018/08/31 07:35:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vps02 Pod: glusterfs-lr5hc Command: systemctl status glusterd
Result: ● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-08-31 07:32:03 UTC; 2min 57s ago
  Process: 92 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 93 (glusterd)
   CGroup: /kubepods/besteffort/podef498e52-acef-11e8-bbc2-fa163eec9a70/8047116f7af23753567373de711e3839f868835cd684a71b0c6c183b932a9836/system.slice/glusterd.service
           └─93 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Aug 31 07:32:01 vps02 systemd[1]: Starting GlusterFS, a clustered file-system server...
Aug 31 07:32:03 vps02 systemd[1]: Started GlusterFS, a clustered file-system server.
[heketi] INFO 2018/08/31 07:35:01 Periodic health check status: node ca245feedc741e2b1706aecc628e0661 up=true
[heketi] INFO 2018/08/31 07:35:01 Cleaned 0 nodes from health cache
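
Note (added for context, not part of the original log): the periodic health check above only runs `systemctl status glusterd` on each node; it verifies that glusterd is active, not that the peers see each other. The journal timestamps show glusterd on vps01 started about 40 seconds after vps02 and vps04, consistent with peers still reconnecting during the delete window. Once peer status reports all nodes connected, one hedged follow-up is to retry the delete of the half-rolled-back volume via heketi-cli (the server URL and admin key below are assumptions; the volume ID is taken from the log above):

    # Retry deleting the leftover volume once the gluster pool is healthy.
    heketi-cli --server http://localhost:8080 --user admin --secret <admin-key> \
        volume delete f0a524cf1265ff8fb27405ac42ef93af
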