@sleinen
sleinen / gist:5c23e54c4dd42e7eaf47638ffb1592ec
Created March 1, 2018 09:50
log messages from stopping designate-producer, which takes a long time
2018-03-01 10:35:42.689 959 DEBUG designate.service [req-18f51523-cbd2-49c7-b0a2-66dfdb76dba0 - - - - -] Stopping RPC server on topic 'producer' stop /usr/lib/python2.7/dist-packages/designate/service.py:186
2018-03-01 10:35:48.153 959 INFO designate.service [req-18f51523-cbd2-49c7-b0a2-66dfdb76dba0 - - - - -] Stopping producer service
2018-03-01 10:36:41.998 959 ERROR oslo.service.loopingcall [req-18f51523-cbd2-49c7-b0a2-66dfdb76dba0 - - - - -] Fixed interval looping call 'designate.producer.tasks.PeriodicGenerateDelayedNotifyTask' failed: MessagingTimeout: Timed out waiting for a reply to message ID e18ec6fe608b4d988cc2fc5b48c21ac6
2018-03-01 10:36:41.998 959 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-03-01 10:36:41.998 959 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2018-03-01 10:36:41.998 959 ERROR oslo.service.loopingcall result = func(*self.args, **self.kw)
2018-03-01 10:36:41.998 959 ERROR o
#!/bin/sh
# Remove the IPv6 default route bound to en0, then bounce en5 so the
# interface comes back up and re-acquires its configuration.
sudo route delete -inet6 default -iface en0
sudo ifconfig en5 down
sudo ifconfig en5 up
- also_notifies: []
  attributes: {}
  description: Default Pool
  id: ff9f6c43-5711-4a41-8842-d36f83111ef1
  name: default
  nameservers:
  - host: 127.0.0.1
    port: 53
  ns_records:
  - hostname: designate.s2.scloud.switch.ch.
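The fragment above is a Designate pools.yaml pool definition. As a rough sketch of how such a file is normally applied (the /etc/designate/pools.yaml path is an assumption, not something stated above):

# Sketch: load the pool definition above into Designate (file path assumed)
designate-manage pool update --file /etc/designate/pools.yaml
# Show the pool configuration Designate currently knows about
designate-manage pool show_config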

Keybase proof

I hereby claim:

  • I am sleinen on github.
  • I am simonas559 (https://keybase.io/simonas559) on keybase.
  • I have a public key ASDuXQ4QOdbA3N3lYqcax8zao9BxELViku89nNWofe9e0Qo

To claim this, I am signing this object:

Verifying that +sleinen is my blockchain ID. https://onename.com/sleinen
<!-- UPDATE: jpich on #openstack-horizon pointed me to
https://bugs.launchpad.net/horizon/+bug/1332238
Thanks!
So this is a known issue, and as I suspected there seems to be a
more credible fix than my hack below. I'll try to search
Launchpad more thoroughly next time.
@sleinen
sleinen / s2-recovery
Created August 4, 2013 19:18
We recently lost four OSDs (8, 9, 10, 11) on the same server of our 64-OSD/10-server cluster. After reformatting the file systems, two objects remain unfound. Unfortunately, it is not possible to declare them as "lost", because osd.9 remains in "querying" state (see line 123). Any idea on how to get this unstuck? I have already tried restarting the …
: root@ineri[leinen]; ceph health detail
HEALTH_WARN 1 pgs degraded; 1 pgs recovering; 1 pgs stuck unclean; recovery 2158/19171654 degraded (0.011%); 2/9585827 unfound (0.000%)
pg 0.cfa is stuck unclean for 249687.042135, current state active+recovering+degraded, last acting [23,50]
pg 0.cfa is active+recovering+degraded, acting [23,50], 2 unfound
recovery 2158/19171654 degraded (0.011%); 2/9585827 unfound (0.000%)
: root@ineri[leinen]; ceph pg dump_stuck unclean
ok
pg_stat objects mip degr unf bytes log disklog state state_stamp v reported up acting last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
0.cfa 2178 2 2158 2 143697053 0 0 active+recovering+degraded 2013-08-02 14:26:53.965345 28074'7610 28074'41570 [23,50] [23,50] 20585'6801 2013-07-28 15:40:53.298786 20585'6801 2013-07-28 15:40:53.298786
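For reference, these are the commands the Ceph documentation points at for unfound objects; a sketch only, since whether they help while osd.9 is still in the "querying" state is exactly the open question here (pg 0.cfa and osd.9 taken from the output above):

ceph pg 0.cfa query                      # peering/recovery detail, incl. which OSDs might still hold the objects
ceph pg 0.cfa list_missing               # list the unfound objects and the OSDs still being queried
ceph osd lost 9 --yes-i-really-mean-it   # tell the cluster that osd.9's data is permanently gone
ceph pg 0.cfa mark_unfound_lost revert   # then give up on the unfound objects and revert to older copies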