17:50 holocron| could someone take a look at this pastebin and tell me what the next steps to debugging might be? http://pastebin.com/7MHZV4e3
17:52 holocron| this, specifically: unit-ceph-1: 12:57:00 INFO unit.ceph/1.update-status admin_socket: exception getting command descriptions: [Errno 111] Connection refused
17:55 * | shruthima quit (Reason: Quit: Page closed)
18:00 * | matthelmke_ joined #juju
18:05 * | frankban is now known as frankban|afk
18:10 spaok| stub: I guess what I'm confused on is, when I try to set the services structure for when haproxy joins, if I do hookenv.relation_set(relation_id='somerelid:1', service_yaml) then the joined/changed hook runs but haproxy doesn't get the services yaml; however, if I do hookenv.relation_set(relation_id=None, service_yaml) then it does work and haproxy builds the config right, but after a bit, when update-status runs, it errors because relation_id isn't set
18:10 * | wolverineav joined #juju
18:11 * | wolverin_ joined #juju
18:12 stub| spaok: Specifying None for the relation id means use the JUJU_RELATION_ID environment variable, which is only set in relation hooks. Specifying an explicit relation id does the same thing, but will work in all hooks, provided you use the correct relation id.
18:13 * | josvaz quit (Reason: Ping timeout: 265 seconds)
18:15 stub| spaok: You can test this using "juju run foo/0 'relation-set -r somerelid:1 foo=bar'" if you want.
18:15 * | wolverineav quit (Reason: Ping timeout: 252 seconds)
18:16 * | bdx joined #juju
18:17 stub| (juju run --unit foo/0 now it seems, with 2.0)
18:17 stub| "juju run --unit foo/0 'relation-ids somerelname' " to list the relationids in play
18:18 bdx| hows it going everyone?
18:18 bdx| do I have the capability to bootstrap to a specific subnet?
18:18 bdx| using aws provider
18:19 * | narinder joined #juju
18:20 lutostag| any way to specify "charm build" deps? (for instance, in my wheelhouse I have cffi, which when the charm is built depends on having libffi-dev installed. I have this in the README, but wondering if it is possible to enforce that programmatically)
18:20 spaok| thanks stub, I'm fairly certain I have the right relid, but what I see from the haproxy side is only the private_ip, unit id and something else; with None, I get the services yaml. it's very confusing
18:20 spaok| I'll look at trying that command, I was wondering how to run the relation-set command
18:21 spaok| lutostag: layers.yaml ?
18:21 spaok| not 100% on that
18:23 lutostag| spaok: yeah, I have deps for install time unit-side like http://pastebin.ubuntu.com/23285512/, but not sure how to do it "charm build" side
18:25 * | alexisb is now known as alexisb-afk
18:25 * | fginther` is now known as fginther
18:26 spaok| ah gotcha, not sure
18:26 * | wolverineav joined #juju
18:27 lutostag| think I'll go with a wrapper around charm build in the top-level dir, don't need to turn charm into pip/apt/npm/snap anyways
18:28 * | wolverin_ quit (Reason: Ping timeout: 256 seconds)
18:36 * | mwenning quit (Reason: Ping timeout: 252 seconds)
18:36 * | rogpeppe joined #juju
18:40 * | rogpeppe1 quit (Reason: Ping timeout: 272 seconds)
18:40 * | mwenning joined #juju
18:41 * | mwenning quit (Reason: Remote host closed the connection)
18:42 * | matthelmke_ quit (Reason: Quit: Leaving)
18:42 kwmonroe| hey icey - can you help holocron with his ceph connection refused? http://pastebin.com/7MHZV4e3
18:43 * | wolverin_ joined #juju
18:45 * | narinder quit (Reason: Quit: This computer has gone to sleep)
18:45 * | rogpeppe quit (Reason: Ping timeout: 252 seconds)
18:46 holocron| Something in dpkg giving an Input/output error
18:47 * | wolverineav quit (Reason: Ping timeout: 252 seconds)
18:49 kwmonroe| lutostag: seems odd that an entry in your wheelhouse.txt would require a -dev to be installed for charm build
18:51 lutostag| kwmonroe: doesn't it though; it builds it locally by compiling stuff. I guess there are C extensions in the python package itself
18:51 lutostag| lxml is another example
18:51 kwmonroe| cory_fu_: does charm build have runtime reqs dependent on the wheelhouse?
18:51 kwmonroe| cory_fu_: (see lutostag's issue above)
18:52 lutostag| (but I can get around that one, cause we have that deb'd up)
18:53 holocron| kwmonroe icey going to try this http://askubuntu.com/questions/139377/unable-to-install-any-updates-through-update-manager-apt-get-upgrade
18:53 cory_fu_| lutostag, kwmonroe: You shouldn't need a -dev package for charm-build because it *should* be installing source only and building them on the deployed unit, since we don't know the arch ahead of time.
18:54 lutostag| ah, so I'll need these -dev's on the unit-side, good to know, but interesting.
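A sketch of the unit-side approach cory_fu_ describes (assuming the standard layer:basic options mechanism; the package names are just the ones mentioned above): list the needed -dev packages in layer.yaml so they are apt-installed on the unit before the wheelhouse is built there.

```yaml
# layer.yaml (sketch): layer:basic apt-installs these at install time,
# before pip builds the wheelhouse packages (cffi, lxml, ...) on the unit.
includes: ['layer:basic']
options:
  basic:
    packages:
      - libffi-dev
      - libxml2-dev
      - libxslt1-dev
```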
18:54 kwmonroe| holocron: that doesn't sound like a ceph problem then... got a log with the dpkg error?
18:55 cory_fu_| lutostag: What's the repo for cffi so I can try building it?
18:55 lutostag| cory_fu_: my charm or the upstream python package?
18:56 cory_fu_| lutostag: The charm
18:56 lutostag| cory_fu_: lemme push...
18:56 cory_fu_| Sorry, I misread cffi as the charm name
18:57 holocron| kwmonroe: similar messages to this: http://pastebin.com/XZ0uFfz8
18:57 holocron| I've had make and build-essential give the error; right now it's libdpkg-perl
18:59 kwmonroe| holocron: when do you see that? on charm deploy?
18:59 lutostag| cory_fu_: bzr branch lp:~lutostag/oil-ci/charm-weebl+weasyprint-dep
19:00 holocron| kwmonroe no, the charm deployed fine yesterday, it came in as part of the openstack-on-lxd bundle. i was able to create a cinder volume even.. logged in today and saw that error in my first pastebin
19:00 lutostag| (and I'll need to add the deps as explained too)
19:00 holocron| i logged into the unit and did an apt-get clean and apt-get update
19:00 holocron| now it's failing with this
19:01 holocron| is it common practice to scale out another unit and then tear down the breaking one?
19:01 holocron| like, should i just make that my default practice or should i try to fix this unit?
19:01 kwmonroe| holocron: common practice is for the breaking unit not to break
19:02 kwmonroe| especially on some nonsense apt operation
19:02 holocron| :P yeah that's the ideal
19:02 kwmonroe| is that unit perhaps out of disk space holocron?
19:02 kwmonroe| or inodes? (df -h && df -i)
19:02 holocron| nope, lots of space
19:03 holocron| lots of inodes
19:05 kwmonroe| holocron: anything in dmesg, /var/log/syslog, or /var/log/apt/* on that container that would help explain the dpkg failure?
19:07 * | rogpeppe joined #juju
19:07 holocron| kwmonroe probably, sorry i've got to jump to another thing now but i'll try to get back
19:08 kwmonroe| np holocron
19:14 * | holocron quit (Reason: Ping timeout: 240 seconds)
19:24 * | wolverin_ quit (Reason: Remote host closed the connection)
19:25 * | Siva joined #juju
19:25 Siva| I am trying to remove my application in juju 2.0 but it is not working
19:26 Siva| I put pdb.set_trace() in my code
19:26 Siva| Not sure if it is because of that
19:26 Siva| juju remove-application does not remove the application
19:26 Siva| How do I now forcefully remove it?
19:26 * | wolverineav joined #juju
19:26 Siva| Any help is much appreciated
19:27 spaok| is there a decorator for the update-status hook? or do I use @hook?
19:29 Siva| It is stuck at the install hook where I put pdb
19:29 Siva| I have the following decorator for the install hook, @hooks.hook()
19:30 spaok| Siva can you remove the machine?
19:30 lutostag| Siva: juju resolved --no-retry <unit> # over and over till its gone
19:30 kwmonroe| Siva: remove-application will first remove relations between your app and something else, then it will remove all units of your app, then it will remove the machine (if your app was the last unit on a machine)
19:30 * | wolverineav quit (Reason: Ping timeout: 264 seconds)
19:31 kwmonroe| Siva: you're probably trapping the remove-relation portion of remove-application, which means you'll need to continue or break or whatever pdb takes to let it continue tearing down relations / units / machines
19:32 stub| Siva: The hook will likely never complete, so you either need to go in and kill it yourself or run 'juju remove-machine XXX --force'
19:32 kwmonroe| so lutostag's suggestion would work -- keep resolving the app with --no-retry to make your way through the app's lifecycle. or spaok's suggestion might be faster -- juju remove-machine X --force
19:32 spaok| I work with containers, so I tend to do that mostly
19:33 stub| (and haven't we all left our Python debugger statements in a hook at some point)
19:33 Siva| I removed the machine, it shows the status as 'stopped'
19:33 spaok| takes a sec
19:34 kwmonroe| keep watching.. it'll go away
19:34 spaok| also rerun the remove-application
19:34 Siva| OK. @lutostag suggestion did the trick
19:34 spaok| I've noticed some application ghosts when I remove machines
19:34 Siva| Now it is gone
19:35 Siva| Thanks
19:36 stub| spaok: yes, it's perfectly valid to have an application deployed to no units. Makes sense when setting up subordinates, more dubious with normal charms.
19:37 Siva| One thing I noticed is that after removing a machine (say 0/lxd/14 is removed), a new deploy creates the machine 0/lxd/15 rather than reusing 14
19:38 Siva| is this expected?
19:38 spaok| ya
19:38 Siva| same for units as well
19:38 spaok| yup
19:38 spaok| makes deployer scripts fun
19:38 * | holocron joined #juju
19:38 Siva| Why does it not reuse the freed numbers so that the numbering stays in sequence?
19:39 rick_h_| Siva: mainly because it makes things like logs more useful when the numbers are unique
19:39 rick_h_| Siva: especially as everything is async
19:40 rick_h_| Siva: so you can be sure any logs re: unit 15 are in fact the unit that was destroyed at xx:xx
19:40 Siva| OK
19:41 Siva| It looks a bit weird in the 'juju status' as the sequence is broken
19:41 rick_h_| Siva: yea, understand, but for the best imho
19:43 * | wolverineav joined #juju
19:45 * | neiljerram quit (Reason: Ping timeout: 264 seconds)
19:49 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
19:52 spaok| Siva: that's why I have scripts to destroy and recreate my MAAS/Juju 2.0 dev env; it's good to reset sometimes
19:54 Siva| One question
19:54 Siva| I have the following entry in the config.yaml file
19:54 Siva| tor-type: type: string description: Always ovs default: ovs
19:55 Siva| Now when I do, tor_type = config.get("tor_type") print "SIVA: ", tor_type
19:55 Siva| I expect it to print the default value 'ovs' as I have not set any value
19:55 Siva| It prints nothing
19:55 Siva| Is this expected?
19:56 spaok| tor_type or tor-type?
19:56 Siva| oops! my bad
19:56 spaok| I put underscores in my config.yaml
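A minimal sketch of the key-name issue above (assuming charmhelpers' hookenv; the option name must match config.yaml exactly, hyphens included):

```python
from charmhelpers.core import hookenv

# config.yaml defines (roughly):
#   options:
#     tor-type:
#       type: string
#       description: Always ovs
#       default: ovs

cfg = hookenv.config()                 # dict-like, populated from defaults plus `juju config`
tor_type = cfg.get('tor-type')         # 'ovs'; a misspelled key like 'tor_type' returns None
tor_type = hookenv.config('tor-type')  # shorthand form used in the reactive charm mentioned above
```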
19:57 Siva| Now after making the change, I can just 'redeploy', right?
19:57 * | antdillon joined #juju
19:57 spaok| you can test by editing the charm live if you wanted
19:57 Siva| How do I do that?
19:58 spaok| vi /var/lib/juju/agents/unit-charmname-id/charms/config.yaml
19:58 spaok| kick jujud in the pants by
19:58 spaok| ps aux |grep jujud-unit-charmname |grep -v grep | awk '{print $2}' | xargs kill -9
19:59 spaok| should cause the charm to run
20:01 * | wolverineav joined #juju
20:08 Siva| I modified the config.yaml and restarted the jujud
20:08 * | rogpeppe1 joined #juju
20:08 Siva| still prints None
20:11 * | alexisb-afk is now known as alexisb
20:12 * | rogpeppe quit (Reason: Ping timeout: 256 seconds)
20:18 * | veebers joined #juju
20:19 spaok| Siva: in my reactive charm I just config('tor_type')
20:19 spaok| not config.get
20:19 spaok| not sure the diff
20:22 * | wolverineav quit (Reason: Ping timeout: 252 seconds)
20:35 * | wolverineav joined #juju
20:45 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
20:45 * | bbaqar joined #juju
20:45 Siva| I removed the app and deployed it again
20:46 Siva| config.get works
20:46 Siva| I can try config('tor_type') and see how it goes
20:49 * | mwenning joined #juju
20:50 * | rogpeppe1 quit (Reason: Ping timeout: 244 seconds)
20:53 * | babbageclunk joined #juju
21:02 * | rogpeppe1 joined #juju
21:02 * | neiljerram joined #juju
21:13 icey| hey holocron kwmonroe just seeing the messages
21:14 icey| to me, the line saying admin_socket: connection refused is more interesting; it looks like maybe the ceph monitor is down?
21:14 cory_fu_| kwmonroe: Comments on https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50
21:16 * | rcj quit (Reason: Read error: Connection reset by peer)
21:16 * | rcj joined #juju
21:16 holocron| icey hey, i ended up tearing down the controller. I'm going to redeploy now and i'll drop a line in here if it happens again
21:17 * | menn0 joined #juju
21:17 icey| holocron: great; kwmonroe is right though, the expectation is that it wouldn't break
21:19 Siva| @spaok, live config.yaml change testing does not work for me
21:21 * | rcj` joined #juju
21:21 * | rcj quit (Reason: Read error: Connection reset by peer)
21:23 * | thumper joined #juju
21:23 * | ChanServ changed mode on channel #juju (+o thumper)
21:23 * | rcj` is now known as rcj
21:24 * | rcj is now known as Guest46240
21:24 * | Guest46240 is now known as rcj
21:29 kwmonroe| cory_fu_: i like the callback idea in https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50. but i'm not gonna get to that by tomorrow, and i really want the base hadoop charms refreshed (which depend on the pr as is). you ok if i open an issue to do it better in the future, but merge for now?
21:30 kwmonroe| cory_fu_: it doesn't matter what you say, mind you. petevg already +1'd it. just trying to fake earnest consideration.
21:30 cory_fu_| ha
21:31 cory_fu_| kwmonroe: I'm also +1 on it as-is, but I'd like to reduce boilerplate where we can. But we can go back and refactor it later
21:33 cory_fu_| kwmonroe: Issue opened and PR merged
21:33 cory_fu_| kwmonroe: And I'm good on the other commits you made for the xenial updates
21:35 * | cmagina quit (Reason: Quit: ZZZzzz…)
21:36 * | cmagina joined #juju
21:37 kwmonroe| dag nab cory_fu_! you're fast. i was still pontificating on the title of a new issue. thanks!!
21:38 * | narinder joined #juju
21:38 * | Edur joined #juju
21:39 kwmonroe| and thanks for the xenial +1. i'll squash, pr, and set the clock for 24 hours till i can push ;)
21:39 kwmonroe| just think, you'll be swimming when the upstream charms get refreshed.
21:39 * | narinder quit (Reason: Client Quit)
21:40 kwmonroe| nice knowing you
21:40 * | wolverineav joined #juju
21:46 * | wallyworld quit (Reason: Ping timeout: 252 seconds)
21:48 kwmonroe| before you go cory_fu_.. did you see mark's note to the juju ML about blocked status? seems the slave units are reporting "blocked" even when a relation is present. i'm pretty sure it's because of this: https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
21:48 * | axisys quit (Reason: Quit: leaving)
21:48 kwmonroe| as in, there *is* a spec mismatch until NN/RM are ready. what's the harm in setting status with a spec mismatch?
21:49 * | axisys joined #juju
21:50 kwmonroe| afaict, they'll report "waiting for ready", which seems ok to me. unless we want to add a separate block for specifically dealing with spec mismatches, which would be some weird state between waiting and blocked to see if the spec ever does eventually match.
21:51 * | Siva quit (Reason: Quit: Page closed)
21:52 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
21:54 * | wolverineav joined #juju
21:57 cory_fu_| kwmonroe: I think the problem is one of when hooks are triggered. I think that what's happening is that the relation is established and reported, but the hook doesn't get called on both sides right away, leaving one side reporting "blocked" even though the relation is there, simply because it hasn't been informed of it yet
21:58 * | saibarspeis quit (Reason: Quit: Textual IRC Client: www.textualapp.com)
22:00 * | lborda quit (Reason: Quit: Ex-Chat)
22:01 kwmonroe| i think i don't agree with ya cory_fu_... NN will send DN info early (https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L27). but check out what's missing... https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L95
22:01 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
22:01 cory_fu_| kwmonroe: Doesn't matter. The waiting vs blocked status only depends on the .joined state: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L36
22:01 kwmonroe| spoiler alert cory_fu_: it's clustername. we don't send that until NN is all the way up, so the specmatch will be false: https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L142
22:02 kwmonroe| cory_fu_: you crazy: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
22:02 cory_fu_| kwmonroe: Ooh, I see.
22:02 cory_fu_| We should remove that @when_none line. There's no reason for it
22:03 kwmonroe| great idea cory_fu_! if only i had it 15 minutes ago.
22:03 cory_fu_| :)
22:04 kwmonroe| petevg: you mentioned you also saw charms blocked on missing relations (maybe zookeeper?). could it be you saw the slaves blocked instead?
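For context, the waiting-vs-blocked pattern under discussion looks roughly like this (a sketch with hypothetical state names modeled on charms.reactive, not the actual hadoop_status.py code):

```python
from charms.reactive import when, when_not
from charmhelpers.core.hookenv import status_set

@when_not('namenode.joined')
def blocked():
    # No relation at all: operator action is required.
    status_set('blocked', 'missing required namenode relation')

@when('namenode.joined')
@when_not('namenode.ready')
def waiting():
    # Relation exists but the remote side is not ready yet
    # (e.g. the spec has not matched); report waiting, not blocked.
    status_set('waiting', 'waiting for namenode to become ready')
```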
22:04 * | redir quit (Reason: Quit: WeeChat 1.4)
22:05 * | redir joined #juju
22:05 neiljerram| Aaaargh guys, would you _please_ stop making gratuitous changes in every Juju 2 beta or rc?
22:06 neiljerram| The latest one that has just bitten my testing is the addition of a * after the unit name in juju status output.
22:06 neiljerram| Before that it was 'juju set-config' being changed to 'juju config'
22:06 neiljerram| This is getting old....
22:09 * | wallyworld joined #juju
22:09 * | wolverineav joined #juju
22:11 * | alexisb quit (Reason: Ping timeout: 244 seconds)
22:12 petevg| kwmonroe: Yes. I think that it was probably just the hadoop slave.
22:12 kwmonroe| neiljerram: apologies for the headaches! but you should see much more stability in the RCs. at least for me, the api has been pretty consistent with rc1/rc2. rick_h_ do you know if there are significant api/output changes in the queue from now to GA?
22:13 kwmonroe| thx petevg - fingers crossed that was the only outlier
22:13 petevg| np
22:13 petevg| fingers crossed.
22:14 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
22:15 neiljerram| kwmonroe, tbh I'm afraid I have to say that I think things have been _less_ stable since the transition from -beta to -rc. My guess is that there are changes people have been thinking they should make for a while, and only now that GA is really looking likely do they feel they should get them out :-)
22:16 neiljerram| kwmonroe, but don't worry, I've had my moan now...
22:17 kwmonroe| heh neiljerram - fair enough :)
22:17 neiljerram| Do you happen to know what the new * means? Should I expect to see it on every juju status line?
22:18 kwmonroe| neiljerram: i was just about to ask you the same thing... i haven't seen the '*'
22:18 kwmonroe| neiljerram: you on rc1, 2, or 3?
22:19 neiljerram| kwmonroe, rc3 now; here's an excerpt from a test I currently have running:
22:19 neiljerram| UNIT WORKLOAD AGENT MACHINE PUBLIC-ADDRESS PORTS MESSAGE
22:19 neiljerram| calico-devstack/0* unknown idle 0 104.197.123.208
22:19 neiljerram|
22:19 neiljerram| MACHINE STATE DNS INS-ID SERIES AZ
22:19 neiljerram| 0 started 104.197.123.208 juju-0f506f-0 trusty us-central1-a
22:19 * | holocron quit (Reason: Quit: Page closed)
22:20 * | wolverineav joined #juju
22:21 neiljerram| kwmonroe, just doing another deployment with more units, to get more data
22:21 kwmonroe| hmph... neiljerram i wonder if that's an attempt to truncate the unit name to a certain length. doesn't make sense in your case, but i could see 'really-long-unit-name/0' being truncated to 'really-long-u*' to keep the status columns sane.
22:21 kwmonroe| just a guess neiljerram
22:22 kwmonroe| and at any rate neiljerram, if you're scraping 'juju status', you might want to consider scraping 'juju status --format=tabular', which might be more consistent.
22:22 neiljerram| kwmonroe, BTW the reason this matters for my automated testing is that I have some quite tricky code that is trying to determine when the deployment as a whole is really ready.
22:22 kwmonroe| ugh, not right
22:23 kwmonroe| sorry, i meant 'juju status --format=yaml', not tabular
22:23 neiljerram| kwmonroe, yes, I suppose that would probably be better
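A rough sketch of that approach (assuming the Juju 2.0 YAML status layout of applications → units → workload-status; the 'active' check is illustrative and your readiness criteria may differ):

```python
import subprocess
import yaml

# Scrape the machine-readable status instead of the tabular output.
out = subprocess.check_output(['juju', 'status', '--format=yaml'])
status = yaml.safe_load(out)

ready = all(
    unit.get('workload-status', {}).get('current') == 'active'
    for app in status.get('applications', {}).values()
    for unit in (app.get('units') or {}).values()
)
print('deployment ready' if ready else 'still waiting')
```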
22:25 neiljerram| kwmonroe, Ah, it seems that * means 'no longer waiting for machine'
22:29 * | mrjazzcat quit (Reason: Ping timeout: 252 seconds)
22:29 * | alexisb joined #juju
22:30 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
22:33 kwmonroe| neiljerram: you sure? i just went to rc3 and deployed ubuntu... i still see:
22:33 kwmonroe| UNIT WORKLOAD AGENT MACHINE PUBLIC-ADDRESS PORTS MESSAGE
22:33 kwmonroe| ubuntu/0 waiting allocating 0 54.153.95.194 waiting for machine
22:33 neiljerram| kwmonroe, exactly - because the machine hasn't been started yet
22:33 kwmonroe| oh, nm neiljerram, i should wait longer.. you said the '*' is....
22:33 kwmonroe| right
22:34 kwmonroe| i gotta say, intently watching juju status is right up there with the birth of my first child
22:35 * | wolverineav joined #juju
22:38 * | jheroux quit (Reason: Quit: Leaving)
22:40 * | antdillon quit (Reason: Quit: Ex-Chat)
22:40 kwmonroe| neiljerram: i can't get the '*' after the machine is ready, nor using a super long unit name. i'm not sure where that's coming from.
22:40 kwmonroe| UNIT WORKLOAD AGENT MACHINE PUBLIC-ADDRESS PORTS MESSAGE
22:40 kwmonroe| really-long-ubuntu-name/0 maintenance executing 1 54.153.97.184 (install) installing charm software
22:40 kwmonroe| ubuntu/0 active idle 0 54.153.95.194 ready
22:41 neiljerram| kwmonroe, do you have rc3?
22:41 kwmonroe| neiljerram: i do... http://paste.ubuntu.com/23286448/
22:42 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
22:44 neiljerram| kwmonroe, curious, I don't know then. I'm also using AWS, so it's not because we're using different clouds.
22:45 kwmonroe| neiljerram: i'm aws-west. if you're east, it could be signifying the hurricane coming to the east coast this weekend.
22:45 neiljerram| kwmonroe, :-)
22:45 kwmonroe| rc3 backed by weather.com ;)
22:46 kwmonroe| neiljerram: care to /join #juju-dev? i'll ask the core devs where the '*' is coming from
22:46 * | wolverineav joined #juju
22:46 neiljerram| kwmonroe, sure, will do
22:51 * | wolverineav quit (Reason: Ping timeout: 244 seconds)
22:52 * | rogpeppe1 quit (Reason: Ping timeout: 256 seconds)
23:02 kwmonroe| for anyone following along, the '*' denotes leadership
23:06 * | wolverineav joined #juju
23:08 * | rogpeppe1 joined #juju
23:17 * | mwenning quit (Reason: Ping timeout: 272 seconds)