@codeasone · Last active May 19, 2017
VerneMQ cluster up
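
The transcript below first clears any previous broker state (the local ./data directory and the vernemq-discovery S3 bucket used for node discovery), then brings up a three-node VerneMQ cluster (mqtt-a, mqtt-b, mqtt-c) plus an HAProxy front end (mqtt-broker) with docker-compose. Each node registers its container IP in the S3 bucket, discovers its peers from the same bucket, and issues a cluster join against every registered node. The "Couldn't join cluster due to self_join" lines are nodes attempting to join themselves, and the eaddrinuse errors show the HTTP listener on 127.0.0.1:8888 failing to bind because that address is already in use.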
sudo rm -rf ./data
aws s3 rm "s3://vernemq-discovery/" --recursive
delete: s3://vernemq-discovery/172.26.0.2
delete: s3://vernemq-discovery/172.26.0.4
delete: s3://vernemq-discovery/172.26.0.3
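
Each node's entrypoint handles the registration and join steps seen in the compose output below. A minimal sketch of what that step presumably looks like (the actual script is not included in this gist, so the exact commands are assumptions; vmq-admin cluster join is the standard VerneMQ CLI):

MY_IP=$(hostname -i)
# register this node in the discovery bucket ("Registering: <ip>" below)
echo "${MY_IP}" | aws s3 cp - "s3://vernemq-discovery/${MY_IP}"
# discover peers from the same bucket ("Discovering nodes via S3 bucket")
for ip in $(aws s3 ls "s3://vernemq-discovery/" | awk '{print $4}'); do
  # every registered node is joined, including this one, which is why each
  # node also logs "Couldn't join cluster due to self_join"
  vmq-admin cluster join discovery-node="VerneMQ@${ip}"
done
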
docker-compose up
Creating vernemqdataloss_mqtt-c_1
Creating vernemqdataloss_mqtt-a_1
Creating vernemqdataloss_mqtt-b_1
Creating mqtt-broker
Attaching to vernemqdataloss_mqtt-c_1, vernemqdataloss_mqtt-b_1, vernemqdataloss_mqtt-a_1, mqtt-broker
mqtt-c_1 | Starting: 237feee419bd 172.26.0.2
mqtt-c_1 | Registering: 172.26.0.2
mqtt-b_1 | Starting: a172f0430278 172.26.0.4
mqtt-b_1 | Registering: 172.26.0.4
mqtt-a_1 | Starting: e5f586475e19 172.26.0.3
mqtt-a_1 | Registering: 172.26.0.3
mqtt-broker | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -f /usr/local/etc/haproxy/haproxy.cfg -Ds
mqtt-b_1 | Discovering nodes via S3 bucket
mqtt-c_1 | Discovering nodes via S3 bucket
mqtt-a_1 | Discovering nodes via S3 bucket
mqtt-b_1 | [2017-05-19T12:41:23Z]:join VerneMQ@172.26.0.2
mqtt-b_1 | Done
mqtt-b_1 | [2017-05-19T12:41:23Z]:join VerneMQ@172.26.0.3
mqtt-c_1 | [2017-05-19T12:41:23Z]:join VerneMQ@172.26.0.2
mqtt-b_1 | Done
mqtt-b_1 | [2017-05-19T12:41:23Z]:join VerneMQ@172.26.0.4
mqtt-c_1 | Couldn't join cluster due to self_join
mqtt-c_1 |
mqtt-c_1 | [2017-05-19T12:41:23Z]:join VerneMQ@172.26.0.3
mqtt-b_1 | Couldn't join cluster due to self_join
mqtt-b_1 |
mqtt-b_1 | 2017-05-19 12:41:08.515 [info] <0.31.0> Application hackney started on node 'VerneMQ@172.26.0.4'
mqtt-b_1 | 2017-05-19 12:41:09.948 [info] <0.328.0>@vmq_reg_trie:handle_info:183 loaded 0 subscriptions into vmq_reg_trie
mqtt-b_1 | 2017-05-19 12:41:09.961 [info] <0.217.0>@vmq_cluster:init:113 plumtree peer service event handler 'vmq_cluster' registered
mqtt-b_1 | 2017-05-19 12:41:10.589 [info] <0.31.0> Application vmq_acl started on node 'VerneMQ@172.26.0.4'
mqtt-b_1 | 2017-05-19 12:41:10.745 [info] <0.31.0> Application vmq_passwd started on node 'VerneMQ@172.26.0.4'
mqtt-b_1 | 2017-05-19 12:41:10.951 [error] <0.396.0> Failed to start Ranch listener {{127,0,0,1},8888} in ranch_tcp:listen([{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}]) for reason eaddrinuse (address already in use)
mqtt-b_1 | 2017-05-19 12:41:10.951 [error] <0.396.0> CRASH REPORT Process <0.396.0> with 0 neighbours exited with reason: {listen_error,{{127,0,0,1},8888},eaddrinuse} in gen_server:init_it/6 line 352
mqtt-b_1 | 2017-05-19 12:41:10.951 [error] <0.238.0>@vmq_ranch_config:reconfigure_listeners_for_type:187 can't reconfigure http listener({127,0,0,1}, 8888) with Options [{max_connections,10000},{nr_of_acceptors,10},{config_mod,vmq_http_config},{config_fun,config},{proxy_protocol,false}] due to {{shutdown,{failed_to_start_child,ranch_acceptors_sup,{listen_error,{{127,0,0,1},8888},eaddrinuse}}},{child,undefined,{ranch_listener_sup,{{127,0,0,1},8888}},{ranch_listener_sup,start_link,[{{127,0,0,1},8888},10,ranch_tcp,[{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}],cowboy_protocol,[{env,[{dispatch,[{'_',[],[{[<<"metrics">>],[],vmq_metrics_http,[]},{[<<"api">>,<<"v1">>,'...'],[],vmq_http_mgmt_api,[]}]}]}]}]]},permanent,infinity,supervisor,[ranch_listener_sup]}}
mqtt-b_1 | 2017-05-19 12:41:10.951 [error] <0.394.0> Supervisor {<0.394.0>,ranch_listener_sup} had child ranch_acceptors_sup started with ranch_acceptors_sup:start_link({{127,0,0,1},8888}, 10, ranch_tcp, [{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,...}]) at undefined exit with reason {listen_error,{{127,0,0,1},8888},eaddrinuse} in context start_error
mqtt-b_1 | 2017-05-19 12:41:10.951 [info] <0.31.0> Application vmq_server started on node 'VerneMQ@172.26.0.4'
mqtt-c_1 | Done
mqtt-c_1 | [2017-05-19T12:41:23Z]:join VerneMQ@172.26.0.4
mqtt-c_1 | Done
mqtt-c_1 | 2017-05-19 12:41:08.640 [info] <0.31.0> Application hackney started on node 'VerneMQ@172.26.0.2'
mqtt-c_1 | 2017-05-19 12:41:10.188 [info] <0.328.0>@vmq_reg_trie:handle_info:183 loaded 0 subscriptions into vmq_reg_trie
mqtt-c_1 | 2017-05-19 12:41:10.200 [info] <0.217.0>@vmq_cluster:init:113 plumtree peer service event handler 'vmq_cluster' registered
mqtt-c_1 | 2017-05-19 12:41:10.830 [info] <0.31.0> Application vmq_acl started on node 'VerneMQ@172.26.0.2'
mqtt-c_1 | 2017-05-19 12:41:11.093 [info] <0.31.0> Application vmq_passwd started on node 'VerneMQ@172.26.0.2'
mqtt-c_1 | 2017-05-19 12:41:11.300 [error] <0.399.0> Failed to start Ranch listener {{127,0,0,1},8888} in ranch_tcp:listen([{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}]) for reason eaddrinuse (address already in use)
mqtt-c_1 | 2017-05-19 12:41:11.300 [error] <0.238.0>@vmq_ranch_config:reconfigure_listeners_for_type:187 can't reconfigure http listener({127,0,0,1}, 8888) with Options [{max_connections,10000},{nr_of_acceptors,10},{config_mod,vmq_http_config},{config_fun,config},{proxy_protocol,false}] due to {{shutdown,{failed_to_start_child,ranch_acceptors_sup,{listen_error,{{127,0,0,1},8888},eaddrinuse}}},{child,undefined,{ranch_listener_sup,{{127,0,0,1},8888}},{ranch_listener_sup,start_link,[{{127,0,0,1},8888},10,ranch_tcp,[{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}],cowboy_protocol,[{env,[{dispatch,[{'_',[],[{[<<"metrics">>],[],vmq_metrics_http,[]},{[<<"api">>,<<"v1">>,'...'],[],vmq_http_mgmt_api,[]}]}]}]}]]},permanent,infinity,supervisor,[ranch_listener_sup]}}
mqtt-c_1 | 2017-05-19 12:41:11.300 [error] <0.399.0> CRASH REPORT Process <0.399.0> with 0 neighbours exited with reason: {listen_error,{{127,0,0,1},8888},eaddrinuse} in gen_server:init_it/6 line 352
mqtt-c_1 | 2017-05-19 12:41:11.301 [error] <0.397.0> Supervisor {<0.397.0>,ranch_listener_sup} had child ranch_acceptors_sup started with ranch_acceptors_sup:start_link({{127,0,0,1},8888}, 10, ranch_tcp, [{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,...}]) at undefined exit with reason {listen_error,{{127,0,0,1},8888},eaddrinuse} in context start_error
mqtt-c_1 | 2017-05-19 12:41:11.301 [info] <0.31.0> Application vmq_server started on node 'VerneMQ@172.26.0.2'
mqtt-b_1 | 2017-05-19 12:41:23.354 [info] <0.414.0>@plumtree_peer_service:attempt_join:50 Sent join request to: 'VerneMQ@172.26.0.2'
mqtt-b_1 | 2017-05-19 12:41:23.357 [info] <0.332.0>@vmq_cluster_mon:handle_info:126 cluster node 'VerneMQ@172.26.0.2' UP
mqtt-b_1 | 2017-05-19 12:41:23.358 [info] <0.216.0>@plumtree_peer_service_manager:write_state_to_disk:100 writing state {[{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1},{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}],{dict,2,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[['VerneMQ@172.26.0.2',{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}]],[],[['VerneMQ@172.26.0.4',{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1}]],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,114,120,1,203,96,206,97,96,96,96,202,96,2,81,140,25,76,41,12,172,137,201,37,249,69,185,64,174,136,135,218,186,236,51,90,159,66,77,138,255,231,173,20,212,91,81,115,139,187,36,43,17,168,10,155,226,239,109,231,250,82,158,173,96,118,249,205,215,187,198,189,207,230,92,239,9,29,160,226,172,12,206,20,6,150,148,204,228,146,68,166,68,1,32,228,72,12,72,52,200,16,200,66,3,25,140,32,49,176,193,96,23,165,48,8,133,165,22,229,165,250,6,58,24,154,27,233,25,153,233,25,232,25,145,102,51,33,227,76,176,27,135,195,215,40,238,69,120,138,129,176,167,80,116,102,1,0,64,212,114,128>>
mqtt-b_1 | 2017-05-19 12:41:23.566 [info] <0.426.0>@plumtree_peer_service:attempt_join:50 Sent join request to: 'VerneMQ@172.26.0.3'
mqtt-b_1 | 2017-05-19 12:41:23.570 [info] <0.332.0>@vmq_cluster_mon:handle_info:126 cluster node 'VerneMQ@172.26.0.3' UP
mqtt-b_1 | 2017-05-19 12:41:23.572 [info] <0.216.0>@plumtree_peer_service_manager:write_state_to_disk:100 writing state {[{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1},{[{actor,<<245,147,148,116,1,92,61,155,202,89,209,45,47,203,35,229,121,34,71,4>>}],1},{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}],{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[['VerneMQ@172.26.0.2',{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}]],[['VerneMQ@172.26.0.3',{[{actor,<<245,147,148,116,1,92,61,155,202,89,209,45,47,203,35,229,121,34,71,4>>}],1}]],[['VerneMQ@172.26.0.4',{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1}]],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,236,120,1,203,96,206,97,96,96,96,206,96,2,81,140,25,76,41,12,172,137,201,37,249,69,185,64,174,136,135,218,186,236,51,90,159,66,77,138,255,231,173,20,212,91,81,115,139,187,36,43,17,168,10,155,226,175,147,167,148,48,198,216,206,62,21,121,81,87,255,180,242,211,74,37,119,22,156,138,191,183,157,235,75,121,182,130,217,229,55,95,239,26,247,62,155,115,189,39,116,128,138,179,50,56,83,24,88,82,50,147,75,18,153,19,5,128,144,35,49,32,209,32,67,32,11,13,100,48,130,196,192,174,0,17,64,87,11,133,165,22,229,165,250,6,58,24,154,27,233,25,153,233,25,232,25,97,119,38,46,155,9,152,102,140,221,52,28,158,38,228,54,19,236,166,225,8,111,20,207,35,66,136,129,112,8,161,232,204,2,0,104,73,148,198>>
mqtt-c_1 | 2017-05-19 12:41:23.357 [info] <0.332.0>@vmq_cluster_mon:handle_info:126 cluster node 'VerneMQ@172.26.0.4' UP
mqtt-c_1 | 2017-05-19 12:41:23.358 [info] <0.216.0>@plumtree_peer_service_manager:write_state_to_disk:100 writing state {[{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1},{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}],{dict,2,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[['VerneMQ@172.26.0.2',{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}]],[],[['VerneMQ@172.26.0.4',{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1}]],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,114,120,1,203,96,206,97,96,96,96,202,96,2,81,140,25,76,41,12,172,137,201,37,249,69,185,64,174,136,135,218,186,236,51,90,159,66,77,138,255,231,173,20,212,91,81,115,139,187,36,43,17,168,10,155,226,239,109,231,250,82,158,173,96,118,249,205,215,187,198,189,207,230,92,239,9,29,160,226,172,12,206,20,6,150,148,204,228,146,68,166,68,1,32,228,72,12,72,52,200,16,200,66,3,25,140,32,49,176,193,96,23,165,48,8,133,165,22,229,165,250,6,58,24,154,27,233,25,153,233,25,232,25,145,102,51,33,227,76,176,27,135,195,215,40,238,69,120,138,129,176,167,80,116,102,1,0,64,212,114,128>>
mqtt-c_1 | 2017-05-19 12:41:23.571 [info] <0.216.0>@plumtree_peer_service_manager:write_state_to_disk:100 writing state {[{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1},{[{actor,<<245,147,148,116,1,92,61,155,202,89,209,45,47,203,35,229,121,34,71,4>>}],1},{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}],{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[['VerneMQ@172.26.0.2',{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}]],[['VerneMQ@172.26.0.3',{[{actor,<<245,147,148,116,1,92,61,155,202,89,209,45,47,203,35,229,121,34,71,4>>}],1}]],[['VerneMQ@172.26.0.4',{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1}]],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,236,120,1,203,96,206,97,96,96,96,206,96,2,81,140,25,76,41,12,172,137,201,37,249,69,185,64,174,136,135,218,186,236,51,90,159,66,77,138,255,231,173,20,212,91,81,115,139,187,36,43,17,168,10,155,226,175,147,167,148,48,198,216,206,62,21,121,81,87,255,180,242,211,74,37,119,22,156,138,191,183,157,235,75,121,182,130,217,229,55,95,239,26,247,62,155,115,189,39,116,128,138,179,50,56,83,24,88,82,50,147,75,18,153,19,5,128,144,35,49,32,209,32,67,32,11,13,100,48,130,196,192,174,0,17,64,87,11,133,165,22,229,165,250,6,58,24,154,27,233,25,153,233,25,232,25,97,119,38,46,155,9,152,102,140,221,52,28,158,38,228,54,19,236,166,225,8,111,20,207,35,66,136,129,112,8,161,232,204,2,0,104,73,148,198>>
mqtt-c_1 | 2017-05-19 12:41:23.582 [info] <0.332.0>@vmq_cluster_mon:handle_info:126 cluster node 'VerneMQ@172.26.0.3' UP
mqtt-c_1 | 2017-05-19 12:41:23.833 [info] <0.439.0>@plumtree_peer_service:attempt_join:50 Sent join request to: 'VerneMQ@172.26.0.3'
mqtt-c_1 | 2017-05-19 12:41:23.969 [info] <0.442.0>@plumtree_peer_service:attempt_join:50 Sent join request to: 'VerneMQ@172.26.0.4'
mqtt-a_1 | [2017-05-19T12:41:24Z]:join VerneMQ@172.26.0.2
mqtt-a_1 | Done
mqtt-a_1 | [2017-05-19T12:41:24Z]:join VerneMQ@172.26.0.3
mqtt-a_1 | Couldn't join cluster due to self_join
mqtt-a_1 |
mqtt-a_1 | [2017-05-19T12:41:24Z]:join VerneMQ@172.26.0.4
mqtt-a_1 | Done
mqtt-a_1 | 2017-05-19 12:41:11.829 [info] <0.31.0> Application vmq_acl started on node 'VerneMQ@172.26.0.3'
mqtt-a_1 | 2017-05-19 12:41:11.922 [info] <0.31.0> Application vmq_passwd started on node 'VerneMQ@172.26.0.3'
mqtt-a_1 | 2017-05-19 12:41:11.977 [error] <0.394.0> Failed to start Ranch listener {{127,0,0,1},8888} in ranch_tcp:listen([{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}]) for reason eaddrinuse (address already in use)
mqtt-a_1 | 2017-05-19 12:41:11.977 [error] <0.394.0> CRASH REPORT Process <0.394.0> with 0 neighbours exited with reason: {listen_error,{{127,0,0,1},8888},eaddrinuse} in gen_server:init_it/6 line 352
mqtt-a_1 | 2017-05-19 12:41:11.977 [error] <0.238.0>@vmq_ranch_config:reconfigure_listeners_for_type:187 can't reconfigure http listener({127,0,0,1}, 8888) with Options [{max_connections,10000},{nr_of_acceptors,10},{config_mod,vmq_http_config},{config_fun,config},{proxy_protocol,false}] due to {{shutdown,{failed_to_start_child,ranch_acceptors_sup,{listen_error,{{127,0,0,1},8888},eaddrinuse}}},{child,undefined,{ranch_listener_sup,{{127,0,0,1},8888}},{ranch_listener_sup,start_link,[{{127,0,0,1},8888},10,ranch_tcp,[{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}],cowboy_protocol,[{env,[{dispatch,[{'_',[],[{[<<"metrics">>],[],vmq_metrics_http,[]},{[<<"api">>,<<"v1">>,'...'],[],vmq_http_mgmt_api,[]}]}]}]}]]},permanent,infinity,supervisor,[ranch_listener_sup]}}
mqtt-a_1 | 2017-05-19 12:41:11.977 [error] <0.392.0> Supervisor {<0.392.0>,ranch_listener_sup} had child ranch_acceptors_sup started with ranch_acceptors_sup:start_link({{127,0,0,1},8888}, 10, ranch_tcp, [{ip,{127,0,0,1}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,...}]) at undefined exit with reason {listen_error,{{127,0,0,1},8888},eaddrinuse} in context start_error
mqtt-a_1 | 2017-05-19 12:41:11.977 [info] <0.31.0> Application vmq_server started on node 'VerneMQ@172.26.0.3'
mqtt-a_1 | 2017-05-19 12:41:23.569 [info] <0.332.0>@vmq_cluster_mon:handle_info:126 cluster node 'VerneMQ@172.26.0.4' UP
mqtt-a_1 | 2017-05-19 12:41:23.571 [info] <0.216.0>@plumtree_peer_service_manager:write_state_to_disk:100 writing state {[{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1},{[{actor,<<245,147,148,116,1,92,61,155,202,89,209,45,47,203,35,229,121,34,71,4>>}],1},{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}],{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[['VerneMQ@172.26.0.2',{[{actor,<<247,134,206,142,100,230,168,3,68,251,14,141,172,71,142,60,206,141,200,44>>}],1}]],[['VerneMQ@172.26.0.3',{[{actor,<<245,147,148,116,1,92,61,155,202,89,209,45,47,203,35,229,121,34,71,4>>}],1}]],[['VerneMQ@172.26.0.4',{[{actor,<<72,38,174,107,204,42,242,85,52,115,255,110,169,17,46,168,124,218,11,116>>}],1}]],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,236,120,1,203,96,206,97,96,96,96,206,96,2,81,140,25,76,41,12,172,137,201,37,249,69,185,64,174,136,135,218,186,236,51,90,159,66,77,138,255,231,173,20,212,91,81,115,139,187,36,43,17,168,10,155,226,175,147,167,148,48,198,216,206,62,21,121,81,87,255,180,242,211,74,37,119,22,156,138,191,183,157,235,75,121,182,130,217,229,55,95,239,26,247,62,155,115,189,39,116,128,138,179,50,56,83,24,88,82,50,147,75,18,153,19,5,128,144,35,49,32,209,32,67,32,11,13,100,48,130,196,192,174,0,17,64,87,11,133,165,22,229,165,250,6,58,24,154,27,233,25,153,233,25,232,25,97,119,38,46,155,9,152,102,140,221,52,28,158,38,228,54,19,236,166,225,8,111,20,207,35,66,136,129,112,8,161,232,204,2,0,104,73,148,198>>
mqtt-a_1 | 2017-05-19 12:41:23.582 [info] <0.332.0>@vmq_cluster_mon:handle_info:126 cluster node 'VerneMQ@172.26.0.2' UP
mqtt-a_1 | 2017-05-19 12:41:24.588 [info] <0.433.0>@plumtree_peer_service:attempt_join:50 Sent join request to: 'VerneMQ@172.26.0.2'
mqtt-a_1 | 2017-05-19 12:41:24.878 [info] <0.439.0>@plumtree_peer_service:attempt_join:50 Sent join request to: 'VerneMQ@172.26.0.4'
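
Once every join reports Done, cluster membership can be confirmed from any node with the standard VerneMQ admin CLI, for example:

docker-compose exec mqtt-a vmq-admin cluster show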