Console output from the logs of one elkslave node
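The repeating join → publish-timeout → node_failed cycle below is characteristic of nodes advertising unreachable publish addresses: the master ("Alex Wilder") publishes 172.17.0.2 (a Docker bridge address) while the joining nodes advertise 127.0.0.1, so cluster-state publishes to them always time out after 30s and the nodes are dropped and re-added. A minimal elasticsearch.yml sketch of the usual fix for Elasticsearch 2.x in Docker (the concrete host values here are assumptions about this setup, not taken from the log):

```yaml
# Advertise a routable address instead of the loopback one
# (_non_loopback_ resolves to the container's bridge IP in ES 2.x).
network.publish_host: _non_loopback_

# Point unicast discovery at the master's routable address.
# 172.17.0.2 is assumed from the log; adjust for your network.
discovery.zen.ping.unicast.hosts: ["172.17.0.2"]

# With a 3-node cluster, require a majority to avoid split brain.
discovery.zen.minimum_master_nodes: 2
```

This is a sketch under those assumptions; verify the addresses each node actually advertises via the `_nodes` API before applying it.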
[2016-02-05 10:03:58,023][INFO ][cluster.service ] [Alex Wilder] added {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:04:28,028][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [8] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:04:28,034][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:04:28,035][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:04:28,041][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:04:28,043][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:04:31,060][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [1] joins received)
[2016-02-05 10:04:55,437][DEBUG][action.admin.cluster.state] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:04:55,452][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:05:01,063][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [10] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:05:01,065][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [1] joins received)] took 30s above the warn threshold of 30s
[2016-02-05 10:05:31,073][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [11] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:05:31,074][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:05:31,075][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:05:31,078][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:05:31,078][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:05:34,099][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-02-05 10:05:34,108][INFO ][cluster.service ] [Alex Wilder] added {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:06:04,111][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [14] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:06:04,113][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:06:04,115][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:06:04,120][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:06:04,120][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:06:07,140][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [1] joins received)
[2016-02-05 10:06:37,144][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [16] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:06:37,147][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [1] joins received)] took 30s above the warn threshold of 30s
[2016-02-05 10:07:07,148][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [17] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:07:07,150][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:07:07,150][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:07:07,153][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:07:07,154][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:07:10,184][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-02-05 10:07:10,185][INFO ][cluster.service ] [Alex Wilder] added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(join from node[{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:07:40,188][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [20] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:07:40,190][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(join from node[{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:07:40,191][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:07:40,194][DEBUG][action.admin.cluster.node.stats] [Alex Wilder] failed to execute on node [iwm9oJnBQv6cP0Z3W0vEkQ]
NodeDisconnectedException[[Dragonwing][127.0.0.1:9300][cluster:monitor/nodes/stats[n]] disconnected]
[2016-02-05 10:07:40,195][INFO ][cluster.service ] [Alex Wilder] added {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:08:10,200][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [22] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:08:10,202][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:08:10,202][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:08:10,209][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:08:10,211][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:08:10,212][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:08:13,242][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received)
[2016-02-05 10:08:43,246][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [24] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:08:43,248][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [2] joins received)] took 30s above the warn threshold of 30s
[2016-02-05 10:09:13,249][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [25] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:09:13,250][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:09:13,257][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:09:43,258][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [26] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:09:43,261][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout] took 30s above the warn threshold of 30s
[2016-02-05 10:09:43,261][DEBUG][action.admin.cluster.node.stats] [Alex Wilder] failed to execute on node [fMroFJgpTZ2PaqfLH-JoEw]
NodeDisconnectedException[[Jeffrey Mace][127.0.0.1:9300][cluster:monitor/nodes/stats[n]] disconnected]
[2016-02-05 10:09:43,264][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:09:43,268][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:09:43,270][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:09:43,273][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:09:43,277][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:09:46,303][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received)
[2016-02-05 10:09:46,442][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:10:16,307][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [28] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:10:16,309][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [2] joins received)] took 30s above the warn threshold of 30s
[2016-02-05 10:10:46,311][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [29] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:10:46,313][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:10:46,323][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:11:16,328][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [30] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:11:16,334][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout] took 30s above the warn threshold of 30s
[2016-02-05 10:11:16,336][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:11:16,340][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:11:16,341][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:11:16,342][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:11:16,344][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:11:19,368][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received)
[2016-02-05 10:11:46,346][INFO ][rest.suppressed ] /_cluster/health Params: {}
MasterNotDiscoveredException[waited for [30s]]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$4.onTimeout(TransportMasterNodeAction.java:154)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:574)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:11:49,370][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [32] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:11:49,372][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [2] joins received)] took 30s above the warn threshold of 30s
[2016-02-05 10:12:19,374][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [33] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:12:19,376][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:12:19,383][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:12:49,386][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [34] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:12:49,390][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout] took 30s above the warn threshold of 30s
[2016-02-05 10:12:49,391][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:12:49,396][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:12:49,396][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:12:49,398][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:12:49,401][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-02-05 10:12:52,423][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received)
[2016-02-05 10:12:55,497][DEBUG][action.admin.cluster.state] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:12:55,510][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:13:15,558][DEBUG][action.admin.cluster.state] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:13:15,558][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry
[2016-02-05 10:13:22,424][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [36] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:13:22,426][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [2] joins received)] took 30s above the warn threshold of 30s
[2016-02-05 10:13:52,427][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [37] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:13:52,429][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s
[2016-02-05 10:13:52,433][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:14:22,435][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [38] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])
[2016-02-05 10:14:22,440][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout] took 30s above the warn threshold of 30s
[2016-02-05 10:14:22,444][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2016-02-05 10:14:22,451][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state])
[2016-02-05 10:14:22,451][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},}
[2016-02-05 10:14:22,453][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]]
    at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:14:22,454][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]] | |
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]] | |
at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52) | |
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:14:25,479][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received) | |
[2016-02-05 10:14:26,315][DEBUG][action.admin.cluster.state] [Alex Wilder] no known master node, scheduling a retry | |
[2016-02-05 10:14:26,320][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry | |
[2016-02-05 10:14:52,455][INFO ][rest.suppressed ] /_cluster/state Params: {settings_filter=cloud.key,cloud.account,cloud.aws.access_key,cloud.aws.secret_key,access_key,secret_key,cloud.key,cloud.account,cloud.aws.access_key,cloud.aws.secret_key,access_key,secret_key} | |
MasterNotDiscoveredException[waited for [30s]] | |
at org.elasticsearch.action.support.master.TransportMasterNodeAction$4.onTimeout(TransportMasterNodeAction.java:154) | |
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239) | |
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:574) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:14:52,461][INFO ][rest.suppressed ] /_cluster/health Params: {} | |
MasterNotDiscoveredException[waited for [30s]] | |
at org.elasticsearch.action.support.master.TransportMasterNodeAction$4.onTimeout(TransportMasterNodeAction.java:154) | |
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239) | |
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:574) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:14:52,463][INFO ][rest.suppressed ] /_cluster/health Params: {} | |
MasterNotDiscoveredException[waited for [30s]] | |
at org.elasticsearch.action.support.master.TransportMasterNodeAction$4.onTimeout(TransportMasterNodeAction.java:154) | |
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239) | |
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:574) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:14:52,464][INFO ][rest.suppressed ] /_cluster/state Params: {settings_filter=cloud.key,cloud.account,cloud.aws.access_key,cloud.aws.secret_key,access_key,secret_key,cloud.key,cloud.account,cloud.aws.access_key,cloud.aws.secret_key,access_key,secret_key} | |
MasterNotDiscoveredException[waited for [30s]] | |
at org.elasticsearch.action.support.master.TransportMasterNodeAction$4.onTimeout(TransportMasterNodeAction.java:154) | |
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239) | |
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:574) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:14:52,497][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry | |
[2016-02-05 10:14:52,501][DEBUG][action.admin.cluster.state] [Alex Wilder] no known master node, scheduling a retry | |
[2016-02-05 10:14:55,481][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [40] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:14:55,484][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(elected_as_master, [2] joins received)] took 30s above the warn threshold of 30s | |
[2016-02-05 10:15:25,485][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [41] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}, {Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:15:25,487][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-receive(from master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}])] took 30s above the warn threshold of 30s | |
[2016-02-05 10:15:25,492][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout | |
[2016-02-05 10:15:55,495][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [42] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:15:55,499][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout] took 30s above the warn threshold of 30s | |
[2016-02-05 10:15:55,500][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout | |
[2016-02-05 10:15:55,503][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state]) | |
[2016-02-05 10:15:55,504][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},} | |
[2016-02-05 10:15:55,506][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]] | |
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]] | |
at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52) | |
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:15:55,507][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]] | |
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]] | |
at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52) | |
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:15:58,541][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received) | |
[2016-02-05 10:15:58,545][INFO ][cluster.service ] [Alex Wilder] added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(join from node[{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:16:28,546][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [45] (timeout [30s], pending nodes: [{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:16:28,550][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(join from node[{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}])] took 30s above the warn threshold of 30s | |
[2016-02-05 10:16:28,551][INFO ][cluster.service ] [Alex Wilder] removed {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout | |
[2016-02-05 10:16:28,556][INFO ][cluster.service ] [Alex Wilder] added {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:16:58,557][WARN ][discovery.zen.publish ] [Alex Wilder] timed out waiting for all nodes to process published state [47] (timeout [30s], pending nodes: [{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}]) | |
[2016-02-05 10:16:58,558][WARN ][cluster.service ] [Alex Wilder] cluster state update task [zen-disco-join(join from node[{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}])] took 30s above the warn threshold of 30s | |
[2016-02-05 10:16:58,559][INFO ][cluster.service ] [Alex Wilder] removed {{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-node_failed({Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300}), reason failed to ping, tried [3] times, each with maximum [30s] timeout | |
[2016-02-05 10:16:58,562][WARN ][discovery.zen ] [Alex Wilder] discovered [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] which is also master but with an older cluster_state, telling [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}] to rejoin the cluster ([via a new cluster state]) | |
[2016-02-05 10:16:58,563][WARN ][discovery.zen ] [Alex Wilder] received a request to rejoin the cluster from [T_BXQ4tYRP22xK5nz5VGww], current nodes: {{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300},} | |
[2016-02-05 10:16:58,566][ERROR][discovery.zen ] [Alex Wilder] unexpected failure during [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]] | |
EsRejectedExecutionException[no longer master. source: [zen-disco-master_receive_cluster_state_from_another_master [{Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}]]] | |
at org.elasticsearch.cluster.ClusterStateUpdateTask.onNoLongerMaster(ClusterStateUpdateTask.java:52) | |
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:382) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231) | |
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) | |
at java.lang.Thread.run(Thread.java:745) | |
[2016-02-05 10:17:01,583][INFO ][cluster.service ] [Alex Wilder] new_master {Alex Wilder}{T_BXQ4tYRP22xK5nz5VGww}{172.17.0.2}{172.17.0.2:9300}, added {{Dragonwing}{iwm9oJnBQv6cP0Z3W0vEkQ}{127.0.0.1}{127.0.0.1:9300},{Jeffrey Mace}{fMroFJgpTZ2PaqfLH-JoEw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received) | |
[2016-02-05 10:17:02,598][DEBUG][action.admin.cluster.health] [Alex Wilder] no known master node, scheduling a retry |