@joune
Created July 16, 2015 15:41
[2015-07-16 15:31:25,707][INFO ][node ] [Scream] version[1.6.0], pid[1], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-07-16 15:31:25,708][INFO ][node ] [Scream] initializing ...
[2015-07-16 15:31:25,714][INFO ][plugins ] [Scream] loaded [], sites []
[2015-07-16 15:31:25,749][INFO ][env ] [Scream] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/fedora_fr--ws--152-root)]], net usable_space [1.5gb], net total_space [49gb], types [ext4]
[2015-07-16 15:31:28,139][INFO ][node ] [Scream] initialized
[2015-07-16 15:31:28,140][INFO ][node ] [Scream] starting ...
[2015-07-16 15:31:28,401][INFO ][transport ] [Scream] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.51:9300]}
[2015-07-16 15:31:28,419][INFO ][discovery ] [Scream] elasticsearch/5_NiA8akT5Gj2Zpe_G4J9w
[2015-07-16 15:31:31,457][INFO ][cluster.service ] [Scream] new_master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-07-16 15:31:31,468][INFO ][http ] [Scream] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.51:9200]}
[2015-07-16 15:31:31,469][INFO ][node ] [Scream] started
[2015-07-16 15:31:31,945][INFO ][cluster.service ] [Scream] added {[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true},}, reason: zen-disco-receive(join from node[[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])
[2015-07-16 15:31:32,018][INFO ][gateway ] [Scream] recovered [0] indices into cluster_state
[2015-07-16 15:31:32,343][INFO ][cluster.service ] [Scream] added {[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true},}, reason: zen-disco-receive(join from node[[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true}])
[2015-07-16 15:31:33,246][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:31:33,246][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:31:33,246][DEBUG][cluster.service ] [Scream] processing [cluster_update_settings]: took 6ms done applying updated cluster_state (version: 5)
[2015-07-16 15:31:33,246][DEBUG][cluster.service ] [Scream] processing [reroute_after_cluster_update_settings]: execute
[2015-07-16 15:31:33,247][DEBUG][cluster.service ] [Scream] processing [reroute_after_cluster_update_settings]: took 0s no change in cluster_state
[2015-07-16 15:31:36,515][DEBUG][cluster.service ] [Scream] processing [cluster_update_settings]: execute
[2015-07-16 15:31:36,516][DEBUG][cluster.service ] [Scream] cluster state updated, version [6], source [cluster_update_settings]
[2015-07-16 15:31:36,516][DEBUG][cluster.service ] [Scream] publishing cluster state version 6
[2015-07-16 15:31:36,528][DEBUG][cluster.service ] [Scream] set local cluster state to version 6
[2015-07-16 15:31:36,529][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:31:36,529][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:31:36,529][DEBUG][cluster.service ] [Scream] processing [cluster_update_settings]: took 14ms done applying updated cluster_state (version: 6)
[2015-07-16 15:31:36,529][DEBUG][cluster.service ] [Scream] processing [reroute_after_cluster_update_settings]: execute
[2015-07-16 15:31:36,530][DEBUG][cluster.service ] [Scream] processing [reroute_after_cluster_update_settings]: took 0s no change in cluster_state
[2015-07-16 15:31:41,458][DEBUG][cluster.service ] [Scream] processing [routing-table-updater]: execute
[2015-07-16 15:31:41,459][DEBUG][cluster.service ] [Scream] processing [routing-table-updater]: took 0s no change in cluster_state
[2015-07-16 15:32:39,566][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:39,569][DEBUG][indices ] [Scream] creating Index [test], shards [5]/[1]
[2015-07-16 15:32:39,709][DEBUG][index.mapper ] [Scream] [test] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/usr/share/elasticsearch/lib/elasticsearch-1.6.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]
[2015-07-16 15:32:39,709][DEBUG][index.cache.query.parser.resident] [Scream] [test] using [resident] query cache with max_size [100], expire [null]
[2015-07-16 15:32:39,714][DEBUG][index.store.fs ] [Scream] [test] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2015-07-16 15:32:39,750][INFO ][cluster.metadata ] [Scream] [test] creating index, cause [auto(update api)], templates [], shards [5]/[1], mappings []
[2015-07-16 15:32:39,761][DEBUG][indices ] [Scream] [test] closing ... (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,762][DEBUG][indices ] [Scream] [test] closing index service (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,763][DEBUG][indices ] [Scream] [test] closing index cache (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,763][DEBUG][index.cache.filter.weighted] [Scream] [test] full cache clear, reason [close]
[2015-07-16 15:32:39,763][DEBUG][index.cache.fixedbitset ] [Scream] [test] clearing all bitsets because [close]
[2015-07-16 15:32:39,763][DEBUG][indices ] [Scream] [test] clearing index field data (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,763][DEBUG][indices ] [Scream] [test] closing analysis service (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][indices ] [Scream] [test] closing index engine (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][indices ] [Scream] [test] closing index gateway (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][indices ] [Scream] [test] closing mapper service (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][indices ] [Scream] [test] closing index query parser service (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][indices ] [Scream] [test] closing index service (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][indices ] [Scream] [test] closed... (reason [cleaning up after validating index on master])
[2015-07-16 15:32:39,764][DEBUG][cluster.service ] [Scream] cluster state updated, version [7], source [create-index [test], cause [auto(update api)]]
[2015-07-16 15:32:39,764][DEBUG][cluster.service ] [Scream] publishing cluster state version 7
[2015-07-16 15:32:40,039][DEBUG][cluster.service ] [Scream] set local cluster state to version 7
[2015-07-16 15:32:40,040][DEBUG][indices.cluster ] [Scream] [test] creating index
[2015-07-16 15:32:40,040][DEBUG][indices ] [Scream] creating Index [test], shards [5]/[1]
[2015-07-16 15:32:40,054][DEBUG][index.mapper ] [Scream] [test] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/usr/share/elasticsearch/lib/elasticsearch-1.6.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]
[2015-07-16 15:32:40,054][DEBUG][index.cache.query.parser.resident] [Scream] [test] using [resident] query cache with max_size [100], expire [null]
[2015-07-16 15:32:40,055][DEBUG][index.store.fs ] [Scream] [test] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2015-07-16 15:32:40,056][DEBUG][indices.cluster ] [Scream] [test][3] creating shard
[2015-07-16 15:32:40,057][DEBUG][index ] [Scream] [test] creating shard_id [test][3]
[2015-07-16 15:32:40,099][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][0], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,100][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][4], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,100][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][2], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,124][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/index] as shard's index location
[2015-07-16 15:32:40,128][DEBUG][index.store ] [Scream] [test][3] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,128][DEBUG][index.merge.scheduler ] [Scream] [test][3] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,129][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog] as shard's translog location
[2015-07-16 15:32:40,131][DEBUG][index.deletionpolicy ] [Scream] [test][3] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,132][DEBUG][index.merge.policy ] [Scream] [test][3] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,133][DEBUG][index.shard ] [Scream] [test][3] state: [CREATED]
[2015-07-16 15:32:40,133][DEBUG][index.shard ] [Scream] [test][3] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,134][DEBUG][index.translog ] [Scream] [test][3] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,137][DEBUG][index.shard ] [Scream] [test][3] state: [CREATED]->[RECOVERING], reason [from gateway]
[2015-07-16 15:32:40,137][DEBUG][indices.cluster ] [Scream] [test][1] creating shard
[2015-07-16 15:32:40,137][DEBUG][index.gateway ] [Scream] [test][3] starting recovery from local ...
[2015-07-16 15:32:40,138][DEBUG][index ] [Scream] [test] creating shard_id [test][1]
[2015-07-16 15:32:40,144][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/index] as shard's index location
[2015-07-16 15:32:40,144][DEBUG][index.store ] [Scream] [test][1] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,144][DEBUG][index.merge.scheduler ] [Scream] [test][1] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,144][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog] as shard's translog location
[2015-07-16 15:32:40,145][DEBUG][index.deletionpolicy ] [Scream] [test][1] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,146][DEBUG][index.merge.policy ] [Scream] [test][1] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,146][DEBUG][index.shard ] [Scream] [test][1] state: [CREATED]
[2015-07-16 15:32:40,146][DEBUG][index.shard ] [Scream] [test][1] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,146][DEBUG][index.translog ] [Scream] [test][1] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,147][DEBUG][index.shard ] [Scream] [test][1] state: [CREATED]->[RECOVERING], reason [from gateway]
[2015-07-16 15:32:40,147][DEBUG][index.gateway ] [Scream] [test][1] starting recovery from local ...
[2015-07-16 15:32:40,148][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,148][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,153][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: took 586ms done applying updated cluster_state (version: 7)
[2015-07-16 15:32:40,230][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][0], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2015-07-16 15:32:40,230][DEBUG][cluster.action.shard ] [Scream] [test][0] will apply shard started [test][0], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,230][DEBUG][cluster.action.shard ] [Scream] [test][4] will apply shard started [test][4], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,230][DEBUG][cluster.action.shard ] [Scream] [test][2] will apply shard started [test][2], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,231][DEBUG][cluster.service ] [Scream] cluster state updated, version [8], source [shard-started ([test][0], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2015-07-16 15:32:40,231][DEBUG][cluster.service ] [Scream] publishing cluster state version 8
[2015-07-16 15:32:40,231][DEBUG][index.engine ] [Scream] [test][3] [[test][3]] skipping check for 3x segments
[2015-07-16 15:32:40,234][DEBUG][index.engine ] [Scream] [test][1] [[test][1]] skipping check for 3x segments
[2015-07-16 15:32:40,280][DEBUG][cluster.service ] [Scream] set local cluster state to version 8
[2015-07-16 15:32:40,281][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][0], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]: took 51ms done applying updated cluster_state (version: 8)
[2015-07-16 15:32:40,281][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][4], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2015-07-16 15:32:40,281][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][4], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]: took 0s no change in cluster_state
[2015-07-16 15:32:40,281][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][2], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2015-07-16 15:32:40,281][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][2], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING]), reason [after recovery from gateway]]: took 0s no change in cluster_state
[2015-07-16 15:32:40,281][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,282][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,282][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,282][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,282][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,282][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,282][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,283][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,283][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,283][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,283][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,283][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,283][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,283][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,283][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,284][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,285][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,285][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,285][DEBUG][cluster.service ] [Scream] processing [create-index [test], cause [auto(update api)]]: execute
[2015-07-16 15:32:40,323][TRACE][index.translog.fs ] [Scream] [test][1] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760276
[2015-07-16 15:32:40,331][TRACE][index.translog.fs ] [Scream] [test][1] created new translog id: 1437060760276
[2015-07-16 15:32:40,333][TRACE][index.translog.fs ] [Scream] [test][3] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760276
[2015-07-16 15:32:40,333][TRACE][index.translog.fs ] [Scream] [test][3] created new translog id: 1437060760276
[2015-07-16 15:32:40,333][DEBUG][index.shard ] [Scream] [test][1] scheduling refresher every 1s
[2015-07-16 15:32:40,334][DEBUG][index.shard ] [Scream] [test][3] scheduling refresher every 1s
[2015-07-16 15:32:40,335][DEBUG][index.shard ] [Scream] [test][3] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from gateway, no translog]
[2015-07-16 15:32:40,335][DEBUG][index.shard ] [Scream] [test][1] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from gateway, no translog]
[2015-07-16 15:32:40,335][DEBUG][index.gateway ] [Scream] [test][3] recovery completed from [local], took [198ms]
[2015-07-16 15:32:40,335][DEBUG][index.gateway ] [Scream] [test][1] recovery completed from [local], took [188ms]
[2015-07-16 15:32:40,335][DEBUG][cluster.action.shard ] [Scream] sending shard started for [test][3], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,335][DEBUG][cluster.action.shard ] [Scream] sending shard started for [test][1], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,335][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][1], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,335][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][3], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,335][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][1], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2015-07-16 15:32:40,335][DEBUG][cluster.action.shard ] [Scream] [test][1] will apply shard started [test][1], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,335][DEBUG][cluster.action.shard ] [Scream] [test][3] will apply shard started [test][3], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,342][DEBUG][cluster.service ] [Scream] cluster state updated, version [9], source [shard-started ([test][1], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2015-07-16 15:32:40,342][DEBUG][cluster.service ] [Scream] publishing cluster state version 9
[2015-07-16 15:32:40,351][DEBUG][cluster.service ] [Scream] set local cluster state to version 9
[2015-07-16 15:32:40,357][DEBUG][index.shard ] [Scream] [test][3] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,361][DEBUG][indices.store ] [Scream] [test][1] loaded store meta data (took [4.3ms])
[2015-07-16 15:32:40,362][DEBUG][indices.store ] [Scream] [test][3] loaded store meta data (took [5.2ms])
[2015-07-16 15:32:40,363][DEBUG][index.shard ] [Scream] [test][1] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,363][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,363][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,400][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][1], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING]), reason [after recovery from gateway]]: took 64ms done applying updated cluster_state (version: 9)
[2015-07-16 15:32:40,400][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][3], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2015-07-16 15:32:40,400][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][3], node[5_NiA8akT5Gj2Zpe_G4J9w], [P], s[INITIALIZING]), reason [after recovery from gateway]]: took 0s no change in cluster_state
[2015-07-16 15:32:40,400][DEBUG][cluster.service ] [Scream] processing [async_shard_fetch]: execute
[2015-07-16 15:32:40,405][DEBUG][cluster.service ] [Scream] cluster state updated, version [10], source [async_shard_fetch]
[2015-07-16 15:32:40,405][DEBUG][cluster.service ] [Scream] publishing cluster state version 10
[2015-07-16 15:32:40,438][DEBUG][indices.recovery ] [Scream] delaying recovery of [test][3] as it is not listed as assigned to target node [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}
[2015-07-16 15:32:40,439][DEBUG][indices.recovery ] [Scream] delaying recovery of [test][1] as it is not listed as assigned to target node [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}
[2015-07-16 15:32:40,446][DEBUG][cluster.service ] [Scream] set local cluster state to version 10
[2015-07-16 15:32:40,447][DEBUG][indices.cluster ] [Scream] [test][2] creating shard
[2015-07-16 15:32:40,447][DEBUG][index ] [Scream] [test] creating shard_id [test][2]
[2015-07-16 15:32:40,453][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/index] as shard's index location
[2015-07-16 15:32:40,454][DEBUG][index.store ] [Scream] [test][2] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,454][DEBUG][index.merge.scheduler ] [Scream] [test][2] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,455][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog] as shard's translog location
[2015-07-16 15:32:40,455][DEBUG][index.deletionpolicy ] [Scream] [test][2] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,456][DEBUG][index.merge.policy ] [Scream] [test][2] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,459][DEBUG][index.shard ] [Scream] [test][2] state: [CREATED]
[2015-07-16 15:32:40,459][DEBUG][index.shard ] [Scream] [test][2] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,459][DEBUG][index.translog ] [Scream] [test][2] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,461][DEBUG][index.shard ] [Scream] [test][2] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:32:40,463][DEBUG][indices.cluster ] [Scream] [test][0] creating shard
[2015-07-16 15:32:40,463][DEBUG][index ] [Scream] [test] creating shard_id [test][0]
[2015-07-16 15:32:40,468][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/index] as shard's index location
[2015-07-16 15:32:40,469][DEBUG][index.store ] [Scream] [test][0] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,469][DEBUG][index.merge.scheduler ] [Scream] [test][0] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,470][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog] as shard's translog location
[2015-07-16 15:32:40,470][DEBUG][index.deletionpolicy ] [Scream] [test][0] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,471][DEBUG][index.merge.policy ] [Scream] [test][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,471][DEBUG][index.shard ] [Scream] [test][0] state: [CREATED]
[2015-07-16 15:32:40,471][DEBUG][index.shard ] [Scream] [test][0] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,471][DEBUG][index.translog ] [Scream] [test][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,472][DEBUG][index.shard ] [Scream] [test][0] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:32:40,472][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,473][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,490][DEBUG][cluster.service ] [Scream] processing [async_shard_fetch]: took 90ms done applying updated cluster_state (version: 10)
[2015-07-16 15:32:40,506][DEBUG][index.store ] [Scream] [test][2] create legacy length-only output for recovery.544664450.segments_1
[2015-07-16 15:32:40,506][DEBUG][index.store ] [Scream] [test][0] create legacy length-only output for recovery.544664461.segments_1
[2015-07-16 15:32:40,528][DEBUG][index.engine ] [Scream] [test][2] [[test][2]] skipping check for 3x segments
[2015-07-16 15:32:40,530][DEBUG][index.engine ] [Scream] [test][0] [[test][0]] skipping check for 3x segments
[2015-07-16 15:32:40,531][TRACE][index.translog.fs ] [Scream] [test][2] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760066
[2015-07-16 15:32:40,531][TRACE][index.translog.fs ] [Scream] [test][2] created new translog id: 1437060760066
[2015-07-16 15:32:40,531][TRACE][index.translog.fs ] [Scream] [test][0] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760067
[2015-07-16 15:32:40,532][TRACE][index.translog.fs ] [Scream] [test][0] created new translog id: 1437060760067
[2015-07-16 15:32:40,535][DEBUG][index.shard ] [Scream] [test][2] scheduling refresher every 1s
[2015-07-16 15:32:40,535][DEBUG][index.shard ] [Scream] [test][0] scheduling refresher every 1s
[2015-07-16 15:32:40,537][DEBUG][index.shard ] [Scream] [test][0] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:32:40,537][DEBUG][index.shard ] [Scream] [test][2] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:32:40,537][DEBUG][cluster.action.shard ] [Scream] sending shard started for [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,537][DEBUG][cluster.action.shard ] [Scream] sending shard started for [test][2], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,537][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,537][DEBUG][indices.recovery ] [Scream] [test][0] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [65ms]
[2015-07-16 15:32:40,537][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][2], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,538][DEBUG][indices.recovery ] [Scream] [test][2] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [76ms]
[2015-07-16 15:32:40,538][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:32:40,538][DEBUG][cluster.action.shard ] [Scream] [test][0] will apply shard started [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,538][DEBUG][cluster.action.shard ] [Scream] [test][2] will apply shard started [test][2], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,539][DEBUG][cluster.service ] [Scream] cluster state updated, version [11], source [shard-started ([test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:32:40,539][DEBUG][cluster.service ] [Scream] publishing cluster state version 11
[2015-07-16 15:32:40,574][DEBUG][cluster.service ] [Scream] set local cluster state to version 11
[2015-07-16 15:32:40,575][DEBUG][indices.cluster ] [Scream] [test][4] creating shard
[2015-07-16 15:32:40,575][DEBUG][index ] [Scream] [test] creating shard_id [test][4]
[2015-07-16 15:32:40,580][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/index] as shard's index location
[2015-07-16 15:32:40,581][DEBUG][index.store ] [Scream] [test][4] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,581][DEBUG][index.merge.scheduler ] [Scream] [test][4] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,581][DEBUG][index.store.fs ] [Scream] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog] as shard's translog location
[2015-07-16 15:32:40,582][DEBUG][index.deletionpolicy ] [Scream] [test][4] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,582][DEBUG][index.merge.policy ] [Scream] [test][4] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,582][DEBUG][index.shard ] [Scream] [test][4] state: [CREATED]
[2015-07-16 15:32:40,582][DEBUG][index.shard ] [Scream] [test][4] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,582][DEBUG][index.translog ] [Scream] [test][4] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,583][DEBUG][index.shard ] [Scream] [test][4] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:32:40,584][DEBUG][index.shard ] [Scream] [test][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,584][DEBUG][index.shard ] [Scream] [test][2] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,584][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,584][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,588][DEBUG][index.store ] [Scream] [test][4] create legacy length-only output for recovery.544664572.segments_1
[2015-07-16 15:32:40,601][DEBUG][index.engine ] [Scream] [test][4] [[test][4]] skipping check for 3x segments
[2015-07-16 15:32:40,603][TRACE][index.translog.fs ] [Scream] [test][4] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760066
[2015-07-16 15:32:40,603][TRACE][index.translog.fs ] [Scream] [test][4] created new translog id: 1437060760066
[2015-07-16 15:32:40,605][DEBUG][index.shard ] [Scream] [test][4] scheduling refresher every 1s
[2015-07-16 15:32:40,607][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 68ms done applying updated cluster_state (version: 11)
[2015-07-16 15:32:40,607][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][2], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:32:40,607][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][2], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 0s no change in cluster_state
[2015-07-16 15:32:40,611][DEBUG][index.shard ] [Scream] [test][4] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:32:40,611][DEBUG][cluster.action.shard ] [Scream] sending shard started for [test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,611][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,611][DEBUG][indices.recovery ] [Scream] [test][4] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [28ms]
[2015-07-16 15:32:40,611][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:32:40,611][DEBUG][cluster.action.shard ] [Scream] [test][4] will apply shard started [test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:32:40,614][DEBUG][cluster.service ] [Scream] cluster state updated, version [12], source [shard-started ([test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:32:40,614][DEBUG][cluster.service ] [Scream] publishing cluster state version 12
[2015-07-16 15:32:40,633][DEBUG][cluster.service ] [Scream] set local cluster state to version 12
[2015-07-16 15:32:40,633][DEBUG][index.shard ] [Scream] [test][4] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,634][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,634][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,647][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 35ms done applying updated cluster_state (version: 12)
[2015-07-16 15:32:40,647][DEBUG][cluster.service ] [Scream] processing [update-mapping [test][test] / node [uv6YUup_TeW823PDdszbkw], order [1]]: execute
[2015-07-16 15:32:40,731][DEBUG][cluster.metadata ] [Scream] [test] update_mapping [test] (dynamic) with source [{"test":{"properties":{"d":{"type":"string"},"i":{"type":"long"}}}}]
[2015-07-16 15:32:40,734][DEBUG][cluster.service ] [Scream] cluster state updated, version [13], source [update-mapping [test][test] / node [uv6YUup_TeW823PDdszbkw], order [1]]
[2015-07-16 15:32:40,734][DEBUG][cluster.service ] [Scream] publishing cluster state version 13
[2015-07-16 15:32:40,757][DEBUG][cluster.service ] [Scream] set local cluster state to version 13
[2015-07-16 15:32:40,757][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:40,757][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:40,770][DEBUG][cluster.service ] [Scream] processing [update-mapping [test][test] / node [uv6YUup_TeW823PDdszbkw], order [1]]: took 122ms done applying updated cluster_state (version: 13)
[2015-07-16 15:32:40,771][DEBUG][cluster.service ] [Scream] processing [update-mapping [test][test] / node [uv6YUup_TeW823PDdszbkw], order [2]]: execute
[2015-07-16 15:32:40,771][DEBUG][cluster.service ] [Scream] processing [update-mapping [test][test] / node [uv6YUup_TeW823PDdszbkw], order [2]]: took 0s no change in cluster_state
[2015-07-16 15:32:41,008][DEBUG][cluster.service ] [Scream] processing [recovery_mapping_check]: execute
[2015-07-16 15:32:41,009][DEBUG][cluster.service ] [Scream] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:32:41,009][DEBUG][cluster.service ] [Scream] processing [recovery_mapping_check]: execute
[2015-07-16 15:32:41,009][DEBUG][cluster.service ] [Scream] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:32:41,016][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]
[2015-07-16 15:32:41,016][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]]: execute
[2015-07-16 15:32:41,016][DEBUG][cluster.action.shard ] [Scream] [test][3] will apply shard started [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]
[2015-07-16 15:32:41,016][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]
[2015-07-16 15:32:41,017][DEBUG][cluster.service ] [Scream] cluster state updated, version [14], source [shard-started ([test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]]
[2015-07-16 15:32:41,017][DEBUG][cluster.service ] [Scream] publishing cluster state version 14
[2015-07-16 15:32:41,020][DEBUG][cluster.action.shard ] [Scream] received shard started for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]
[2015-07-16 15:32:41,027][DEBUG][cluster.service ] [Scream] set local cluster state to version 14
[2015-07-16 15:32:41,029][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:41,029][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:41,061][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]]: took 44ms done applying updated cluster_state (version: 14)
[2015-07-16 15:32:41,061][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]]: execute
[2015-07-16 15:32:41,061][DEBUG][cluster.action.shard ] [Scream] [test][1] will apply shard started [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]
[2015-07-16 15:32:41,061][DEBUG][cluster.action.shard ] [Scream] [test][1] will apply shard started [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]
[2015-07-16 15:32:41,062][DEBUG][cluster.service ] [Scream] cluster state updated, version [15], source [shard-started ([test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]]
[2015-07-16 15:32:41,062][DEBUG][cluster.service ] [Scream] publishing cluster state version 15
[2015-07-16 15:32:41,083][DEBUG][cluster.service ] [Scream] set local cluster state to version 15
[2015-07-16 15:32:41,084][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:32:41,084][DEBUG][river.cluster ] [Scream] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:32:41,105][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]]: took 44ms done applying updated cluster_state (version: 15)
[2015-07-16 15:32:41,105][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]: execute
[2015-07-16 15:32:41,106][DEBUG][cluster.service ] [Scream] processing [shard-started ([test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING]), reason [master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]: took 0s no change in cluster_state
[2015-07-16 15:32:45,143][TRACE][index.translog.fs ] [Scream] [test][3] sync translog buffered{id=1437060760276, operationCounter=843}
[2015-07-16 15:32:45,161][TRACE][index.translog.fs ] [Scream] [test][1] sync translog buffered{id=1437060760276, operationCounter=834}
[2015-07-16 15:32:45,462][TRACE][index.translog.fs ] [Scream] [test][2] sync translog buffered{id=1437060760066, operationCounter=965}
[2015-07-16 15:32:45,484][TRACE][index.translog.fs ] [Scream] [test][0] sync translog buffered{id=1437060760067, operationCounter=945}
[2015-07-16 15:32:45,583][TRACE][index.translog.fs ] [Scream] [test][4] sync translog buffered{id=1437060760066, operationCounter=986}
[2015-07-16 15:32:50,154][TRACE][index.translog.fs ] [Scream] [test][3] sync translog buffered{id=1437060760276, operationCounter=2843}
[2015-07-16 15:32:50,174][TRACE][index.translog.fs ] [Scream] [test][1] sync translog buffered{id=1437060760276, operationCounter=2804}
[2015-07-16 15:32:50,474][TRACE][index.translog.fs ] [Scream] [test][2] sync translog buffered{id=1437060760066, operationCounter=2973}
[2015-07-16 15:32:50,493][TRACE][index.translog.fs ] [Scream] [test][0] sync translog buffered{id=1437060760067, operationCounter=2988}
[2015-07-16 15:32:50,625][TRACE][index.translog.fs ] [Scream] [test][4] sync translog buffered{id=1437060760066, operationCounter=3049}
[2015-07-16 15:32:55,172][TRACE][index.translog.fs ] [Scream] [test][3] sync translog buffered{id=1437060760276, operationCounter=5847}
[2015-07-16 15:32:55,187][TRACE][index.translog.fs ] [Scream] [test][1] sync translog buffered{id=1437060760276, operationCounter=5849}
[2015-07-16 15:32:55,488][TRACE][index.translog.fs ] [Scream] [test][2] sync translog buffered{id=1437060760066, operationCounter=6120}
[2015-07-16 15:32:55,508][TRACE][index.translog.fs ] [Scream] [test][0] sync translog buffered{id=1437060760067, operationCounter=6110}
[2015-07-16 15:32:55,643][TRACE][index.translog.fs ] [Scream] [test][4] sync translog buffered{id=1437060760066, operationCounter=6174}
[2015-07-16 15:32:58,158][DEBUG][indices.memory ] [Scream] recalculating shard indexing buffer (reason=[[ADDED]]), total is [98.9mb] with [5] active shards, each shard set to indexing=[19.7mb], translog=[64kb]
[2015-07-16 15:32:58,158][DEBUG][index.shard ] [Scream] [test][0] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,158][DEBUG][index.shard ] [Scream] [test][1] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,158][DEBUG][index.shard ] [Scream] [test][2] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,158][DEBUG][index.shard ] [Scream] [test][3] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,158][DEBUG][index.shard ] [Scream] [test][4] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:00,333][TRACE][index.translog.fs ] [Scream] [test][3] sync translog buffered{id=1437060760276, operationCounter=10841}
[2015-07-16 15:33:00,335][TRACE][index.translog.fs ] [Scream] [test][1] sync translog buffered{id=1437060760276, operationCounter=10820}
[2015-07-16 15:33:00,628][TRACE][index.translog.fs ] [Scream] [test][0] sync translog buffered{id=1437060760067, operationCounter=10976}
[2015-07-16 15:33:00,628][TRACE][index.translog.fs ] [Scream] [test][2] sync translog buffered{id=1437060760066, operationCounter=10952}
[2015-07-16 15:33:00,706][TRACE][index.translog.fs ] [Scream] [test][4] sync translog buffered{id=1437060760066, operationCounter=11051}
[2015-07-16 15:33:01,179][INFO ][node ] [Scream] stopping ...
[2015-07-16 15:33:01,321][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,342][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,368][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,342][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,341][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,339][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,380][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,339][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,381][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,335][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,382][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,335][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,382][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,335][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,335][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,383][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,323][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,322][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,322][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,389][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,322][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,390][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,390][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,389][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,386][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,376][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,376][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,369][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,396][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,369][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,355][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][3]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,355][WARN ][action.index ] [Scream] failed to perform indices:data/write/index on remote replica [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}[test][1]
org.elasticsearch.transport.SendRequestTransportException: [Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:286)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.performOnReplica(TransportShardReplicationOperationAction.java:877)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicationPhase.doRun(TransportShardReplicationOperationAction.java:854)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAndMoveToReplication(TransportShardReplicationOperationAction.java:525)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:603)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.routeRequestOrPerformLocally(TransportShardReplicationOperationAction.java:444)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.doRun(TransportShardReplicationOperationAction.java:370)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:112)
at org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:136)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:114)
at org.elasticsearch.action.index.TransportIndexAction.doExecute(TransportIndexAction.java:63)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:182)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:270)
... 19 more
[2015-07-16 15:33:01,402][WARN ][cluster.action.shard ] [Scream] [test][1] received shard failed for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,354][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,401][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,401][WARN ][cluster.action.shard ] [Scream] [test][3] received shard failed for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[STARTED], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [Failed to perform [indices:data/write/index] on replica, message [SendRequestTransportException[[Ringo Kid][inet[/172.17.0.52:9300]][indices:data/write/index[r]]]; nested: TransportException[TransportService is closed stopped can't send request]; ]]
[2015-07-16 15:33:01,396][DEBUG][indices ] [Scream] [test] closing ... (reason [shutdown])
[2015-07-16 15:33:01,406][DEBUG][indices ] [Scream] [test] closing index service (reason [shutdown])
[2015-07-16 15:33:01,406][DEBUG][index ] [Scream] [test] [0] closing... (reason: [shutdown])
[2015-07-16 15:33:01,408][DEBUG][index.shard ] [Scream] [test][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2015-07-16 15:33:01,408][DEBUG][index.shard ] [Scream] [test][0] operations counter reached 0, will not accept any further writes
[2015-07-16 15:33:01,408][DEBUG][index.engine ] [Scream] [test][0] flushing shard on close - this might take some time to sync files to disk
[2015-07-16 15:33:01,408][TRACE][index.translog.fs ] [Scream] [test][0] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760068
[2015-07-16 15:33:01,409][TRACE][index.translog.fs ] [Scream] [test][0] created new transient translog id: 1437060760068
[2015-07-16 15:33:02,033][TRACE][index.translog.fs ] [Scream] [test][0] make transient current buffered{id=1437060760067, operationCounter=11528}
[2015-07-16 15:33:02,033][TRACE][index.translog.fs ] [Scream] [test][0] closing RAF reference delete: true length: 12315798 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760067
[2015-07-16 15:33:02,038][DEBUG][index.engine ] [Scream] [test][0] close now acquiring writeLock
[2015-07-16 15:33:02,038][DEBUG][index.engine ] [Scream] [test][0] close acquired writeLock
[2015-07-16 15:33:02,038][TRACE][index.translog.fs ] [Scream] [test][0] sync translog buffered{id=1437060760068, operationCounter=0}
[2015-07-16 15:33:02,040][DEBUG][index.engine ] [Scream] [test][0] engine closed [api]
[2015-07-16 15:33:02,040][TRACE][index.translog.fs ] [Scream] [test][0] closing RAF reference delete: false length: 17 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760068
[2015-07-16 15:33:02,040][DEBUG][index ] [Scream] [test] [0] closed (reason: [shutdown])
[2015-07-16 15:33:02,040][DEBUG][index ] [Scream] [test] [1] closing... (reason: [shutdown])
[2015-07-16 15:33:02,040][DEBUG][index.shard ] [Scream] [test][1] state: [STARTED]->[CLOSED], reason [shutdown]
[2015-07-16 15:33:02,040][DEBUG][index.shard ] [Scream] [test][1] operations counter reached 0, will not accept any further writes
[2015-07-16 15:33:02,040][DEBUG][index.engine ] [Scream] [test][1] flushing shard on close - this might take some time to sync files to disk
[2015-07-16 15:33:02,040][TRACE][index.translog.fs ] [Scream] [test][1] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760277
[2015-07-16 15:33:02,041][TRACE][index.translog.fs ] [Scream] [test][1] created new transient translog id: 1437060760277
[2015-07-16 15:33:02,218][TRACE][index.translog.fs ] [Scream] [test][1] make transient current buffered{id=1437060760276, operationCounter=11502}
[2015-07-16 15:33:02,218][TRACE][index.translog.fs ] [Scream] [test][1] closing RAF reference delete: true length: 12279460 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760276
[2015-07-16 15:33:02,220][DEBUG][index.engine ] [Scream] [test][1] close now acquiring writeLock
[2015-07-16 15:33:02,220][DEBUG][index.engine ] [Scream] [test][1] close acquired writeLock
[2015-07-16 15:33:02,220][TRACE][index.translog.fs ] [Scream] [test][1] sync translog buffered{id=1437060760277, operationCounter=0}
[2015-07-16 15:33:02,222][DEBUG][index.engine ] [Scream] [test][1] engine closed [api]
[2015-07-16 15:33:02,222][TRACE][index.translog.fs ] [Scream] [test][1] closing RAF reference delete: false length: 17 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760277
[2015-07-16 15:33:02,222][DEBUG][index ] [Scream] [test] [1] closed (reason: [shutdown])
[2015-07-16 15:33:02,222][DEBUG][index ] [Scream] [test] [2] closing... (reason: [shutdown])
[2015-07-16 15:33:02,223][DEBUG][index.shard ] [Scream] [test][2] state: [STARTED]->[CLOSED], reason [shutdown]
[2015-07-16 15:33:02,223][DEBUG][index.shard ] [Scream] [test][2] operations counter reached 0, will not accept any further writes
[2015-07-16 15:33:02,223][DEBUG][index.engine ] [Scream] [test][2] flushing shard on close - this might take some time to sync files to disk
[2015-07-16 15:33:02,223][TRACE][index.translog.fs ] [Scream] [test][2] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760067
[2015-07-16 15:33:02,223][TRACE][index.translog.fs ] [Scream] [test][2] created new transient translog id: 1437060760067
[2015-07-16 15:33:02,372][TRACE][index.translog.fs ] [Scream] [test][2] make transient current buffered{id=1437060760066, operationCounter=11486}
[2015-07-16 15:33:02,373][TRACE][index.translog.fs ] [Scream] [test][2] closing RAF reference delete: true length: 12224933 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760066
[2015-07-16 15:33:02,375][DEBUG][index.engine ] [Scream] [test][2] close now acquiring writeLock
[2015-07-16 15:33:02,375][DEBUG][index.engine ] [Scream] [test][2] close acquired writeLock
[2015-07-16 15:33:02,375][TRACE][index.translog.fs ] [Scream] [test][2] sync translog buffered{id=1437060760067, operationCounter=0}
[2015-07-16 15:33:02,377][DEBUG][index.engine ] [Scream] [test][2] engine closed [api]
[2015-07-16 15:33:02,377][TRACE][index.translog.fs ] [Scream] [test][2] closing RAF reference delete: false length: 17 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760067
[2015-07-16 15:33:02,377][DEBUG][index ] [Scream] [test] [2] closed (reason: [shutdown])
[2015-07-16 15:33:02,377][DEBUG][index ] [Scream] [test] [3] closing... (reason: [shutdown])
[2015-07-16 15:33:02,377][DEBUG][index.shard ] [Scream] [test][3] state: [STARTED]->[CLOSED], reason [shutdown]
[2015-07-16 15:33:02,377][DEBUG][index.shard ] [Scream] [test][3] operations counter reached 0, will not accept any further writes
[2015-07-16 15:33:02,377][DEBUG][index.engine ] [Scream] [test][3] flushing shard on close - this might take some time to sync files to disk
[2015-07-16 15:33:02,377][TRACE][index.translog.fs ] [Scream] [test][3] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760277
[2015-07-16 15:33:02,377][TRACE][index.translog.fs ] [Scream] [test][3] created new transient translog id: 1437060760277
[2015-07-16 15:33:02,485][TRACE][index.translog.fs ] [Scream] [test][3] make transient current buffered{id=1437060760276, operationCounter=11516}
[2015-07-16 15:33:02,485][TRACE][index.translog.fs ] [Scream] [test][3] closing RAF reference delete: true length: 12301901 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760276
[2015-07-16 15:33:02,487][DEBUG][index.engine ] [Scream] [test][3] close now acquiring writeLock
[2015-07-16 15:33:02,487][DEBUG][index.engine ] [Scream] [test][3] close acquired writeLock
[2015-07-16 15:33:02,488][TRACE][index.translog.fs ] [Scream] [test][3] sync translog buffered{id=1437060760277, operationCounter=0}
[2015-07-16 15:33:02,489][DEBUG][index.engine ] [Scream] [test][3] engine closed [api]
[2015-07-16 15:33:02,489][TRACE][index.translog.fs ] [Scream] [test][3] closing RAF reference delete: false length: 17 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760277
[2015-07-16 15:33:02,489][DEBUG][index ] [Scream] [test] [3] closed (reason: [shutdown])
[2015-07-16 15:33:02,489][DEBUG][index ] [Scream] [test] [4] closing... (reason: [shutdown])
[2015-07-16 15:33:02,489][DEBUG][index.shard ] [Scream] [test][4] state: [STARTED]->[CLOSED], reason [shutdown]
[2015-07-16 15:33:02,489][DEBUG][index.shard ] [Scream] [test][4] operations counter reached 0, will not accept any further writes
[2015-07-16 15:33:02,489][DEBUG][index.engine ] [Scream] [test][4] flushing shard on close - this might take some time to sync files to disk
[2015-07-16 15:33:02,489][TRACE][index.translog.fs ] [Scream] [test][4] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760067
[2015-07-16 15:33:02,489][TRACE][index.translog.fs ] [Scream] [test][4] created new transient translog id: 1437060760067
[2015-07-16 15:33:02,577][TRACE][index.translog.fs ] [Scream] [test][4] make transient current buffered{id=1437060760066, operationCounter=11516}
[2015-07-16 15:33:02,577][TRACE][index.translog.fs ] [Scream] [test][4] closing RAF reference delete: true length: 12266624 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760066
[2015-07-16 15:33:02,580][DEBUG][index.engine ] [Scream] [test][4] close now acquiring writeLock
[2015-07-16 15:33:02,580][DEBUG][index.engine ] [Scream] [test][4] close acquired writeLock
[2015-07-16 15:33:02,580][TRACE][index.translog.fs ] [Scream] [test][4] sync translog buffered{id=1437060760067, operationCounter=0}
[2015-07-16 15:33:02,581][DEBUG][index.engine ] [Scream] [test][4] engine closed [api]
[2015-07-16 15:33:02,581][TRACE][index.translog.fs ] [Scream] [test][4] closing RAF reference delete: false length: 17 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760067
[2015-07-16 15:33:02,581][DEBUG][index ] [Scream] [test] [4] closed (reason: [shutdown])
[2015-07-16 15:33:02,581][DEBUG][indices ] [Scream] [test] closing index cache (reason [shutdown])
[2015-07-16 15:33:02,581][DEBUG][index.cache.filter.weighted] [Scream] [test] full cache clear, reason [close]
[2015-07-16 15:33:02,582][DEBUG][index.cache.fixedbitset ] [Scream] [test] clearing all bitsets because [close]
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] clearing index field data (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closing analysis service (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closing index engine (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closing index gateway (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closing mapper service (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closing index query parser service (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closing index service (reason [shutdown])
[2015-07-16 15:33:02,582][DEBUG][indices ] [Scream] [test] closed... (reason [shutdown])
[2015-07-16 15:33:02,582][INFO ][node ] [Scream] stopped
[2015-07-16 15:33:02,582][INFO ][node ] [Scream] closing ...
[2015-07-16 15:33:02,589][INFO ][node ] [Scream] closed
[2015-07-16 15:33:03,442][INFO ][node ] [Warrior Woman] version[1.6.0], pid[1], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-07-16 15:33:03,443][INFO ][node ] [Warrior Woman] initializing ...
[2015-07-16 15:33:03,447][INFO ][plugins ] [Warrior Woman] loaded [], sites []
[2015-07-16 15:33:03,481][INFO ][env ] [Warrior Woman] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/fedora_fr--ws--152-root)]], net usable_space [1.4gb], net total_space [49gb], types [ext4]
[2015-07-16 15:33:05,290][INFO ][node ] [Warrior Woman] initialized
[2015-07-16 15:33:05,290][INFO ][node ] [Warrior Woman] starting ...
[2015-07-16 15:33:05,346][INFO ][transport ] [Warrior Woman] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.54:9300]}
[2015-07-16 15:33:05,360][INFO ][discovery ] [Warrior Woman] elasticsearch/eNxMpc5dQt-s2eZ72stWEQ
[2015-07-16 15:33:08,395][INFO ][cluster.service ] [Warrior Woman] detected_master [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}, added {[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true},[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true},}, reason: zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])
[2015-07-16 15:33:08,417][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 22ms done applying updated cluster_state (version: 18)
[2015-07-16 15:33:08,422][DEBUG][cluster.service ] [Warrior Woman] processing [finalize_join ([Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true})]: execute
[2015-07-16 15:33:08,423][DEBUG][cluster.service ] [Warrior Woman] processing [finalize_join ([Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true})]: took 0s no change in cluster_state
[2015-07-16 15:33:08,432][INFO ][http ] [Warrior Woman] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.54:9200]}
[2015-07-16 15:33:08,432][INFO ][node ] [Warrior Woman] started
[2015-07-16 15:33:17,410][DEBUG][indices.store ] [Warrior Woman] [test][4] loaded store meta data (took [68.8ms])
[2015-07-16 15:33:17,410][DEBUG][indices.store ] [Warrior Woman] [test][3] loaded store meta data (took [68.8ms])
[2015-07-16 15:33:17,410][DEBUG][indices.store ] [Warrior Woman] [test][2] loaded store meta data (took [67.8ms])
[2015-07-16 15:33:17,410][DEBUG][indices.store ] [Warrior Woman] [test][1] loaded store meta data (took [69.7ms])
[2015-07-16 15:33:17,411][DEBUG][indices.store ] [Warrior Woman] [test][0] loaded store meta data (took [73.3ms])
[2015-07-16 15:33:17,425][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 19
[2015-07-16 15:33:17,426][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:17,426][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [19], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:17,426][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 19
[2015-07-16 15:33:17,426][DEBUG][indices.cluster ] [Warrior Woman] [test] creating index
[2015-07-16 15:33:17,426][DEBUG][indices ] [Warrior Woman] creating Index [test], shards [5]/[1]
[2015-07-16 15:33:17,537][DEBUG][index.mapper ] [Warrior Woman] [test] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/usr/share/elasticsearch/lib/elasticsearch-1.6.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]
[2015-07-16 15:33:17,537][DEBUG][index.cache.query.parser.resident] [Warrior Woman] [test] using [resident] query cache with max_size [100], expire [null]
[2015-07-16 15:33:17,540][DEBUG][index.store.fs ] [Warrior Woman] [test] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2015-07-16 15:33:17,542][DEBUG][indices.cluster ] [Warrior Woman] [test] adding mapping [test], source [{"test":{"properties":{"d":{"type":"string"},"i":{"type":"long"}}}}]
[2015-07-16 15:33:17,584][DEBUG][indices.cluster ] [Warrior Woman] [test][4] creating shard
[2015-07-16 15:33:17,584][DEBUG][index ] [Warrior Woman] [test] creating shard_id [test][4]
[2015-07-16 15:33:17,629][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/index] as shard's index location
[2015-07-16 15:33:17,690][DEBUG][index.store ] [Warrior Woman] [test][4] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:33:17,690][DEBUG][index.merge.scheduler ] [Warrior Woman] [test][4] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:33:17,691][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog] as shard's translog location
[2015-07-16 15:33:17,695][DEBUG][index.deletionpolicy ] [Warrior Woman] [test][4] Using [keep_only_last] deletion policy
[2015-07-16 15:33:17,696][DEBUG][index.merge.policy ] [Warrior Woman] [test][4] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:33:17,697][DEBUG][index.shard ] [Warrior Woman] [test][4] state: [CREATED]
[2015-07-16 15:33:17,697][DEBUG][index.shard ] [Warrior Woman] [test][4] scheduling optimizer / merger every 1s
[2015-07-16 15:33:17,697][DEBUG][index.translog ] [Warrior Woman] [test][4] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:33:17,701][DEBUG][index.shard ] [Warrior Woman] [test][4] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:33:17,703][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 276ms done applying updated cluster_state (version: 19)
[2015-07-16 15:33:17,716][DEBUG][index.store ] [Warrior Woman] [test][4] create legacy length-only output for recovery.544701690.segments_1
[2015-07-16 15:33:17,759][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 20
[2015-07-16 15:33:17,759][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:17,760][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [20], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:17,760][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 20
[2015-07-16 15:33:17,761][DEBUG][indices.cluster ] [Warrior Woman] [test][0] creating shard
[2015-07-16 15:33:17,761][DEBUG][index ] [Warrior Woman] [test] creating shard_id [test][0]
[2015-07-16 15:33:17,768][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/index] as shard's index location
[2015-07-16 15:33:17,770][DEBUG][index.store ] [Warrior Woman] [test][0] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:33:17,770][DEBUG][index.merge.scheduler ] [Warrior Woman] [test][0] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:33:17,771][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog] as shard's translog location
[2015-07-16 15:33:17,772][DEBUG][index.deletionpolicy ] [Warrior Woman] [test][0] Using [keep_only_last] deletion policy
[2015-07-16 15:33:17,772][DEBUG][index.merge.policy ] [Warrior Woman] [test][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:33:17,772][DEBUG][index.shard ] [Warrior Woman] [test][0] state: [CREATED]
[2015-07-16 15:33:17,772][DEBUG][index.shard ] [Warrior Woman] [test][0] scheduling optimizer / merger every 1s
[2015-07-16 15:33:17,772][DEBUG][index.engine ] [Warrior Woman] [test][4] [[test][4]] skipping check for 3x segments
[2015-07-16 15:33:17,772][DEBUG][index.translog ] [Warrior Woman] [test][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:33:17,773][DEBUG][index.shard ] [Warrior Woman] [test][0] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:33:17,774][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 13ms done applying updated cluster_state (version: 20)
[2015-07-16 15:33:17,805][TRACE][index.translog.fs ] [Warrior Woman] [test][4] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760066
[2015-07-16 15:33:17,809][TRACE][index.translog.fs ] [Warrior Woman] [test][4] created new translog id: 1437060760066
[2015-07-16 15:33:17,897][DEBUG][index.store ] [Warrior Woman] [test][0] create legacy length-only output for recovery.544701762.segments_1
[2015-07-16 15:33:17,916][DEBUG][index.engine ] [Warrior Woman] [test][0] [[test][0]] skipping check for 3x segments
[2015-07-16 15:33:17,922][TRACE][index.translog.fs ] [Warrior Woman] [test][0] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760067
[2015-07-16 15:33:17,923][TRACE][index.translog.fs ] [Warrior Woman] [test][0] created new translog id: 1437060760067
[2015-07-16 15:33:19,547][TRACE][index.translog.fs ] [Warrior Woman] [test][0] clearing unreferenced translog /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760068
[2015-07-16 15:33:19,556][TRACE][index.translog.fs ] [Warrior Woman] [test][4] clearing unreferenced translog /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760067
[2015-07-16 15:33:19,863][DEBUG][index.shard ] [Warrior Woman] [test][0] scheduling refresher every 1s
[2015-07-16 15:33:19,864][DEBUG][index.shard ] [Warrior Woman] [test][4] scheduling refresher every 1s
[2015-07-16 15:33:20,102][DEBUG][index.shard ] [Warrior Woman] [test][4] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:33:20,103][DEBUG][cluster.action.shard ] [Warrior Woman] sending shard started for [test][4], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:20,103][DEBUG][indices.recovery ] [Warrior Woman] [test][4] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [2.4s]
[2015-07-16 15:33:20,107][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 21
[2015-07-16 15:33:20,107][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:20,107][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [21], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:20,107][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 21
[2015-07-16 15:33:20,108][DEBUG][indices.cluster ] [Warrior Woman] [test][1] creating shard
[2015-07-16 15:33:20,108][DEBUG][index ] [Warrior Woman] [test] creating shard_id [test][1]
[2015-07-16 15:33:20,116][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/index] as shard's index location
[2015-07-16 15:33:20,117][DEBUG][index.store ] [Warrior Woman] [test][1] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:33:20,117][DEBUG][index.merge.scheduler ] [Warrior Woman] [test][1] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:33:20,117][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog] as shard's translog location
[2015-07-16 15:33:20,118][DEBUG][index.deletionpolicy ] [Warrior Woman] [test][1] Using [keep_only_last] deletion policy
[2015-07-16 15:33:20,118][DEBUG][index.merge.policy ] [Warrior Woman] [test][1] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:33:20,118][DEBUG][index.shard ] [Warrior Woman] [test][1] state: [CREATED]
[2015-07-16 15:33:20,118][DEBUG][index.shard ] [Warrior Woman] [test][1] scheduling optimizer / merger every 1s
[2015-07-16 15:33:20,118][DEBUG][index.translog ] [Warrior Woman] [test][1] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:33:20,119][DEBUG][index.shard ] [Warrior Woman] [test][1] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:33:20,119][DEBUG][index.shard ] [Warrior Woman] [test][4] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:33:20,156][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 48ms done applying updated cluster_state (version: 21)
[2015-07-16 15:33:20,184][DEBUG][index.shard ] [Warrior Woman] [test][0] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:33:20,184][DEBUG][cluster.action.shard ] [Warrior Woman] sending shard started for [test][0], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:20,184][DEBUG][indices.recovery ] [Warrior Woman] [test][0] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [2.4s]
[2015-07-16 15:33:20,188][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 22
[2015-07-16 15:33:20,189][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:20,189][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [22], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:20,189][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 22
[2015-07-16 15:33:20,189][DEBUG][index.shard ] [Warrior Woman] [test][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:33:20,190][DEBUG][indices.cluster ] [Warrior Woman] [test][2] creating shard
[2015-07-16 15:33:20,190][DEBUG][index ] [Warrior Woman] [test] creating shard_id [test][2]
[2015-07-16 15:33:20,201][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/index] as shard's index location
[2015-07-16 15:33:20,202][DEBUG][index.store ] [Warrior Woman] [test][2] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:33:20,202][DEBUG][index.merge.scheduler ] [Warrior Woman] [test][2] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:33:20,202][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog] as shard's translog location
[2015-07-16 15:33:20,205][DEBUG][index.deletionpolicy ] [Warrior Woman] [test][2] Using [keep_only_last] deletion policy
[2015-07-16 15:33:20,205][DEBUG][index.merge.policy ] [Warrior Woman] [test][2] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:33:20,205][DEBUG][index.shard ] [Warrior Woman] [test][2] state: [CREATED]
[2015-07-16 15:33:20,206][DEBUG][index.shard ] [Warrior Woman] [test][2] scheduling optimizer / merger every 1s
[2015-07-16 15:33:20,206][DEBUG][index.translog ] [Warrior Woman] [test][2] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:33:20,206][DEBUG][index.shard ] [Warrior Woman] [test][2] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:33:20,213][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 24ms done applying updated cluster_state (version: 22)
[2015-07-16 15:33:20,661][DEBUG][index.store ] [Warrior Woman] [test][1] create legacy length-only output for recovery.544704108.segments_1
[2015-07-16 15:33:20,672][DEBUG][index.engine ] [Warrior Woman] [test][1] [[test][1]] skipping check for 3x segments
[2015-07-16 15:33:20,673][TRACE][index.translog.fs ] [Warrior Woman] [test][1] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760276
[2015-07-16 15:33:20,673][TRACE][index.translog.fs ] [Warrior Woman] [test][1] created new translog id: 1437060760276
[2015-07-16 15:33:20,731][DEBUG][index.store ] [Warrior Woman] [test][2] create legacy length-only output for recovery.544704195.segments_1
[2015-07-16 15:33:20,749][DEBUG][index.engine ] [Warrior Woman] [test][2] [[test][2]] skipping check for 3x segments
[2015-07-16 15:33:20,751][TRACE][index.translog.fs ] [Warrior Woman] [test][2] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760066
[2015-07-16 15:33:20,751][TRACE][index.translog.fs ] [Warrior Woman] [test][2] created new translog id: 1437060760066
[2015-07-16 15:33:21,560][TRACE][index.translog.fs ] [Warrior Woman] [test][1] clearing unreferenced translog /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760277
[2015-07-16 15:33:21,577][TRACE][index.translog.fs ] [Warrior Woman] [test][2] clearing unreferenced translog /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760067
[2015-07-16 15:33:21,699][DEBUG][index.shard ] [Warrior Woman] [test][2] scheduling refresher every 1s
[2015-07-16 15:33:21,704][DEBUG][index.shard ] [Warrior Woman] [test][1] scheduling refresher every 1s
[2015-07-16 15:33:21,776][DEBUG][index.shard ] [Warrior Woman] [test][1] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:33:21,776][DEBUG][cluster.action.shard ] [Warrior Woman] sending shard started for [test][1], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:21,776][DEBUG][indices.recovery ] [Warrior Woman] [test][1] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [1.6s]
[2015-07-16 15:33:21,779][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 23
[2015-07-16 15:33:21,779][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:21,779][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [23], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:21,779][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 23
[2015-07-16 15:33:21,780][DEBUG][indices.cluster ] [Warrior Woman] [test][3] creating shard
[2015-07-16 15:33:21,780][DEBUG][index ] [Warrior Woman] [test] creating shard_id [test][3]
[2015-07-16 15:33:21,786][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/index] as shard's index location
[2015-07-16 15:33:21,787][DEBUG][index.store ] [Warrior Woman] [test][3] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:33:21,788][DEBUG][index.merge.scheduler ] [Warrior Woman] [test][3] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:33:21,788][DEBUG][index.store.fs ] [Warrior Woman] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog] as shard's translog location
[2015-07-16 15:33:21,789][DEBUG][index.deletionpolicy ] [Warrior Woman] [test][3] Using [keep_only_last] deletion policy
[2015-07-16 15:33:21,789][DEBUG][index.merge.policy ] [Warrior Woman] [test][3] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:33:21,789][DEBUG][index.shard ] [Warrior Woman] [test][3] state: [CREATED]
[2015-07-16 15:33:21,789][DEBUG][index.shard ] [Warrior Woman] [test][3] scheduling optimizer / merger every 1s
[2015-07-16 15:33:21,789][DEBUG][index.translog ] [Warrior Woman] [test][3] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:33:21,790][DEBUG][index.shard ] [Warrior Woman] [test][3] state: [CREATED]->[RECOVERING], reason [from [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]
[2015-07-16 15:33:21,790][DEBUG][index.shard ] [Warrior Woman] [test][1] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:33:21,796][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 17ms done applying updated cluster_state (version: 23)
[2015-07-16 15:33:21,802][DEBUG][index.store ] [Warrior Woman] [test][3] create legacy length-only output for recovery.544705779.segments_1
[2015-07-16 15:33:21,823][DEBUG][index.engine ] [Warrior Woman] [test][3] [[test][3]] skipping check for 3x segments
[2015-07-16 15:33:21,825][TRACE][index.translog.fs ] [Warrior Woman] [test][3] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760276
[2015-07-16 15:33:21,825][TRACE][index.translog.fs ] [Warrior Woman] [test][3] created new translog id: 1437060760276
[2015-07-16 15:33:21,852][DEBUG][index.shard ] [Warrior Woman] [test][2] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:33:21,853][DEBUG][cluster.action.shard ] [Warrior Woman] sending shard started for [test][2], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:21,853][DEBUG][indices.recovery ] [Warrior Woman] [test][2] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [1.6s]
[2015-07-16 15:33:21,857][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 24
[2015-07-16 15:33:21,858][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:21,858][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [24], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:21,858][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 24
[2015-07-16 15:33:21,870][DEBUG][index.shard ] [Warrior Woman] [test][2] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:33:21,883][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 24ms done applying updated cluster_state (version: 24)
[2015-07-16 15:33:22,575][TRACE][index.translog.fs ] [Warrior Woman] [test][3] clearing unreferenced translog /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760277
[2015-07-16 15:33:22,630][DEBUG][index.shard ] [Warrior Woman] [test][3] scheduling refresher every 1s
[2015-07-16 15:33:22,699][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=11538}
[2015-07-16 15:33:22,772][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=11548}
[2015-07-16 15:33:22,879][DEBUG][index.shard ] [Warrior Woman] [test][3] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:33:22,880][DEBUG][cluster.action.shard ] [Warrior Woman] sending shard started for [test][3], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:22,880][DEBUG][indices.recovery ] [Warrior Woman] [test][3] recovery done from [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}], took [1s]
[2015-07-16 15:33:22,883][DEBUG][discovery.zen.publish ] [Warrior Woman] received cluster state version 25
[2015-07-16 15:33:22,883][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: execute
[2015-07-16 15:33:22,883][DEBUG][cluster.service ] [Warrior Woman] cluster state updated, version [25], source [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]
[2015-07-16 15:33:22,884][DEBUG][cluster.service ] [Warrior Woman] set local cluster state to version 25
[2015-07-16 15:33:22,884][DEBUG][index.shard ] [Warrior Woman] [test][3] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:33:22,898][DEBUG][cluster.service ] [Warrior Woman] processing [zen-disco-receive(from master [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}])]: took 14ms done applying updated cluster_state (version: 25)
[2015-07-16 15:33:25,118][TRACE][index.translog.fs ] [Warrior Woman] [test][1] sync translog buffered{id=1437060760276, operationCounter=12926}
[2015-07-16 15:33:25,207][TRACE][index.translog.fs ] [Warrior Woman] [test][2] sync translog buffered{id=1437060760066, operationCounter=12958}
[2015-07-16 15:33:26,793][TRACE][index.translog.fs ] [Warrior Woman] [test][3] sync translog buffered{id=1437060760276, operationCounter=14121}
[2015-07-16 15:33:27,750][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=14786}
[2015-07-16 15:33:27,826][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=14834}
[2015-07-16 15:33:30,187][TRACE][index.translog.fs ] [Warrior Woman] [test][1] sync translog buffered{id=1437060760276, operationCounter=17421}
[2015-07-16 15:33:30,260][TRACE][index.translog.fs ] [Warrior Woman] [test][2] sync translog buffered{id=1437060760066, operationCounter=17469}
[2015-07-16 15:33:31,849][TRACE][index.translog.fs ] [Warrior Woman] [test][3] sync translog buffered{id=1437060760276, operationCounter=18779}
[2015-07-16 15:33:32,771][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=19790}
[2015-07-16 15:33:32,858][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=19853}
[2015-07-16 15:33:35,296][DEBUG][indices.memory ] [Warrior Woman] recalculating shard indexing buffer (reason=[[ADDED]]), total is [98.9mb] with [5] active shards, each shard set to indexing=[19.7mb], translog=[64kb]
[2015-07-16 15:33:35,299][DEBUG][index.shard ] [Warrior Woman] [test][0] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:35,299][DEBUG][index.shard ] [Warrior Woman] [test][1] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:35,299][DEBUG][index.shard ] [Warrior Woman] [test][2] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:35,299][DEBUG][index.shard ] [Warrior Woman] [test][3] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:35,299][DEBUG][index.shard ] [Warrior Woman] [test][4] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:35,313][TRACE][index.translog.fs ] [Warrior Woman] [test][1] sync translog buffered{id=1437060760276, operationCounter=22817}
[2015-07-16 15:33:35,367][TRACE][index.translog.fs ] [Warrior Woman] [test][2] sync translog buffered{id=1437060760066, operationCounter=22897}
[2015-07-16 15:33:36,934][TRACE][index.translog.fs ] [Warrior Woman] [test][3] sync translog buffered{id=1437060760276, operationCounter=24693}
[2015-07-16 15:33:37,858][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=25946}
[2015-07-16 15:33:37,942][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=26044}
[2015-07-16 15:33:40,449][TRACE][index.translog.fs ] [Warrior Woman] [test][1] sync translog buffered{id=1437060760276, operationCounter=29345}
[2015-07-16 15:33:40,459][TRACE][index.translog.fs ] [Warrior Woman] [test][2] sync translog buffered{id=1437060760066, operationCounter=29366}
[2015-07-16 15:33:42,031][TRACE][index.translog.fs ] [Warrior Woman] [test][3] sync translog buffered{id=1437060760276, operationCounter=31032}
[2015-07-16 15:33:43,016][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=32324}
[2015-07-16 15:33:43,042][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=32327}
[2015-07-16 15:33:45,686][TRACE][index.translog.fs ] [Warrior Woman] [test][2] sync translog buffered{id=1437060760066, operationCounter=35650}
[2015-07-16 15:33:45,687][TRACE][index.translog.fs ] [Warrior Woman] [test][1] sync translog buffered{id=1437060760276, operationCounter=35671}
[2015-07-16 15:33:47,059][TRACE][index.translog.fs ] [Warrior Woman] [test][3] sync translog buffered{id=1437060760276, operationCounter=37232}
[2015-07-16 15:33:48,155][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=39008}
[2015-07-16 15:33:48,159][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=39004}
[2015-07-16 15:33:50,844][TRACE][index.translog.fs ] [Warrior Woman] [test][1] sync translog buffered{id=1437060760276, operationCounter=39995}
[2015-07-16 15:33:50,844][TRACE][index.translog.fs ] [Warrior Woman] [test][2] sync translog buffered{id=1437060760066, operationCounter=40001}
[2015-07-16 15:33:52,164][TRACE][index.translog.fs ] [Warrior Woman] [test][3] sync translog buffered{id=1437060760276, operationCounter=39995}
[2015-07-16 15:33:53,324][TRACE][index.translog.fs ] [Warrior Woman] [test][0] sync translog buffered{id=1437060760067, operationCounter=40000}
[2015-07-16 15:33:53,324][TRACE][index.translog.fs ] [Warrior Woman] [test][4] sync translog buffered{id=1437060760066, operationCounter=40000}
[2015-07-16 15:31:26,240][INFO ][node ] [Ringo Kid] version[1.6.0], pid[1], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-07-16 15:31:26,241][INFO ][node ] [Ringo Kid] initializing ...
[2015-07-16 15:31:26,249][INFO ][plugins ] [Ringo Kid] loaded [], sites []
[2015-07-16 15:31:26,296][INFO ][env ] [Ringo Kid] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/fedora_fr--ws--152-root)]], net usable_space [1.5gb], net total_space [49gb], types [ext4]
[2015-07-16 15:31:28,700][INFO ][node ] [Ringo Kid] initialized
[2015-07-16 15:31:28,700][INFO ][node ] [Ringo Kid] starting ...
[2015-07-16 15:31:28,905][INFO ][transport ] [Ringo Kid] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.52:9300]}
[2015-07-16 15:31:28,920][INFO ][discovery ] [Ringo Kid] elasticsearch/uv6YUup_TeW823PDdszbkw
[2015-07-16 15:31:31,959][INFO ][cluster.service ] [Ringo Kid] detected_master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}, added {[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true},}, reason: zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])
[2015-07-16 15:31:31,968][INFO ][http ] [Ringo Kid] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.52:9200]}
[2015-07-16 15:31:31,969][INFO ][node ] [Ringo Kid] started
[2015-07-16 15:31:32,346][INFO ][cluster.service ] [Ringo Kid] added {[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true},}, reason: zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])
[2015-07-16 15:31:33,244][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 1ms done applying updated cluster_state (version: 5)
[2015-07-16 15:31:36,521][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 6
[2015-07-16 15:31:36,521][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:31:36,527][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [6], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:31:36,527][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 6
[2015-07-16 15:31:36,528][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 6ms done applying updated cluster_state (version: 6)
[2015-07-16 15:32:39,777][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 7
[2015-07-16 15:32:39,778][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:39,778][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [7], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:39,778][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 7
[2015-07-16 15:32:39,780][DEBUG][indices.cluster ] [Ringo Kid] [test] creating index
[2015-07-16 15:32:39,780][DEBUG][indices ] [Ringo Kid] creating Index [test], shards [5]/[1]
[2015-07-16 15:32:39,914][DEBUG][index.mapper ] [Ringo Kid] [test] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/usr/share/elasticsearch/lib/elasticsearch-1.6.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]
[2015-07-16 15:32:39,914][DEBUG][index.cache.query.parser.resident] [Ringo Kid] [test] using [resident] query cache with max_size [100], expire [null]
[2015-07-16 15:32:39,918][DEBUG][index.store.fs ] [Ringo Kid] [test] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2015-07-16 15:32:39,946][DEBUG][indices.cluster ] [Ringo Kid] [test][4] creating shard
[2015-07-16 15:32:39,946][DEBUG][index ] [Ringo Kid] [test] creating shard_id [test][4]
[2015-07-16 15:32:39,995][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/index] as shard's index location
[2015-07-16 15:32:39,998][DEBUG][index.store ] [Ringo Kid] [test][4] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:39,999][DEBUG][index.merge.scheduler ] [Ringo Kid] [test][4] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:39,999][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog] as shard's translog location
[2015-07-16 15:32:40,001][DEBUG][index.deletionpolicy ] [Ringo Kid] [test][4] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,002][DEBUG][index.merge.policy ] [Ringo Kid] [test][4] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,003][DEBUG][index.shard ] [Ringo Kid] [test][4] state: [CREATED]
[2015-07-16 15:32:40,004][DEBUG][index.shard ] [Ringo Kid] [test][4] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,004][DEBUG][index.translog ] [Ringo Kid] [test][4] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,008][DEBUG][index.shard ] [Ringo Kid] [test][4] state: [CREATED]->[RECOVERING], reason [from gateway]
[2015-07-16 15:32:40,009][DEBUG][indices.cluster ] [Ringo Kid] [test][0] creating shard
[2015-07-16 15:32:40,009][DEBUG][index.gateway ] [Ringo Kid] [test][4] starting recovery from local ...
[2015-07-16 15:32:40,009][DEBUG][index ] [Ringo Kid] [test] creating shard_id [test][0]
[2015-07-16 15:32:40,013][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/index] as shard's index location
[2015-07-16 15:32:40,014][DEBUG][index.store ] [Ringo Kid] [test][0] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,014][DEBUG][index.merge.scheduler ] [Ringo Kid] [test][0] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,014][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog] as shard's translog location
[2015-07-16 15:32:40,015][DEBUG][index.deletionpolicy ] [Ringo Kid] [test][0] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,015][DEBUG][index.merge.policy ] [Ringo Kid] [test][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,015][DEBUG][index.shard ] [Ringo Kid] [test][0] state: [CREATED]
[2015-07-16 15:32:40,015][DEBUG][index.shard ] [Ringo Kid] [test][0] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,015][DEBUG][index.translog ] [Ringo Kid] [test][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,016][DEBUG][index.shard ] [Ringo Kid] [test][0] state: [CREATED]->[RECOVERING], reason [from gateway]
[2015-07-16 15:32:40,017][DEBUG][indices.cluster ] [Ringo Kid] [test][2] creating shard
[2015-07-16 15:32:40,017][DEBUG][index ] [Ringo Kid] [test] creating shard_id [test][2]
[2015-07-16 15:32:40,017][DEBUG][index.gateway ] [Ringo Kid] [test][0] starting recovery from local ...
[2015-07-16 15:32:40,023][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/index] as shard's index location
[2015-07-16 15:32:40,023][DEBUG][index.store ] [Ringo Kid] [test][2] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,023][DEBUG][index.merge.scheduler ] [Ringo Kid] [test][2] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,024][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog] as shard's translog location
[2015-07-16 15:32:40,024][DEBUG][index.deletionpolicy ] [Ringo Kid] [test][2] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,025][DEBUG][index.merge.policy ] [Ringo Kid] [test][2] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,025][DEBUG][index.shard ] [Ringo Kid] [test][2] state: [CREATED]
[2015-07-16 15:32:40,025][DEBUG][index.shard ] [Ringo Kid] [test][2] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,025][DEBUG][index.translog ] [Ringo Kid] [test][2] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,026][DEBUG][index.shard ] [Ringo Kid] [test][2] state: [CREATED]->[RECOVERING], reason [from gateway]
[2015-07-16 15:32:40,026][DEBUG][index.gateway ] [Ringo Kid] [test][2] starting recovery from local ...
[2015-07-16 15:32:40,033][DEBUG][index.engine ] [Ringo Kid] [test][0] [[test][0]] skipping check for 3x segments
[2015-07-16 15:32:40,033][DEBUG][index.engine ] [Ringo Kid] [test][2] [[test][2]] skipping check for 3x segments
[2015-07-16 15:32:40,033][DEBUG][index.engine ] [Ringo Kid] [test][4] [[test][4]] skipping check for 3x segments
[2015-07-16 15:32:40,039][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 260ms done applying updated cluster_state (version: 7)
[2015-07-16 15:32:40,089][TRACE][index.translog.fs ] [Ringo Kid] [test][0] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760067
[2015-07-16 15:32:40,089][TRACE][index.translog.fs ] [Ringo Kid] [test][4] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760066
[2015-07-16 15:32:40,089][TRACE][index.translog.fs ] [Ringo Kid] [test][2] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760066
[2015-07-16 15:32:40,093][TRACE][index.translog.fs ] [Ringo Kid] [test][0] created new translog id: 1437060760067
[2015-07-16 15:32:40,093][TRACE][index.translog.fs ] [Ringo Kid] [test][4] created new translog id: 1437060760066
[2015-07-16 15:32:40,093][TRACE][index.translog.fs ] [Ringo Kid] [test][2] created new translog id: 1437060760066
[2015-07-16 15:32:40,096][DEBUG][index.shard ] [Ringo Kid] [test][4] scheduling refresher every 1s
[2015-07-16 15:32:40,096][DEBUG][index.shard ] [Ringo Kid] [test][2] scheduling refresher every 1s
[2015-07-16 15:32:40,097][DEBUG][index.shard ] [Ringo Kid] [test][4] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from gateway, no translog]
[2015-07-16 15:32:40,096][DEBUG][index.shard ] [Ringo Kid] [test][0] scheduling refresher every 1s
[2015-07-16 15:32:40,097][DEBUG][index.shard ] [Ringo Kid] [test][2] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from gateway, no translog]
[2015-07-16 15:32:40,097][DEBUG][index.gateway ] [Ringo Kid] [test][4] recovery completed from [local], took [89ms]
[2015-07-16 15:32:40,097][DEBUG][index.gateway ] [Ringo Kid] [test][2] recovery completed from [local], took [71ms]
[2015-07-16 15:32:40,097][DEBUG][index.shard ] [Ringo Kid] [test][0] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from gateway, no translog]
[2015-07-16 15:32:40,098][DEBUG][index.gateway ] [Ringo Kid] [test][0] recovery completed from [local], took [81ms]
[2015-07-16 15:32:40,098][DEBUG][cluster.action.shard ] [Ringo Kid] sending shard started for [test][0], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,097][DEBUG][cluster.action.shard ] [Ringo Kid] sending shard started for [test][2], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,097][DEBUG][cluster.action.shard ] [Ringo Kid] sending shard started for [test][4], node[uv6YUup_TeW823PDdszbkw], [P], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery from gateway]
[2015-07-16 15:32:40,235][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 8
[2015-07-16 15:32:40,235][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:40,235][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [8], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:40,235][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 8
[2015-07-16 15:32:40,237][DEBUG][index.shard ] [Ringo Kid] [test][2] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,237][DEBUG][index.shard ] [Ringo Kid] [test][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,237][DEBUG][index.shard ] [Ringo Kid] [test][4] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:40,280][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 44ms done applying updated cluster_state (version: 8)
[2015-07-16 15:32:40,341][DEBUG][indices.store ] [Ringo Kid] [test][2] loaded store meta data (took [1.5ms])
[2015-07-16 15:32:40,346][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 9
[2015-07-16 15:32:40,346][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:40,346][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [9], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:40,346][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 9
[2015-07-16 15:32:40,348][DEBUG][indices.store ] [Ringo Kid] [test][4] loaded store meta data (took [773.4micros])
[2015-07-16 15:32:40,350][DEBUG][indices.store ] [Ringo Kid] [test][0] loaded store meta data (took [640.1micros])
[2015-07-16 15:32:40,351][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 4ms done applying updated cluster_state (version: 9)
[2015-07-16 15:32:40,416][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 10
[2015-07-16 15:32:40,416][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:40,417][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [10], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:40,417][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 10
[2015-07-16 15:32:40,417][DEBUG][indices.cluster ] [Ringo Kid] [test][3] creating shard
[2015-07-16 15:32:40,417][DEBUG][index ] [Ringo Kid] [test] creating shard_id [test][3]
[2015-07-16 15:32:40,422][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/index] as shard's index location
[2015-07-16 15:32:40,423][DEBUG][index.store ] [Ringo Kid] [test][3] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,423][DEBUG][index.merge.scheduler ] [Ringo Kid] [test][3] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,423][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog] as shard's translog location
[2015-07-16 15:32:40,424][DEBUG][index.deletionpolicy ] [Ringo Kid] [test][3] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,424][DEBUG][index.merge.policy ] [Ringo Kid] [test][3] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,425][DEBUG][index.shard ] [Ringo Kid] [test][3] state: [CREATED]
[2015-07-16 15:32:40,425][DEBUG][index.shard ] [Ringo Kid] [test][3] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,425][DEBUG][index.translog ] [Ringo Kid] [test][3] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,426][DEBUG][index.shard ] [Ringo Kid] [test][3] state: [CREATED]->[RECOVERING], reason [from [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]
[2015-07-16 15:32:40,428][DEBUG][indices.cluster ] [Ringo Kid] [test][1] creating shard
[2015-07-16 15:32:40,428][DEBUG][index ] [Ringo Kid] [test] creating shard_id [test][1]
[2015-07-16 15:32:40,434][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/index] as shard's index location
[2015-07-16 15:32:40,434][DEBUG][index.store ] [Ringo Kid] [test][1] store stats are refreshed with refresh_interval [10s]
[2015-07-16 15:32:40,434][DEBUG][index.merge.scheduler ] [Ringo Kid] [test][1] using [concurrent] merge scheduler with max_thread_count[3], max_merge_count[5]
[2015-07-16 15:32:40,435][DEBUG][index.store.fs ] [Ringo Kid] [test] using [/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog] as shard's translog location
[2015-07-16 15:32:40,435][DEBUG][index.deletionpolicy ] [Ringo Kid] [test][1] Using [keep_only_last] deletion policy
[2015-07-16 15:32:40,435][DEBUG][index.merge.policy ] [Ringo Kid] [test][1] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-07-16 15:32:40,436][DEBUG][index.shard ] [Ringo Kid] [test][1] state: [CREATED]
[2015-07-16 15:32:40,436][DEBUG][index.shard ] [Ringo Kid] [test][1] scheduling optimizer / merger every 1s
[2015-07-16 15:32:40,436][DEBUG][index.translog ] [Ringo Kid] [test][1] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]
[2015-07-16 15:32:40,436][DEBUG][index.shard ] [Ringo Kid] [test][1] state: [CREATED]->[RECOVERING], reason [from [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]
[2015-07-16 15:32:40,446][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 29ms done applying updated cluster_state (version: 10)
[2015-07-16 15:32:40,533][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:32:40,533][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:32:40,533][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:32:40,533][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:32:40,544][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 11
[2015-07-16 15:32:40,544][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:40,545][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [11], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:40,545][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 11
[2015-07-16 15:32:40,574][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 29ms done applying updated cluster_state (version: 11)
[2015-07-16 15:32:40,604][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:32:40,604][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:32:40,623][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 12
[2015-07-16 15:32:40,623][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:40,624][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [12], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:40,624][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 12
[2015-07-16 15:32:40,630][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 6ms done applying updated cluster_state (version: 12)
[2015-07-16 15:32:40,741][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 13
[2015-07-16 15:32:40,741][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:40,742][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [13], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:40,742][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 13
[2015-07-16 15:32:40,756][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 14ms done applying updated cluster_state (version: 13)
[2015-07-16 15:32:40,772][DEBUG][cluster.action.index ] [Ringo Kid] successfully updated master with mapping update: index [test], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], type [test] and source [{"test":{"properties":{"d":{"type":"string"},"i":{"type":"long"}}}}]
[2015-07-16 15:32:40,772][DEBUG][cluster.action.index ] [Ringo Kid] successfully updated master with mapping update: index [test], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], type [test] and source [{"test":{"properties":{"d":{"type":"string"},"i":{"type":"long"}}}}]
[2015-07-16 15:32:40,992][DEBUG][index.store ] [Ringo Kid] [test][3] create legacy length-only output for recovery.544664415.segments_1
[2015-07-16 15:32:40,992][DEBUG][index.store ] [Ringo Kid] [test][1] create legacy length-only output for recovery.544664425.segments_1
[2015-07-16 15:32:41,005][DEBUG][index.engine ] [Ringo Kid] [test][3] [[test][3]] skipping check for 3x segments
[2015-07-16 15:32:41,005][DEBUG][index.engine ] [Ringo Kid] [test][1] [[test][1]] skipping check for 3x segments
[2015-07-16 15:32:41,007][TRACE][index.translog.fs ] [Ringo Kid] [test][3] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760276
[2015-07-16 15:32:41,007][TRACE][index.translog.fs ] [Ringo Kid] [test][1] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760276
[2015-07-16 15:32:41,008][TRACE][index.translog.fs ] [Ringo Kid] [test][3] created new translog id: 1437060760276
[2015-07-16 15:32:41,008][TRACE][index.translog.fs ] [Ringo Kid] [test][1] created new translog id: 1437060760276
[2015-07-16 15:32:41,011][DEBUG][index.shard ] [Ringo Kid] [test][1] scheduling refresher every 1s
[2015-07-16 15:32:41,011][DEBUG][index.shard ] [Ringo Kid] [test][3] scheduling refresher every 1s
[2015-07-16 15:32:41,015][DEBUG][index.shard ] [Ringo Kid] [test][3] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:32:41,015][DEBUG][cluster.action.shard ] [Ringo Kid] sending shard started for [test][3], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]
[2015-07-16 15:32:41,015][DEBUG][index.shard ] [Ringo Kid] [test][1] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2015-07-16 15:32:41,016][DEBUG][cluster.action.shard ] [Ringo Kid] sending shard started for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}]]
[2015-07-16 15:32:41,016][DEBUG][indices.recovery ] [Ringo Kid] [test][3] recovery done from [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}], took [589ms]
[2015-07-16 15:32:41,016][DEBUG][indices.recovery ] [Ringo Kid] [test][1] recovery done from [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}], took [580ms]
[2015-07-16 15:32:41,018][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 14
[2015-07-16 15:32:41,019][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:41,019][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [14], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:41,019][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 14
[2015-07-16 15:32:41,019][DEBUG][index.shard ] [Ringo Kid] [test][3] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:41,019][DEBUG][cluster.action.shard ] [Ringo Kid] sending shard started for [test][1], node[uv6YUup_TeW823PDdszbkw], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [master [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]
[2015-07-16 15:32:41,027][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 8ms done applying updated cluster_state (version: 14)
[2015-07-16 15:32:41,068][DEBUG][discovery.zen.publish ] [Ringo Kid] received cluster state version 15
[2015-07-16 15:32:41,068][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: execute
[2015-07-16 15:32:41,068][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [15], source [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]
[2015-07-16 15:32:41,068][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 15
[2015-07-16 15:32:41,068][DEBUG][index.shard ] [Ringo Kid] [test][1] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2015-07-16 15:32:41,074][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(from master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}])]: took 6ms done applying updated cluster_state (version: 15)
[2015-07-16 15:32:45,043][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760067, operationCounter=807}
[2015-07-16 15:32:45,043][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760066, operationCounter=808}
[2015-07-16 15:32:45,050][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760066, operationCounter=822}
[2015-07-16 15:32:45,429][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760276, operationCounter=942}
[2015-07-16 15:32:45,437][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760276, operationCounter=923}
[2015-07-16 15:32:50,066][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760067, operationCounter=2801}
[2015-07-16 15:32:50,071][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760066, operationCounter=2777}
[2015-07-16 15:32:50,071][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760066, operationCounter=2772}
[2015-07-16 15:32:50,443][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760276, operationCounter=2956}
[2015-07-16 15:32:50,451][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760276, operationCounter=2927}
[2015-07-16 15:32:55,092][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760067, operationCounter=5829}
[2015-07-16 15:32:55,092][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760066, operationCounter=5844}
[2015-07-16 15:32:55,092][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760066, operationCounter=5800}
[2015-07-16 15:32:55,463][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760276, operationCounter=6043}
[2015-07-16 15:32:55,465][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760276, operationCounter=6053}
[2015-07-16 15:32:58,710][DEBUG][indices.memory ] [Ringo Kid] recalculating shard indexing buffer (reason=[[ADDED]]), total is [98.9mb] with [5] active shards, each shard set to indexing=[19.7mb], translog=[64kb]
[2015-07-16 15:32:58,710][DEBUG][index.shard ] [Ringo Kid] [test][0] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,710][DEBUG][index.shard ] [Ringo Kid] [test][1] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,710][DEBUG][index.shard ] [Ringo Kid] [test][2] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,710][DEBUG][index.shard ] [Ringo Kid] [test][3] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:32:58,710][DEBUG][index.shard ] [Ringo Kid] [test][4] updating index_buffer_size from [64mb] to [19.7mb]
[2015-07-16 15:33:00,333][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760066, operationCounter=10791}
[2015-07-16 15:33:00,333][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760067, operationCounter=10829}
[2015-07-16 15:33:00,333][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760066, operationCounter=10812}
[2015-07-16 15:33:00,628][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760276, operationCounter=10974}
[2015-07-16 15:33:00,629][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760276, operationCounter=10978}
[2015-07-16 15:33:01,303][INFO ][discovery.zen ] [Ringo Kid] master_left [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}], reason [shut_down]
[2015-07-16 15:33:01,306][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-master_failed ([Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true})]: execute
[2015-07-16 15:33:01,307][WARN ][discovery.zen ] [Ringo Kid] master left (reason = shut_down), current nodes: {[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true},[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true},}
[2015-07-16 15:33:01,308][DEBUG][discovery.zen.fd ] [Ringo Kid] [master] stopping fault detection against master [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}], reason [master left (reason = shut_down)]
[2015-07-16 15:33:01,308][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [15], source [zen-disco-master_failed ([Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true})]
[2015-07-16 15:33:01,309][INFO ][cluster.service ] [Ringo Kid] removed {[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true},}, reason: zen-disco-master_failed ([Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true})
[2015-07-16 15:33:01,309][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 15
[2015-07-16 15:33:01,311][DEBUG][transport.netty ] [Ringo Kid] disconnecting from [[Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}] due to explicit disconnect call
[2015-07-16 15:33:01,335][WARN ][action.index ] [Ringo Kid] failed to perform indices:data/write/index on remote replica [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}[test][0]
org.elasticsearch.transport.NodeDisconnectedException: [Scream][inet[/172.17.0.51:9300]][indices:data/write/index[r]] disconnected
[2015-07-16 15:33:01,340][WARN ][action.index ] [Ringo Kid] failed to perform indices:data/write/index on remote replica [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}[test][0]
org.elasticsearch.transport.NodeDisconnectedException: [Scream][inet[/172.17.0.51:9300]][indices:data/write/index[r]] disconnected
[2015-07-16 15:33:01,341][WARN ][cluster.action.shard ] [Ringo Kid] can't send shard failed for [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[STARTED], no master known.
[2015-07-16 15:33:01,341][WARN ][cluster.action.shard ] [Ringo Kid] can't send shard failed for [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[STARTED], no master known.
[2015-07-16 15:33:01,341][WARN ][action.index ] [Ringo Kid] failed to perform indices:data/write/index on remote replica [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}[test][4]
org.elasticsearch.transport.NodeDisconnectedException: [Scream][inet[/172.17.0.51:9300]][indices:data/write/index[r]] disconnected
[2015-07-16 15:33:01,341][WARN ][cluster.action.shard ] [Ringo Kid] can't send shard failed for [test][4], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[STARTED], no master known.
[2015-07-16 15:33:01,347][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-master_failed ([Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true})]: took 40ms done applying updated cluster_state (version: 15)
[2015-07-16 15:33:01,348][WARN ][action.index ] [Ringo Kid] failed to perform indices:data/write/index on remote replica [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}[test][0]
org.elasticsearch.transport.NodeDisconnectedException: [Scream][inet[/172.17.0.51:9300]][indices:data/write/index[r]] disconnected
[2015-07-16 15:33:01,348][WARN ][cluster.action.shard ] [Ringo Kid] can't send shard failed for [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[STARTED], no master known.
[2015-07-16 15:33:01,352][WARN ][action.index ] [Ringo Kid] failed to perform indices:data/write/index on remote replica [Scream][5_NiA8akT5Gj2Zpe_G4J9w][4c9853760f85][inet[/172.17.0.51:9300]]{master=true}[test][0]
org.elasticsearch.transport.NodeDisconnectedException: [Scream][inet[/172.17.0.51:9300]][indices:data/write/index[r]] disconnected
[2015-07-16 15:33:01,354][WARN ][cluster.action.shard ] [Ringo Kid] can't send shard failed for [test][0], node[5_NiA8akT5Gj2Zpe_G4J9w], [R], s[STARTED], no master known.
[2015-07-16 15:33:04,313][DEBUG][discovery.zen ] [Ringo Kid] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2015-07-16 15:33:05,371][DEBUG][transport.netty ] [Ringo Kid] connected to node [[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}]
[2015-07-16 15:33:05,568][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760066, operationCounter=11517}
[2015-07-16 15:33:05,568][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760066, operationCounter=11486}
[2015-07-16 15:33:05,568][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760067, operationCounter=11528}
[2015-07-16 15:33:05,894][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760276, operationCounter=11506}
[2015-07-16 15:33:05,895][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760276, operationCounter=11494}
[2015-07-16 15:33:07,316][DEBUG][discovery.zen ] [Ringo Kid] filtered ping responses: (filter_client[true], filter_data[false])
--> ping_response{node [[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}], id[7], master [null], hasJoinedOnce [false], cluster_name[elasticsearch]}
[2015-07-16 15:33:07,317][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-join (elected_as_master)]: execute
[2015-07-16 15:33:07,323][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [16], source [zen-disco-join (elected_as_master)]
[2015-07-16 15:33:07,323][INFO ][cluster.service ] [Ringo Kid] new_master [Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-07-16 15:33:07,323][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 16
[2015-07-16 15:33:07,331][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 16
[2015-07-16 15:33:07,335][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:07,339][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:07,378][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-join (elected_as_master)]: took 61ms done applying updated cluster_state (version: 16)
[2015-07-16 15:33:07,378][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(join from node[[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true}])]: execute
[2015-07-16 15:33:07,379][DEBUG][discovery.zen ] [Ringo Kid] received a join request for an existing node [[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true}]
[2015-07-16 15:33:07,379][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [17], source [zen-disco-receive(join from node[[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true}])]
[2015-07-16 15:33:07,379][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 17
[2015-07-16 15:33:07,381][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 17
[2015-07-16 15:33:07,381][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:07,381][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:07,382][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(join from node[[Marvel Man][8tHFm4FxRWyVg9BDMZhTRw][0135c667e85a][inet[/172.17.0.53:9300]]{data=false, client=true}])]: took 3ms done applying updated cluster_state (version: 17)
[2015-07-16 15:33:08,384][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(join from node[[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}])]: execute
[2015-07-16 15:33:08,384][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [18], source [zen-disco-receive(join from node[[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}])]
[2015-07-16 15:33:08,384][INFO ][cluster.service ] [Ringo Kid] added {[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true},}, reason: zen-disco-receive(join from node[[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}])
[2015-07-16 15:33:08,384][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 18
[2015-07-16 15:33:08,417][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 18
[2015-07-16 15:33:08,418][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:08,418][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:08,420][DEBUG][cluster.service ] [Ringo Kid] processing [zen-disco-receive(join from node[[Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}])]: took 36ms done applying updated cluster_state (version: 18)
[2015-07-16 15:33:17,332][DEBUG][cluster.service ] [Ringo Kid] processing [routing-table-updater]: execute
[2015-07-16 15:33:17,338][DEBUG][indices.store ] [Ringo Kid] [test][1] loaded store meta data (took [699.3micros])
[2015-07-16 15:33:17,339][DEBUG][indices.store ] [Ringo Kid] [test][0] loaded store meta data (took [1.9ms])
[2015-07-16 15:33:17,340][DEBUG][indices.store ] [Ringo Kid] [test][4] loaded store meta data (took [695.5micros])
[2015-07-16 15:33:17,341][DEBUG][indices.store ] [Ringo Kid] [test][3] loaded store meta data (took [2.1ms])
[2015-07-16 15:33:17,342][DEBUG][indices.store ] [Ringo Kid] [test][2] loaded store meta data (took [931micros])
[2015-07-16 15:33:17,342][DEBUG][cluster.service ] [Ringo Kid] processing [routing-table-updater]: took 9ms no change in cluster_state
[2015-07-16 15:33:17,413][DEBUG][cluster.service ] [Ringo Kid] processing [async_shard_fetch]: execute
[2015-07-16 15:33:17,420][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [19], source [async_shard_fetch]
[2015-07-16 15:33:17,420][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 19
[2015-07-16 15:33:17,703][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 19
[2015-07-16 15:33:17,703][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:17,703][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:17,753][DEBUG][cluster.service ] [Ringo Kid] processing [async_shard_fetch]: took 339ms done applying updated cluster_state (version: 19)
[2015-07-16 15:33:17,753][DEBUG][cluster.service ] [Ringo Kid] processing [async_shard_fetch]: execute
[2015-07-16 15:33:17,755][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [20], source [async_shard_fetch]
[2015-07-16 15:33:17,755][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 20
[2015-07-16 15:33:17,774][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 20
[2015-07-16 15:33:17,774][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:17,774][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:17,909][DEBUG][cluster.service ] [Ringo Kid] processing [async_shard_fetch]: took 155ms done applying updated cluster_state (version: 20)
[2015-07-16 15:33:17,909][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:33:17,909][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:33:17,927][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:33:17,927][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:33:19,864][TRACE][index.translog.fs ] [Ringo Kid] [test][0] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760068
[2015-07-16 15:33:19,864][TRACE][index.translog.fs ] [Ringo Kid] [test][4] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760067
[2015-07-16 15:33:19,865][TRACE][index.translog.fs ] [Ringo Kid] [test][0] created new transient translog id: 1437060760068
[2015-07-16 15:33:19,877][TRACE][index.translog.fs ] [Ringo Kid] [test][4] created new transient translog id: 1437060760067
[2015-07-16 15:33:20,099][TRACE][index.translog.fs ] [Ringo Kid] [test][4] make transient current buffered{id=1437060760066, operationCounter=11517}
[2015-07-16 15:33:20,099][TRACE][index.translog.fs ] [Ringo Kid] [test][4] closing RAF reference delete: true length: 12307246 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/4/translog/translog-1437060760066
[2015-07-16 15:33:20,104][DEBUG][cluster.action.shard ] [Ringo Kid] received shard started for [test][4], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:20,104][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][4], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:33:20,104][DEBUG][cluster.action.shard ] [Ringo Kid] [test][4] will apply shard started [test][4], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:20,105][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [21], source [shard-started ([test][4], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:33:20,106][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 21
[2015-07-16 15:33:20,128][DEBUG][indices.recovery ] [Ringo Kid] delaying recovery of [test][1] as it is not listed as assigned to target node [Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}
[2015-07-16 15:33:20,156][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 21
[2015-07-16 15:33:20,157][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:20,157][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:20,174][TRACE][index.translog.fs ] [Ringo Kid] [test][0] make transient current buffered{id=1437060760067, operationCounter=11528}
[2015-07-16 15:33:20,174][TRACE][index.translog.fs ] [Ringo Kid] [test][0] closing RAF reference delete: true length: 12319005 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/0/translog/translog-1437060760067
[2015-07-16 15:33:20,176][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][4], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 71ms done applying updated cluster_state (version: 21)
[2015-07-16 15:33:20,185][DEBUG][cluster.action.shard ] [Ringo Kid] received shard started for [test][0], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:20,186][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][0], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:33:20,186][DEBUG][cluster.action.shard ] [Ringo Kid] [test][0] will apply shard started [test][0], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:20,187][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [22], source [shard-started ([test][0], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:33:20,187][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 22
[2015-07-16 15:33:20,213][DEBUG][indices.recovery ] [Ringo Kid] delaying recovery of [test][2] as it is not listed as assigned to target node [Warrior Woman][eNxMpc5dQt-s2eZ72stWEQ][4c9853760f85][inet[/172.17.0.54:9300]]{master=true}
[2015-07-16 15:33:20,214][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 22
[2015-07-16 15:33:20,215][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:20,215][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:20,232][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][0], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 46ms done applying updated cluster_state (version: 22)
[2015-07-16 15:33:20,642][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=7}
[2015-07-16 15:33:20,642][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=4}
[2015-07-16 15:33:20,674][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:33:20,674][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:33:20,752][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:33:20,752][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:33:21,701][TRACE][index.translog.fs ] [Ringo Kid] [test][2] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760067
[2015-07-16 15:33:21,701][TRACE][index.translog.fs ] [Ringo Kid] [test][2] created new transient translog id: 1437060760067
[2015-07-16 15:33:21,705][TRACE][index.translog.fs ] [Ringo Kid] [test][1] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760277
[2015-07-16 15:33:21,705][TRACE][index.translog.fs ] [Ringo Kid] [test][1] created new transient translog id: 1437060760277
[2015-07-16 15:33:21,772][TRACE][index.translog.fs ] [Ringo Kid] [test][1] make transient current buffered{id=1437060760276, operationCounter=11494}
[2015-07-16 15:33:21,772][TRACE][index.translog.fs ] [Ringo Kid] [test][1] closing RAF reference delete: true length: 12282667 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/1/translog/translog-1437060760276
[2015-07-16 15:33:21,776][DEBUG][cluster.action.shard ] [Ringo Kid] received shard started for [test][1], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:21,776][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][1], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:33:21,776][DEBUG][cluster.action.shard ] [Ringo Kid] [test][1] will apply shard started [test][1], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:21,777][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [23], source [shard-started ([test][1], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:33:21,777][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 23
[2015-07-16 15:33:21,797][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 23
[2015-07-16 15:33:21,797][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:21,797][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:21,824][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][1], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 47ms done applying updated cluster_state (version: 23)
[2015-07-16 15:33:21,825][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: execute
[2015-07-16 15:33:21,826][DEBUG][cluster.service ] [Ringo Kid] processing [recovery_mapping_check]: took 0s no change in cluster_state
[2015-07-16 15:33:21,847][TRACE][index.translog.fs ] [Ringo Kid] [test][2] make transient current buffered{id=1437060760066, operationCounter=11486}
[2015-07-16 15:33:21,847][TRACE][index.translog.fs ] [Ringo Kid] [test][2] closing RAF reference delete: true length: 12274107 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/2/translog/translog-1437060760066
[2015-07-16 15:33:21,853][DEBUG][cluster.action.shard ] [Ringo Kid] received shard started for [test][2], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:21,854][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][2], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:33:21,854][DEBUG][cluster.action.shard ] [Ringo Kid] [test][2] will apply shard started [test][2], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:21,855][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [24], source [shard-started ([test][2], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:33:21,855][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 24
[2015-07-16 15:33:21,883][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 24
[2015-07-16 15:33:21,884][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:21,884][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:21,896][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][2], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 42ms done applying updated cluster_state (version: 24)
[2015-07-16 15:33:22,631][TRACE][index.translog.fs ] [Ringo Kid] [test][3] created RAF reference for /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760277
[2015-07-16 15:33:22,631][TRACE][index.translog.fs ] [Ringo Kid] [test][3] created new transient translog id: 1437060760277
[2015-07-16 15:33:22,876][TRACE][index.translog.fs ] [Ringo Kid] [test][3] make transient current buffered{id=1437060760276, operationCounter=11506}
[2015-07-16 15:33:22,876][TRACE][index.translog.fs ] [Ringo Kid] [test][3] closing RAF reference delete: true length: 12295487 file: /usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/test/3/translog/translog-1437060760276
[2015-07-16 15:33:22,880][DEBUG][cluster.action.shard ] [Ringo Kid] received shard started for [test][3], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:22,880][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][3], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: execute
[2015-07-16 15:33:22,880][DEBUG][cluster.action.shard ] [Ringo Kid] [test][3] will apply shard started [test][3], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING], indexUUID [FcFFAqI7Q46nw4CRH1gy6g], reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]
[2015-07-16 15:33:22,881][DEBUG][cluster.service ] [Ringo Kid] cluster state updated, version [25], source [shard-started ([test][3], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]
[2015-07-16 15:33:22,881][DEBUG][cluster.service ] [Ringo Kid] publishing cluster state version 25
[2015-07-16 15:33:22,898][DEBUG][cluster.service ] [Ringo Kid] set local cluster state to version 25
[2015-07-16 15:33:22,898][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: execute
[2015-07-16 15:33:22,898][DEBUG][river.cluster ] [Ringo Kid] processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-07-16 15:33:22,909][DEBUG][cluster.service ] [Ringo Kid] processing [shard-started ([test][3], node[eNxMpc5dQt-s2eZ72stWEQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Ringo Kid][uv6YUup_TeW823PDdszbkw][13b63ec5f127][inet[/172.17.0.52:9300]]{master=true}]]]: took 28ms done applying updated cluster_state (version: 25)
[2015-07-16 15:33:25,642][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760067, operationCounter=1815}
[2015-07-16 15:33:25,650][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=1818}
[2015-07-16 15:33:25,650][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=1799}
[2015-07-16 15:33:25,927][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760277, operationCounter=2005}
[2015-07-16 15:33:25,927][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760277, operationCounter=2035}
[2015-07-16 15:33:30,667][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760067, operationCounter=6260}
[2015-07-16 15:33:30,672][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=6218}
[2015-07-16 15:33:30,673][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=6217}
[2015-07-16 15:33:30,960][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760277, operationCounter=6417}
[2015-07-16 15:33:30,961][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760277, operationCounter=6379}
[2015-07-16 15:33:35,825][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=11972}
[2015-07-16 15:33:35,826][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=11917}
[2015-07-16 15:33:35,825][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760067, operationCounter=12011}
[2015-07-16 15:33:36,081][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760277, operationCounter=12151}
[2015-07-16 15:33:36,081][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760277, operationCounter=12189}
[2015-07-16 15:33:41,018][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760067, operationCounter=18549}
[2015-07-16 15:33:41,018][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=18480}
[2015-07-16 15:33:41,018][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=18521}
[2015-07-16 15:33:41,227][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760277, operationCounter=18571}
[2015-07-16 15:33:41,229][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760277, operationCounter=18592}
[2015-07-16 15:33:46,178][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=24730}
[2015-07-16 15:33:46,180][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=24670}
[2015-07-16 15:33:46,181][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760067, operationCounter=24681}
[2015-07-16 15:33:46,463][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760277, operationCounter=24960}
[2015-07-16 15:33:46,463][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760277, operationCounter=24965}
[2015-07-16 15:33:51,358][TRACE][index.translog.fs ] [Ringo Kid] [test][4] sync translog buffered{id=1437060760067, operationCounter=28483}
[2015-07-16 15:33:51,358][TRACE][index.translog.fs ] [Ringo Kid] [test][2] sync translog buffered{id=1437060760067, operationCounter=28515}
[2015-07-16 15:33:51,358][TRACE][index.translog.fs ] [Ringo Kid] [test][0] sync translog buffered{id=1437060760068, operationCounter=28472}
[2015-07-16 15:33:51,607][TRACE][index.translog.fs ] [Ringo Kid] [test][3] sync translog buffered{id=1437060760277, operationCounter=28489}
[2015-07-16 15:33:51,607][TRACE][index.translog.fs ] [Ringo Kid] [test][1] sync translog buffered{id=1437060760277, operationCounter=28501}