Shards stuck initializing [2/2]
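The first log line below shows the transport tracer being switched on for recovery actions, which is what produces the subsequent TRACE output. For reference, a minimal sketch of how such a setting is typically applied via the Elasticsearch cluster settings API (the node address is hypothetical; the setting name and value match what the log reports):

    # Minimal sketch, assuming the tracer was enabled with a transient
    # cluster-settings update. The node address is hypothetical; the
    # setting "transport.tracer.include" is the one seen in the log.
    import requests

    resp = requests.put(
        "http://192.168.0.175:9200/_cluster/settings",  # hypothetical node
        json={
            "transient": {
                "transport.tracer.include": ["internal:index/shard/recovery/*"]
            }
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # expect {"acknowledged": true, ...}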
[2018-12-26T14:20:56,695][INFO ][o.e.c.s.ClusterSettings ] [es-d25-rm] updating [transport.tracer.include] from [[]] to [["internal:index/shard/recovery/*"]]
[2018-12-26T15:11:19,222][TRACE][o.e.t.T.tracer ] [es-d25-rm] [323965][internal:index/shard/recovery/start_recovery] received request
[2018-12-26T15:11:19,222][TRACE][o.e.i.r.PeerRecoverySourceService] [es-d25-rm] [codesearchshared_11_0][22] starting recovery to {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}
[2018-12-26T15:11:19,222][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] skipping [phase1]- identical sync id [L-RPHHhCSiSkVwspNAKP5A] found on both source and target
[2018-12-26T15:11:19,222][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase1]: took [0s]
[2018-12-26T15:11:19,222][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase1]: prepare remote engine for translog
[2018-12-26T15:11:19,222][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14566589][internal:index/shard/recovery/prepare_translog] sent to [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}] (timeout: [15m])
[2018-12-26T15:11:19,300][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14566589][internal:index/shard/recovery/prepare_translog] received response from [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}]
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase1]: remote engine start took [66.7ms]
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] all operations up to [89] completed, which will be used as an ending sequence number
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] snapshot translog for recovery; current size is [0]
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase2]: sending transaction log operations (seq# from [0], required [90:89]
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] no translog operations to send
[2018-12-26T15:11:19,300][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14566590][internal:index/shard/recovery/translog_ops] sent to [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}] (timeout: [30m])
[2018-12-26T15:11:19,300][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14566590][internal:index/shard/recovery/translog_ops] received response from [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}]
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] sent final batch of [0][0b] (total: [0]) translog operations
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase2]: took [1.6ms]
[2018-12-26T15:11:19,300][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] finalizing recovery
[2018-12-26T16:14:11,670][TRACE][o.e.t.T.tracer ] [es-d25-rm] [846470][internal:index/shard/recovery/start_recovery] received request
[2018-12-26T16:14:11,670][TRACE][o.e.i.r.PeerRecoverySourceService] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10] starting recovery to {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14}
[2018-12-26T16:14:11,685][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] skipping [phase1]- identical sync id [FubPdg3OTROGhDE2n_QJRw] found on both source and target
[2018-12-26T16:14:11,685][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] recovery [phase1]: took [0s]
[2018-12-26T16:14:11,685][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] recovery [phase1]: prepare remote engine for translog
[2018-12-26T16:14:11,685][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14595867][internal:index/shard/recovery/prepare_translog] sent to [{es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14}] (timeout: [15m])
[2018-12-26T16:14:11,889][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14595867][internal:index/shard/recovery/prepare_translog] received response from [{es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14}]
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] recovery [phase1]: remote engine start took [192.5ms]
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] all operations up to [25] completed, which will be used as an ending sequence number
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] snapshot translog for recovery; current size is [0]
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] recovery [phase2]: sending transaction log operations (seq# from [0], required [26:25]
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] no translog operations to send
[2018-12-26T16:14:11,889][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14595873][internal:index/shard/recovery/translog_ops] sent to [{es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14}] (timeout: [30m])
[2018-12-26T16:14:11,889][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14595873][internal:index/shard/recovery/translog_ops] received response from [{es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14}]
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] sent final batch of [0][0b] (total: [0]) translog operations
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] recovery [phase2]: took [1.5ms]
[2018-12-26T16:14:11,889][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d35-rm] finalizing recovery
[2018-12-26T17:14:13,083][TRACE][o.e.t.T.tracer ] [es-d25-rm] [4203659][internal:index/shard/recovery/start_recovery] received request
[2018-12-26T17:14:13,083][TRACE][o.e.i.r.PeerRecoverySourceService] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10] starting recovery to {es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9z.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9z.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9z.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene50_0.tip], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene50_0.tim], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.nvd], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.nvm], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.fnm], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.fdx], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.dim], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene50_0.pay], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.dii], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene54_0.dvd], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.fdt], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene54_0.dvm], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene50_0.pos], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_3y_Lucene50_0.doc], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ak.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ak.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ak.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_am.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_am.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_am.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8c.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8c.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8c.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_an.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_an.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_an.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_aw.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_aw.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_aw.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8n.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8n.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8n.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_a6.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_a6.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_a6.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ay.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ay.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ay.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ax.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ax.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ax.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_4o.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_4o.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_4o.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_86.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_86.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_86.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8z.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8z.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8z.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9a.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9a.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9a.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9k.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9k.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9k.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene50_0.pos], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene50_0.doc], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.fdt], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.fdx], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene50_0.tim], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.nvd], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene50_0.tip], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.dim], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene50_0.pay], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene54_0.dvm], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h_Lucene54_0.dvd], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.dii], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.nvm], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7h.fnm], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1f.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1f.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1f.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7q.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7q.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7q.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9u.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9u.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9u.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9w.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9w.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9w.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9v.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9v.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9v.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1n.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1n.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1n.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_96.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_96.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_96.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7w.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7w.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_7w.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9y.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9y.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9y.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9x.cfe], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9x.si], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9x.cfs], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_1n_1.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ax_1.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9k_5.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8c_2.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9w_2.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_8z_2.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_ak_1.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_9a_3.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_aw_1.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_am_2.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [_a6_1.liv], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering [segments_4l], does not exist in remote
[2018-12-26T17:14:13,099][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10][recover to es-d26-rm] recovery [phase1]: recovering_files [117] with total_size [9.5gb], reusing_files [0] with total_size [0b]
[2018-12-26T17:14:13,099][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624142][internal:index/shard/recovery/filesInfo] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,115][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624142][internal:index/shard/recovery/filesInfo] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,115][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624143][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,130][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624143][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,130][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624144][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,146][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624144][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,146][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624145][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,162][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624145][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,162][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624146][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,162][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624146][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,162][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624147][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,177][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624147][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,177][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624148][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,193][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624148][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,193][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624149][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,208][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624149][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,208][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624150][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,224][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624150][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,224][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624151][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,240][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624151][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,240][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624152][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,240][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624152][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,255][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624153][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,255][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624153][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,255][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624154][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,271][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624154][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,271][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624155][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,287][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624155][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,287][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624157][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,302][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624157][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,302][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624158][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,318][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624158][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,318][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624159][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,318][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624159][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,318][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624160][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,333][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624160][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,333][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624161][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,349][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624161][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,349][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624162][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,365][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624162][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,365][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624163][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,365][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624163][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,365][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624164][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,380][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624164][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,380][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624165][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,380][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624165][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,380][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624166][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,396][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624166][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,396][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624167][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,412][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624167][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,412][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624168][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,427][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624168][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,427][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624169][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,427][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624169][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,427][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624170][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,443][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624170][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,443][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624171][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,458][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624171][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,458][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624172][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,458][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624172][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,458][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624173][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,474][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624173][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,474][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624174][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,490][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624174][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,490][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624175][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,490][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624175][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,490][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624176][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,505][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624176][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,505][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624177][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,505][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624177][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,505][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624178][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,524][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624178][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,524][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624179][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,537][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624179][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,537][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624180][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,537][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624180][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,537][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624181][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,552][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624181][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}]
[2018-12-26T17:14:13,552][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624182][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m])
[2018-12-26T17:14:13,552][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624182][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,552][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624183][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,568][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624183][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,568][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624184][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,583][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624184][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,583][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624185][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,583][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624185][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,583][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624186][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,599][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624186][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,599][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624187][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,615][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624187][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,615][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624188][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,615][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624188][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,615][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624189][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,630][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624189][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,630][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624190][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,630][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624190][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,630][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624191][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,646][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624191][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,646][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624192][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,661][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624192][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,661][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624193][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,661][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624193][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,661][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624194][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,677][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624194][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,677][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624195][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,693][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624195][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,693][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624196][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,693][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624196][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,693][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624197][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,708][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624197][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,708][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624198][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,708][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624198][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,708][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624199][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,724][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624199][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,724][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624200][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,740][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624200][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,740][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624201][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,740][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624201][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,740][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624202][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,755][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624202][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,755][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624203][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,771][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624203][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,771][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624204][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,771][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624204][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,771][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624205][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,786][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624205][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,786][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624206][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,802][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624206][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,802][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624207][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,818][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624207][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,818][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624208][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,833][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624208][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,833][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624209][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,833][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624209][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,849][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624210][internal:index/shard/recovery/file_chunk] sent to [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] (timeout: [15m]) | |
[2018-12-26T17:14:13,849][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14624210][internal:index/shard/recovery/file_chunk] received response from [{es-d26-rm}{5zgNjKhaR7CxLqtP8jItCg}{f9WbUC38S_mvZ-Ej2Sl2Ig}{192.168.0.176}{192.168.0.176:9300}{faultDomain=2, updateDomain=5}] | |
[2018-12-26T17:14:13,849][TRACE][o.e.t.T.tracer ] [es-d25-rm] [4203659][internal:index/shard/recovery/start_recovery] sent error response | |
org.elasticsearch.index.engine.RecoveryEngineException: Phase[1] phase1 failed | |
at org.elasticsearch.indices.recovery.RecoverySourceHandler.recoverToTarget(RecoverySourceHandler.java:175) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoverySourceService.recover(PeerRecoverySourceService.java:98) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoverySourceService.access$000(PeerRecoverySourceService.java:50) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoverySourceService$StartRecoveryTransportRequestHandler.messageReceived(PeerRecoverySourceService.java:107) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoverySourceService$StartRecoveryTransportRequestHandler.messageReceived(PeerRecoverySourceService.java:104) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:30) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1555) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.indices.recovery.RecoverFilesRecoveryException: Failed to transfer [117] files with total size of [9.5gb] | |
at org.elasticsearch.indices.recovery.RecoverySourceHandler.phase1(RecoverySourceHandler.java:419) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.RecoverySourceHandler.recoverToTarget(RecoverySourceHandler.java:173) ~[elasticsearch-6.2.4.jar:6.2.4] | |
... 12 more | |
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-d26-rm][192.168.0.176:9300][internal:index/shard/recovery/file_chunk] | |
Caused by: org.elasticsearch.index.shard.IndexShardClosedException: CurrentState[CLOSED] Closed | |
at org.elasticsearch.indices.recovery.RecoveriesCollection.getRecoverySafe(RecoveriesCollection.java:150) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:578) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:571) ~[elasticsearch-6.2.4.jar:6.2.4] | |
... 8 more | |
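The stack above is the source-side (es-d25-rm) view of the failure: chunk [14624210] was still being acknowledged at 17:14:13,849 when the target es-d26-rm closed the recovering shard, so the pending start_recovery request [4203659] failed in phase1 after moving only part of the [117] files / [9.5gb]. The IndexShardClosedException on the target is consistent with the [no activity after [30m]] cancellations visible in the es-d38-rm log later in this gist. While a recovery is stalled like this, its progress can be inspected from any node; a minimal sketch against the standard REST API (localhost:9200 is an assumption, and codesearchshared_11_0 is taken from the surrounding traces; substitute the affected index):

# list in-flight recoveries with stage, source/target node and byte progress
curl -s 'localhost:9200/_cat/recovery?v'

# per-shard detail for the affected index, active recoveries only
curl -s 'localhost:9200/codesearchshared_11_0/_recovery?active_only=true&human'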
[2018-12-26T17:32:02,080][TRACE][o.e.t.T.tracer ] [es-d25-rm] [339755][internal:index/shard/recovery/start_recovery] received request | |
[2018-12-26T17:32:02,080][TRACE][o.e.i.r.PeerRecoverySourceService] [es-d25-rm] [codesearchshared_11_0][22] starting recovery to {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} | |
[2018-12-26T17:32:02,220][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] skipping [phase1]- identical sync id [L-RPHHhCSiSkVwspNAKP5A] found on both source and target | |
[2018-12-26T17:32:02,220][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase1]: took [0s] | |
[2018-12-26T17:32:02,220][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase1]: prepare remote engine for translog | |
[2018-12-26T17:32:02,220][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14632430][internal:index/shard/recovery/prepare_translog] sent to [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}] (timeout: [15m]) | |
[2018-12-26T17:32:02,361][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14632430][internal:index/shard/recovery/prepare_translog] received response from [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}] | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase1]: remote engine start took [136ms] | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] all operations up to [89] completed, which will be used as an ending sequence number | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] snapshot translog for recovery; current size is [0] | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase2]: sending transaction log operations (seq# from [0], required [90:89] | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] no translog operations to send | |
[2018-12-26T17:32:02,361][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14632444][internal:index/shard/recovery/translog_ops] sent to [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}] (timeout: [30m]) | |
[2018-12-26T17:32:02,361][TRACE][o.e.t.T.tracer ] [es-d25-rm] [14632444][internal:index/shard/recovery/translog_ops] received response from [{es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17}] | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] sent final batch of [0][0b] (total: [0]) translog operations | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] recovery [phase2]: took [1.4ms] | |
[2018-12-26T17:32:02,361][TRACE][o.e.i.r.RecoverySourceHandler] [es-d25-rm] [codesearchshared_11_0][22][recover to es-d38-rm] finalizing recovery |
----- second gist file: trace log from the recovery target node, es-d38-rm -----
[2018-12-26T14:20:56,631][INFO ][o.e.c.s.ClusterSettings ] [es-d38-rm] updating [transport.tracer.include] from [[]] to [["internal:index/shard/recovery/*"]] | |
[2018-12-26T14:28:49,358][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [monitor] no status found for [1119], shutting down | |
[2018-12-26T14:40:58,691][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [monitor] no status found for [1121], shutting down | |
[2018-12-26T14:41:18,410][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [monitor] no status found for [1124], shutting down | |
[2018-12-26T14:41:18,425][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [monitor] rescheduling check for [1125]. last access time is [1425370290869100] | |
[2018-12-26T15:11:18,434][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] failing recovery from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10}, id [1125]. Send shard failure: [true] | |
[2018-12-26T15:11:18,434][WARN ][o.e.i.c.IndicesClusterStateService] [es-d38-rm] [[codesearchshared_11_0][41]] marking and sending shard failed due to [failed recovery] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
[2018-12-26T15:11:18,434][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] recovery cancelled | |
org.elasticsearch.common.util.CancellableThreads$ExecutionCancelledException: operation was cancelled reason [failed recovery [RecoveryFailedException[[codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) | |
Caused by: ElasticsearchTimeoutException[no activity after [30m]] | |
... 6 more | |
]] | |
at org.elasticsearch.common.util.CancellableThreads.onCancel(CancellableThreads.java:63) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:129) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:86) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:195) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:81) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:635) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Suppressed: java.lang.IllegalStateException: Future got interrupted | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:47) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:32) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.lambda$doRecovery$1(PeerRecoveryTargetService.java:202) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:105) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:86) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:195) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:81) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:635) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: java.lang.InterruptedException | |
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) ~[?:1.8.0_72] | |
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) ~[?:1.8.0_72] | |
at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:251) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:94) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:44) ~[elasticsearch-6.2.4.jar:6.2.4] | |
... 12 more | |
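The [no activity after [30m]] timeout above is raised by the target-side recovery monitor (RecoveriesCollection$RecoveryMonitor): it periodically reschedules itself (the [rescheduling check for [1125]] line) and cancels the recovery once no chunk, translog, or finalize traffic has touched it for the inactivity window. In 6.x that window is the dynamic cluster setting indices.recovery.recovery_activity_timeout, which defaults to 30 minutes. If a stall were merely a slow source rather than a hang like the one traced here, lengthening the window would be one possible mitigation; a sketch (the 60m value is an arbitrary example):

# raise the recovery inactivity window cluster-wide (dynamic, default 30m)
curl -s -X PUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"indices.recovery.recovery_activity_timeout": "60m"}}'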
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] started recovery from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}, id [1126] | |
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] collecting local files for [{es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}] | |
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] local file count [34] | |
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] Calculate starting seqno based on global checkpoint [-2], safe commit [CommitPoint{segment[segments_9v], userData[{history_uuid=vLnWP8VURmaFbYqP0MpGQw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=L-RPHHhCSiSkVwspNAKP5A, translog_generation=1, translog_uuid=27gFE-t_SDSsd2GdeB7ZBw}]}], existing commits [CommitPoint{segment[segments_9v], userData[{history_uuid=vLnWP8VURmaFbYqP0MpGQw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=L-RPHHhCSiSkVwspNAKP5A, translog_generation=1, translog_uuid=27gFE-t_SDSsd2GdeB7ZBw}]}] | |
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] preparing for file-based recovery from [{es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}] | |
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] preparing shard for peer recovery | |
[2018-12-26T15:11:19,153][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] starting recovery from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} | |
[2018-12-26T15:11:19,153][TRACE][o.e.t.T.tracer ] [es-d38-rm] [323965][internal:index/shard/recovery/start_recovery] sent to [{es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}] (timeout: [null]) | |
[2018-12-26T15:11:19,168][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14566589][internal:index/shard/recovery/prepare_translog] received request | |
[2018-12-26T15:11:19,231][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14566589][internal:index/shard/recovery/prepare_translog] sent response | |
[2018-12-26T15:11:19,231][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14566590][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T15:11:19,231][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14566590][internal:index/shard/recovery/translog_ops] sent response | |
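At this point the target has answered prepare_translog and translog_ops for [codesearchshared_11_0][22] (request [14566590], matching the source-side trace at 15:11:19) and the trace goes silent: no finalize request ever arrives, the monitor merely reschedules at 15:41, and the recovery is cancelled at 16:11. Because the transport tracer tags every message with the same request id on both ends, one recovery can be followed across the two node logs; a small sketch (the log file names are assumptions):

# follow a single recovery request across source and target logs by its id
grep -h '\[14566590\]' es-d25-rm.log es-d38-rm.log | sort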
[2018-12-26T15:41:19,160][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [monitor] rescheduling check for [1126]. last access time is [1428970925690100] | |
[2018-12-26T16:11:19,168][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] failing recovery from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}, id [1126]. Send shard failure: [true] | |
[2018-12-26T16:11:19,168][WARN ][o.e.i.c.IndicesClusterStateService] [es-d38-rm] [[codesearchshared_11_0][22]] marking and sending shard failed due to [failed recovery] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_11_0][22]: Recovery failed from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
[2018-12-26T16:11:19,168][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] recovery cancelled | |
org.elasticsearch.common.util.CancellableThreads$ExecutionCancelledException: operation was cancelled reason [failed recovery [RecoveryFailedException[[codesearchshared_11_0][22]: Recovery failed from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) | |
Caused by: ElasticsearchTimeoutException[no activity after [30m]] | |
... 6 more | |
]] | |
at org.elasticsearch.common.util.CancellableThreads.onCancel(CancellableThreads.java:63) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:129) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:86) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:195) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:81) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:635) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Suppressed: java.lang.IllegalStateException: Future got interrupted | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:47) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:32) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.lambda$doRecovery$1(PeerRecoveryTargetService.java:202) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:105) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:86) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:195) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:81) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:635) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: java.lang.InterruptedException | |
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) ~[?:1.8.0_72] | |
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) ~[?:1.8.0_72] | |
at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:251) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:94) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:44) ~[elasticsearch-6.2.4.jar:6.2.4] | |
... 12 more | |
[2018-12-26T16:11:19,918][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] started recovery from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10}, id [1127] | |
[2018-12-26T16:11:19,918][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] collecting local files for [{es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10}] | |
[2018-12-26T16:11:19,934][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] local file count [116] | |
[2018-12-26T16:11:19,950][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] Calculate starting seqno based on global checkpoint [-2], safe commit [CommitPoint{segment[segments_h9], userData[{history_uuid=qEZHmHZnQUW6hHxrm7jWnw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=yEvXW87fTsOwJYrRbZJOpA, translog_generation=1, translog_uuid=qotEZ4YIQt6ObvHNyEV3eg}]}], existing commits [CommitPoint{segment[segments_h9], userData[{history_uuid=qEZHmHZnQUW6hHxrm7jWnw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=yEvXW87fTsOwJYrRbZJOpA, translog_generation=1, translog_uuid=qotEZ4YIQt6ObvHNyEV3eg}]}] | |
[2018-12-26T16:11:19,950][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] preparing for file-based recovery from [{es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10}] | |
[2018-12-26T16:11:19,950][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] preparing shard for peer recovery | |
[2018-12-26T16:11:19,950][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] starting recovery from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} | |
[2018-12-26T16:11:19,950][TRACE][o.e.t.T.tracer ] [es-d38-rm] [330693][internal:index/shard/recovery/start_recovery] sent to [{es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10}] (timeout: [null]) | |
[2018-12-26T16:11:19,965][TRACE][o.e.t.T.tracer ] [es-d38-rm] [6595079][internal:index/shard/recovery/prepare_translog] received request | |
[2018-12-26T16:11:20,090][TRACE][o.e.t.T.tracer ] [es-d38-rm] [6595079][internal:index/shard/recovery/prepare_translog] sent response | |
[2018-12-26T16:11:20,106][TRACE][o.e.t.T.tracer ] [es-d38-rm] [6595080][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T16:11:20,106][TRACE][o.e.t.T.tracer ] [es-d38-rm] [6595080][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T16:41:19,944][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [monitor] rescheduling check for [1127]. last access time is [1432571786585800] | |
[2018-12-26T17:11:19,958][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][41] failing recovery from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10}, id [1127]. Send shard failure: [true] | |
[2018-12-26T17:11:19,958][WARN ][o.e.i.c.IndicesClusterStateService] [es-d38-rm] [[codesearchshared_11_0][41]] marking and sending shard failed due to [failed recovery] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
[2018-12-26T17:11:19,958][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] recovery cancelled | |
org.elasticsearch.common.util.CancellableThreads$ExecutionCancelledException: operation was cancelled reason [failed recovery [RecoveryFailedException[[codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) | |
Caused by: ElasticsearchTimeoutException[no activity after [30m]] | |
... 6 more | |
]] | |
at org.elasticsearch.common.util.CancellableThreads.onCancel(CancellableThreads.java:63) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:129) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:86) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:195) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:81) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:635) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Suppressed: java.lang.IllegalStateException: Future got interrupted | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:47) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:32) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.lambda$doRecovery$1(PeerRecoveryTargetService.java:202) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:105) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.CancellableThreads.execute(CancellableThreads.java:86) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:195) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:81) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:635) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: java.lang.InterruptedException | |
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) ~[?:1.8.0_72] | |
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) ~[?:1.8.0_72] | |
at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:251) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:94) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:44) ~[elasticsearch-6.2.4.jar:6.2.4] | |
... 12 more | |
[2018-12-26T17:11:24,333][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] started recovery from {es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7}, id [1128] | |
[2018-12-26T17:11:24,333][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] collecting local files for [{es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7}] | |
[2018-12-26T17:11:25,790][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] local file count [420] | |
[2018-12-26T17:11:25,836][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] Calculate starting seqno based on global checkpoint [-2], safe commit [CommitPoint{segment[segments_tj6], userData[{max_unsafe_auto_id_timestamp=-1, sync_id=AWfgsqQaUeUCgNRyvzY5, translog_generation=1138, translog_uuid=JsAM9PujSd-IykqMqExFrw}]}], existing commits [CommitPoint{segment[segments_tj6], userData[{max_unsafe_auto_id_timestamp=-1, sync_id=AWfgsqQaUeUCgNRyvzY5, translog_generation=1138, translog_uuid=JsAM9PujSd-IykqMqExFrw}]}] | |
[2018-12-26T17:11:25,836][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] preparing for file-based recovery from [{es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7}] | |
[2018-12-26T17:11:25,836][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] preparing shard for peer recovery | |
[2018-12-26T17:11:25,836][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] starting recovery from {es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7} | |
[2018-12-26T17:11:25,836][TRACE][o.e.t.T.tracer ] [es-d38-rm] [337466][internal:index/shard/recovery/start_recovery] sent to [{es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7}] (timeout: [null]) | |
[2018-12-26T17:11:27,008][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385657][internal:index/shard/recovery/filesInfo] received request | |
[2018-12-26T17:11:27,008][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385657][internal:index/shard/recovery/filesInfo] sent response | |
[2018-12-26T17:11:27,008][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385658][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,008][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385658][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,008][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385659][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,024][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385659][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,024][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385660][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,040][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385660][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,040][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385661][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,040][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385661][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,055][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385662][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,055][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385662][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,055][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385663][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,071][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385663][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,071][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385664][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,086][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385664][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,086][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385665][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,086][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385665][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,086][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385666][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,102][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385666][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,102][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385667][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,102][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385667][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,102][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385668][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,118][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385668][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,118][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385669][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,133][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385669][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,133][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385670][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,133][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385670][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,133][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385671][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,149][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385671][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,149][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385672][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,149][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385672][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,149][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385673][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,165][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385673][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,165][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385674][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,180][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385674][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,180][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385675][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,180][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385675][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,180][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385676][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,196][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385676][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,196][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385677][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,212][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385677][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,212][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385678][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,212][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385678][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,212][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385679][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,227][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385679][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,227][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385680][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,227][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385680][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,227][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385681][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,243][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385681][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,243][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385682][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,258][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385682][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,258][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385683][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,258][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385683][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,258][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385684][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,274][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385684][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,274][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385685][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,274][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385685][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,274][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385686][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,290][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385686][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,290][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385687][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,306][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385687][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,306][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385688][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,306][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385688][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,306][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385689][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,321][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385689][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,321][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385690][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,321][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385690][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,337][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385691][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,337][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385691][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,337][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385692][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,352][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385692][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,352][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385693][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,352][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385693][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,352][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385694][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,368][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385694][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,368][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385695][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,368][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385695][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,368][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385696][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,384][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385696][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,384][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385697][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,399][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385697][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,399][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385698][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,399][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385698][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,399][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385699][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,415][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385699][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,415][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385700][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,415][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385700][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,431][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385701][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,431][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385701][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,431][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385702][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:11:27,447][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385702][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:11:27,447][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9385703][internal:index/shard/recovery/file_chunk] received request | |
<Logs removed for brevity> | |
[2018-12-26T17:31:40,500][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474513][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,516][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474514][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,516][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474514][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,516][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474515][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,531][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474515][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,531][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474516][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,547][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474516][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,547][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474517][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,547][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474517][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,562][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474518][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,578][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474518][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,578][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474519][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,578][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474519][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,594][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474520][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,594][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474520][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,609][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474521][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,609][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474521][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,625][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474522][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,625][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474522][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,641][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474523][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,641][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474523][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,656][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474524][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,656][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474524][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,672][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474525][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,672][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474525][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,687][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474526][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:40,687][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474526][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:40,703][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474527][internal:index/shard/recovery/file_chunk] received request | |
[2018-12-26T17:31:42,086][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474527][internal:index/shard/recovery/file_chunk] sent response | |
[2018-12-26T17:31:42,790][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474537][internal:index/shard/recovery/clean_files] received request | |
[2018-12-26T17:31:49,755][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474537][internal:index/shard/recovery/clean_files] sent response | |
[2018-12-26T17:31:49,755][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474617][internal:index/shard/recovery/prepare_translog] received request | |
[2018-12-26T17:32:00,095][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474617][internal:index/shard/recovery/prepare_translog] sent response | |
[2018-12-26T17:32:00,141][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474752][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,188][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474752][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,250][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474753][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,297][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474753][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,329][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474754][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,344][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474754][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,391][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474755][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,422][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474755][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,469][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474756][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,500][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474756][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,610][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474757][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,657][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474757][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,688][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474758][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,719][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474758][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,782][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474759][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,813][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474759][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,891][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474760][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:00,922][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474760][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:00,969][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474761][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:01,000][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474761][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:01,016][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474763][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:01,047][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474763][internal:index/shard/recovery/translog_ops] sent response | |
[2018-12-26T17:32:01,047][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474764][internal:index/shard/recovery/finalize] received request | |
[2018-12-26T17:32:01,063][TRACE][o.e.t.T.tracer ] [es-d38-rm] [9474764][internal:index/shard/recovery/finalize] sent response | |
[2018-12-26T17:32:01,063][TRACE][o.e.t.T.tracer ] [es-d38-rm] [337466][internal:index/shard/recovery/start_recovery] received response from [{es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7}] | |
[2018-12-26T17:32:01,063][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] marking recovery from {es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7} as done, id [1128] | |
[2018-12-26T17:32:01,063][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_9_0][50] recovery completed from {es-d28-rm}{42J8VHHwQDSAWMMBKpPq8g}{c7J0DHt_QkKQSSS18gcg7w}{192.168.0.178}{192.168.0.178:9300}{faultDomain=1, updateDomain=7}, took[20.6m] | |
phase1: recovered_files [195] with total_size of [301gb], took [0s], throttling_wait [0s] | |
: reusing_files [216] with total_size of [262.1gb] | |
phase2: start took [10.3s] | |
: recovered [1106] transaction log operations, took [949ms] | |
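For this shard the file copy (195 files, ~301gb) dominated the 20.6m total. In 6.x the per-node copy rate is capped by the dynamic setting indices.recovery.max_bytes_per_sec (40mb by default); if the chunk stream looks healthy but slow, raising the cap transiently is one lever. A sketch under the same assumptions as the snippet above; the 200mb value is illustrative, not a recommendation:

import requests

BASE = "http://localhost:9200"  # assumption

resp = requests.put(f"{BASE}/_cluster/settings",
                    json={"transient": {"indices.recovery.max_bytes_per_sec": "200mb"}})
resp.raise_for_status()
print(resp.json())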
[2018-12-26T17:32:01,907][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] started recovery from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}, id [1129] | |
[2018-12-26T17:32:01,907][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] collecting local files for [{es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}] | |
[2018-12-26T17:32:02,000][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] local file count [34] | |
[2018-12-26T17:32:02,016][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] Calculate starting seqno based on global checkpoint [-2], safe commit [CommitPoint{segment[segments_9w], userData[{history_uuid=vLnWP8VURmaFbYqP0MpGQw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=L-RPHHhCSiSkVwspNAKP5A, translog_generation=1, translog_uuid=p4WVis9nS8S4Qc2XUXpLdQ}]}], existing commits [CommitPoint{segment[segments_9w], userData[{history_uuid=vLnWP8VURmaFbYqP0MpGQw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=L-RPHHhCSiSkVwspNAKP5A, translog_generation=1, translog_uuid=p4WVis9nS8S4Qc2XUXpLdQ}]}] | |
[2018-12-26T17:32:02,016][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] preparing for file-based recovery from [{es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}] | |
[2018-12-26T17:32:02,016][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] preparing shard for peer recovery | |
[2018-12-26T17:32:02,016][TRACE][o.e.i.r.PeerRecoveryTargetService] [es-d38-rm] [codesearchshared_11_0][22] starting recovery from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} | |
[2018-12-26T17:32:02,016][TRACE][o.e.t.T.tracer ] [es-d38-rm] [339755][internal:index/shard/recovery/start_recovery] sent to [{es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4}] (timeout: [null]) | |
[2018-12-26T17:32:02,157][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14632430][internal:index/shard/recovery/prepare_translog] received request | |
[2018-12-26T17:32:02,297][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14632430][internal:index/shard/recovery/prepare_translog] sent response | |
[2018-12-26T17:32:02,297][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14632444][internal:index/shard/recovery/translog_ops] received request | |
[2018-12-26T17:32:02,297][TRACE][o.e.t.T.tracer ] [es-d38-rm] [14632444][internal:index/shard/recovery/translog_ops] sent response |
[2018-12-26T14:20:56,921][INFO ][o.e.c.s.ClusterSettings ] [es-m01-rm] updating [transport.tracer.include] from [[]] to [["internal:index/shard/recovery/*"]] | |
[2018-12-26T14:21:00,708][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
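This low-watermark message repeats roughly every 30 seconds for the rest of the master log: es-d32-rm is above 85% disk usage, so no new replicas are allocated to it, concentrating recovery traffic on the remaining nodes. The watermarks are dynamic settings and accept percentages or absolute free-space values; a transient override can buy room while disk is freed. A hedged sketch, same assumptions as the earlier snippets, with illustrative values:

import requests

BASE = "http://localhost:9200"  # assumption

resp = requests.put(f"{BASE}/_cluster/settings",
                    json={"transient": {
                        "cluster.routing.allocation.disk.watermark.low": "90%",
                        "cluster.routing.allocation.disk.watermark.high": "95%",
                    }})
resp.raise_for_status()
print(resp.json())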
[2018-12-26T14:21:34,137][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:21:57,022][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_8_4][6] received shard failed for shard id [[workitemsearchshared_8_4][6]], allocation id [L0rgTEuhT6KGmANQhEJ9AA], primary term [11], message [mark copy as stale] | |
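"mark copy as stale" means the primary dropped this replica from the in-sync set after it failed to acknowledge writes, so the copy must be rebuilt by peer recovery. The allocation-explain API reports why a given copy is unassigned or stale. A sketch against the shard from the warning above (index and shard number copied from the log line; primary=False targets the replica copy):

import requests

BASE = "http://localhost:9200"  # assumption

resp = requests.get(f"{BASE}/_cluster/allocation/explain",
                    json={"index": "workitemsearchshared_8_4", "shard": 6, "primary": False})
resp.raise_for_status()
print(resp.json())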
[2018-12-26T14:22:05,692][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:22:36,919][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:22:42,831][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_3_1311][8] received shard failed for shard id [[wikisearchshared_3_1311][8]], allocation id [11I7qaaZTk-eMgBirCbnPw], primary term [28], message [mark copy as stale] | |
[2018-12-26T14:23:09,453][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:23:41,666][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:24:12,912][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:24:45,249][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:25:16,574][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:25:38,148][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1045477] overhead, spent [263ms] collecting in the last [1s] | |
[2018-12-26T14:25:47,601][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:26:17,762][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:26:49,462][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:26:55,563][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_3_4][11] received shard failed for shard id [[workitemsearchshared_3_4][11]], allocation id [tZa4l0dtSgyyqJwjPrBKew], primary term [9], message [mark copy as stale] | |
[2018-12-26T14:27:21,003][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:27:53,230][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:28:24,699][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:28:55,089][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:29:25,840][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:29:58,033][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:30:30,059][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:31:00,704][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:31:34,223][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:32:06,022][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:32:36,975][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:33:09,208][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:33:41,372][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:34:12,843][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:34:44,797][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:35:15,232][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:35:46,656][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:36:17,371][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:36:20,577][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_1_4][10] received shard failed for shard id [[workitemsearchshared_1_4][10]], allocation id [4uBEjTuNQ_6TnTV5ROpzGQ], primary term [10], message [mark copy as stale] | |
[2018-12-26T14:36:49,446][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:37:20,999][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:37:53,319][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:38:24,599][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:38:54,758][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:39:25,716][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:39:57,678][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:40:29,878][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526.1gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:41:00,679][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 526gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:41:33,957][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:42:05,802][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:42:36,998][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:43:09,166][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:43:41,470][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:44:13,103][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:44:44,902][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:45:15,394][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:45:19,560][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_5_4][3] received shard failed for shard id [[workitemsearchshared_5_4][3]], allocation id [yrBM1Sk8Ry6Q9pPXvtJMAg], primary term [7], message [mark copy as stale] | |
[2018-12-26T14:45:46,827][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:46:17,457][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:46:49,538][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:47:21,069][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:47:53,142][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:48:24,720][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:48:54,901][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:49:25,828][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:49:58,077][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:50:30,229][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:50:57,877][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][2] received shard failed for shard id [[wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][2]], allocation id [4mlv-5FuTxOULqasrc2D9w], primary term [5], message [mark copy as stale] | |
[2018-12-26T14:50:58,792][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_0_4][5] received shard failed for shard id [[workitemsearchshared_0_4][5]], allocation id [uzKfygMAQI6qWP74Q7LMaA], primary term [9], message [mark copy as stale] | |
[2018-12-26T14:51:00,989][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:51:34,486][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:51:39,410][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_0_4][10] received shard failed for shard id [[workitemsearchshared_0_4][10]], allocation id [bkXpu9CvT5amtrf5SHG1sw], primary term [12], message [mark copy as stale] | |
[2018-12-26T14:52:06,024][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:52:37,394][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:53:10,104][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:53:32,858][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][10] received shard failed for shard id [[wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][10]], allocation id [Yevqlml3R421AeB1dv5UxA], primary term [5], message [mark copy as stale] | |
[2018-12-26T14:53:32,858][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][8] received shard failed for shard id [[wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][8]], allocation id [wMbxSCSsS5uXxzi58UlWpA], primary term [5], message [mark copy as stale] | |
[2018-12-26T14:53:32,858][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][4] received shard failed for shard id [[wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][4]], allocation id [ciXuXhknRhW9nLYaKFC6xw], primary term [5], message [mark copy as stale] | |
[2018-12-26T14:53:35,313][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][7] received shard failed for shard id [[wiki_wikicontract_1421_shared_cd9f3002-d8e8-4669-9a9d-c949480096aa][7]], allocation id [1JiCZrfBSxeQopVIUhg6iA], primary term [4], message [mark copy as stale] | |
[2018-12-26T14:53:42,399][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:54:13,399][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:54:45,084][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:55:15,528][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:55:46,889][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:56:17,583][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:56:49,787][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:57:21,258][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:57:53,374][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:58:24,999][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:58:55,415][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:59:26,106][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T14:59:28,571][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_6_4][9] received shard failed for shard id [[workitemsearchshared_6_4][9]], allocation id [1EgtUI5lSTu51bMWAOlY5g], primary term [9], message [mark copy as stale] | |
[2018-12-26T14:59:38,179][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_7_4][4] received shard failed for shard id [[workitemsearchshared_7_4][4]], allocation id [J_RrFlTPSWuImGFlNxzcHw], primary term [8], message [mark copy as stale] | |
[2018-12-26T14:59:58,395][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:00:11,552][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_0_4][8] received shard failed for shard id [[workitemsearchshared_0_4][8]], allocation id [BP3KjpCVTwiC-N6IdzZVvA], primary term [9], message [mark copy as stale] | |
[2018-12-26T15:00:25,144][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_3_4][10] received shard failed for shard id [[workitemsearchshared_3_4][10]], allocation id [zvRDRx-bTgS4JsJDO0N_Qw], primary term [8], message [mark copy as stale] | |
[2018-12-26T15:00:30,267][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:01:01,691][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:01:34,602][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_8_4][3] received shard failed for shard id [[workitemsearchshared_8_4][3]], allocation id [ut_Gt9bCSHy2Vx2863ryvQ], primary term [8], message [mark copy as stale] | |
[2018-12-26T15:01:35,102][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:01:37,427][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_8_4][4] received shard failed for shard id [[workitemsearchshared_8_4][4]], allocation id [I1TwEatUTXWBw4zgQf4Ugg], primary term [8], message [mark copy as stale] | |
[2018-12-26T15:02:06,379][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:02:37,657][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:03:10,496][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:03:42,613][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:04:13,533][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:04:45,184][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:05:15,889][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:05:47,014][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:06:17,622][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:06:49,444][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:07:21,096][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:07:53,284][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:08:25,050][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:08:55,499][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:09:26,226][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:09:58,539][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:10:30,429][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:11:01,845][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:11:18,686][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_11_0][41] received shard failed for shard id [[codesearchshared_11_0][41]], allocation id [6KoGuze-QmSkeLwVptzzIw], primary term [0], message [failed recovery], failure [RecoveryFailedException[[codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; ] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
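This is the failure mode behind the "shards stuck initializing" symptom: the RecoveryMonitor on the target saw no activity for 30 minutes, cancelled the recovery, and the shard went back to initializing to start over. The window comes from indices.recovery.recovery_activity_timeout (default 30m, dynamic). Raising it gives very large shards more headroom, though it does not fix whatever stalled the transfer. A sketch under the same assumptions as the earlier snippets; 60m is illustrative:

import requests

BASE = "http://localhost:9200"  # assumption

resp = requests.put(f"{BASE}/_cluster/settings",
                    json={"transient": {"indices.recovery.recovery_activity_timeout": "60m"}})
resp.raise_for_status()
print(resp.json())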
[2018-12-26T15:11:35,313][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:12:07,000][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:12:37,857][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:12:38,531][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1048268] overhead, spent [342ms] collecting in the last [1.1s] | |
[2018-12-26T15:13:10,883][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:13:42,824][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T15:14:10,155][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_7_0][45] received shard failed for shard id [[codesearchshared_7_0][45]], allocation id [IUlFDNbpQK24-4wr18Xk0w], primary term [0], message [failed recovery], failure [RecoveryFailedException[[codesearchshared_7_0][45]: Recovery failed from {es-d22-rm}{fa_SY1bUTFCHJh501G3WjA}{6HPzzFZ8TI-wAP694RNc3g}{192.168.0.172}{192.168.0.172:9300}{faultDomain=1, updateDomain=1} into {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; ] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_7_0][45]: Recovery failed from {es-d22-rm}{fa_SY1bUTFCHJh501G3WjA}{6HPzzFZ8TI-wAP694RNc3g}{192.168.0.172}{192.168.0.172:9300}{faultDomain=1, updateDomain=1} into {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
[2018-12-26T15:14:13,790][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:14:45,978][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:15:17,133][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:15:48,094][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:16:19,308][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:16:51,062][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:17:22,325][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:17:53,760][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:18:25,316][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:18:38,448][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1048624] overhead, spent [274ms] collecting in the last [1s]
[2018-12-26T15:18:56,223][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:19:26,602][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474][5] received shard failed for shard id [[workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474][5]], allocation id [747Eb4DgRneTpyJwENvGfw], primary term [6], message [mark copy as stale]
[2018-12-26T15:19:27,423][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:19:39,097][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1048684] overhead, spent [269ms] collecting in the last [1s]
[2018-12-26T15:19:59,955][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:20:12,444][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_5_4][10] received shard failed for shard id [[workitemsearchshared_5_4][10]], allocation id [r8xXjYWSRE-ywzJXeHHD7w], primary term [9], message [mark copy as stale]
[2018-12-26T15:20:32,435][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:21:02,858][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:21:36,042][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:22:06,815][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:22:37,852][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:23:10,686][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:23:14,880][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_5_3][8] received shard failed for shard id [[workitemsearchshared_5_3][8]], allocation id [IeMwGQCMTwWufjxylc74zg], primary term [23], message [mark copy as stale]
[2018-12-26T15:23:42,776][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:24:13,885][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:24:45,467][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:25:15,803][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:25:47,135][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:26:17,818][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:26:49,568][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:27:21,234][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:27:53,007][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:28:25,574][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:28:56,847][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:29:28,986][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:30:01,211][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:30:34,838][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:31:06,723][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:31:37,505][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:32:09,702][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:32:41,893][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:33:13,419][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:33:45,342][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:34:15,608][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:34:47,084][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:35:17,824][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:35:49,575][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:35:59,251][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_0_3][11] received shard failed for shard id [[workitemsearchshared_0_3][11]], allocation id [KOlodThFQzqULZ-XenUZVw], primary term [23], message [mark copy as stale]
[2018-12-26T15:36:21,268][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:36:52,974][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:37:24,523][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:37:54,908][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:38:26,053][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:38:58,358][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:39:30,164][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:40:01,106][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:40:34,598][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:41:06,453][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:41:37,467][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:42:09,435][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:42:40,138][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:43:12,818][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:43:43,939][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][40] received shard failed for shard id [[codesearchshared_20_0][40]], allocation id [8LOPxb8_RNuDfSBnur3DbA], primary term [14], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][86] received shard failed for shard id [[codesearchshared_20_0][86]], allocation id [SJTDAQ-3SPG7y9uHQwFp1A], primary term [12], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][4] received shard failed for shard id [[codesearchshared_20_0][4]], allocation id [hUDK9N-TT1aci2vJVgTw9Q], primary term [11], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][90] received shard failed for shard id [[codesearchshared_20_0][90]], allocation id [E3VtLee6S5m3FfsViYO5WA], primary term [13], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][88] received shard failed for shard id [[codesearchshared_20_0][88]], allocation id [B-PiwXOcQIyyWC-AnVc_dQ], primary term [16], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][33] received shard failed for shard id [[codesearchshared_20_0][33]], allocation id [0qvvt3G1R2O2N2WPpOHSXQ], primary term [13], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][80] received shard failed for shard id [[codesearchshared_20_0][80]], allocation id [UZDcxvpvTY2SPrxAwBu1YQ], primary term [11], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][10] received shard failed for shard id [[codesearchshared_20_0][10]], allocation id [-TWI1w7STHCIDdiNbGgcHQ], primary term [13], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][17] received shard failed for shard id [[codesearchshared_20_0][17]], allocation id [jpWkO6DeR5KLxHDPcXeA8A], primary term [13], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][76] received shard failed for shard id [[codesearchshared_20_0][76]], allocation id [_XlFG3UzQeO5aqDPqdjGzA], primary term [13], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][81] received shard failed for shard id [[codesearchshared_20_0][81]], allocation id [cFclo0OcRpOGZGNPcUT_JQ], primary term [11], message [mark copy as stale]
[2018-12-26T15:43:50,158][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_20_0][83] received shard failed for shard id [[codesearchshared_20_0][83]], allocation id [as9b-4amRKawIUva00Cknw], primary term [11], message [mark copy as stale]
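
"mark copy as stale" is not itself a recovery failure: it means the primary removed those replica copies of codesearchshared_20_0 from the in-sync set after they failed to acknowledge writes, and each of them now needs a fresh recovery before it can serve traffic again. To ask the master why a given copy is unassigned or stuck initializing, the allocation-explain API can be pointed at one of the shards named above; a sketch under the same placeholder-endpoint assumption:

import requests

ES = "http://localhost:9200"  # placeholder: any node's HTTP address

# index/shard taken from the first warning in the burst above
# ([codesearchshared_20_0][40]); primary=False asks about the replica copy.
body = {"index": "codesearchshared_20_0", "shard": 40, "primary": False}
print(requests.get(f"{ES}/_cluster/allocation/explain", json=body).json())
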
[2018-12-26T15:44:14,277][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:44:46,047][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:45:16,946][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:45:48,211][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:46:18,744][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:46:50,458][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:47:21,676][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:47:53,543][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:48:25,253][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:48:55,412][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:49:26,313][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:49:58,859][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:50:30,671][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:51:01,406][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:51:35,212][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:52:06,799][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:52:37,809][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:53:10,246][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:53:42,435][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:54:13,754][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:54:45,963][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:55:16,665][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:55:16,992][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_1_1311][10] received shard failed for shard id [[wikisearchshared_1_1311][10]], allocation id [txgXIQe6Ss6fCWtDACyL8Q], primary term [25], message [mark copy as stale]
[2018-12-26T15:55:34,875][INFO ][o.e.c.m.MetaDataMappingService] [es-m01-rm] [workitemsearchshared_6_3/LwqATEVxTv-vahGQjuYtXg] update_mapping [workItemContract]
[2018-12-26T15:55:47,747][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:56:18,183][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:56:49,993][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:57:21,546][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:57:52,911][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:58:11,006][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_7_4][8] received shard failed for shard id [[workitemsearchshared_7_4][8]], allocation id [sTPJlsyuRSyOKpCTeJkB9g], primary term [8], message [mark copy as stale]
[2018-12-26T15:58:24,174][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:58:54,837][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:59:26,112][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T15:59:58,155][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:00:30,202][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:00:37,680][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_6_4][8] received shard failed for shard id [[workitemsearchshared_6_4][8]], allocation id [LkvgUn41SMCIlYdJZPPz1A], primary term [9], message [mark copy as stale]
[2018-12-26T16:01:01,168][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:01:34,478][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:02:06,714][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:02:38,021][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:02:49,714][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_9_3][10] received shard failed for shard id [[workitemsearchshared_9_3][10]], allocation id [vIzt8-syQueSpJi_slp9TQ], primary term [24], message [mark copy as stale]
[2018-12-26T16:03:10,624][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:03:42,722][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:04:13,911][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:04:45,749][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:05:16,269][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:05:20,927][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_4_4][10] received shard failed for shard id [[workitemsearchshared_4_4][10]], allocation id [ho2Bjf-WTpGreKnjr49wjQ], primary term [10], message [mark copy as stale]
[2018-12-26T16:05:41,384][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_1_1311][2] received shard failed for shard id [[wikisearchshared_1_1311][2]], allocation id [OY2cnhDmS_aTosK8bP2DrQ], primary term [23], message [mark copy as stale]
[2018-12-26T16:05:47,424][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:05:54,832][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_4_1311][3] received shard failed for shard id [[wikisearchshared_4_1311][3]], allocation id [5vigYbTIRy-vBOzZmw-SZQ], primary term [26], message [mark copy as stale]
[2018-12-26T16:06:18,166][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:06:47,400][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_1_0][21] received shard failed for shard id [[codesearchshared_1_0][21]], allocation id [td6cFaNESG6wfFZ6naiN9w], primary term [27], message [mark copy as stale]
[2018-12-26T16:06:50,057][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:07:21,768][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:07:53,545][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:08:25,269][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_7_3][11] received shard failed for shard id [[workitemsearchshared_7_3][11]], allocation id [CN-E3ZtfSYqvjOAWg3TFGQ], primary term [23], message [mark copy as stale]
[2018-12-26T16:08:25,410][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:08:55,569][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:09:26,555][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:09:58,927][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:10:30,703][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:11:01,519][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node
[2018-12-26T16:11:13,088][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_5_1311][4] received shard failed for shard id [[wikisearchshared_5_1311][4]], allocation id [MR-AdXWoR6OQYdvybBBGHQ], primary term [22], message [mark copy as stale]
[2018-12-26T16:11:19,425][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_11_0][22] received shard failed for shard id [[codesearchshared_11_0][22]], allocation id [rOTHQaV1Tuq2dZTFaNF8xw], primary term [0], message [failed recovery], failure [RecoveryFailedException[[codesearchshared_11_0][22]: Recovery failed from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; ]
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_11_0][22]: Recovery failed from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])
	at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72]
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m]
	... 6 more
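
This is another recovery killed for 30 minutes of inactivity, this time [codesearchshared_11_0][22] from es-d25-rm into es-d38-rm. Two knobs commonly reached for in this situation, sketched below under the same placeholder-endpoint assumption: indices.recovery.recovery_activity_timeout raises the inactivity limit (a workaround rather than a fix if recoveries are genuinely hung), and _cluster/reroute with retry_failed re-attempts shards that have exhausted their automatic allocation retries:

import requests

ES = "http://localhost:9200"  # placeholder: any node's HTTP address

# Transient setting: lasts until the next full cluster restart. 60m is an
# arbitrary example value, double the default 30m seen in the exceptions.
requests.put(f"{ES}/_cluster/settings", json={
    "transient": {"indices.recovery.recovery_activity_timeout": "60m"}
})

# Copies that failed recovery too many times stop being retried on their
# own; retry_failed asks the allocator to try them once more.
requests.post(f"{ES}/_cluster/reroute", params={"retry_failed": "true"})
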
[2018-12-26T16:11:35,105][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:11:46,098][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_1_1311][5] received shard failed for shard id [[wikisearchshared_1_1311][5]], allocation id [B_NFmFbsQICfrMFrg2Foyg], primary term [24], message [mark copy as stale] | |
[2018-12-26T16:12:06,862][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:12:38,140][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:13:10,600][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:13:42,650][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:14:10,992][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_6_0][56] received shard failed for shard id [[codesearchshared_6_0][56]], allocation id [xskylPX8RQ2tRnkm9whCZw], primary term [0], message [failed recovery], failure [RecoveryFailedException[[codesearchshared_6_0][56]: Recovery failed from {es-d27-rm}{dKxgy4hXT1CWsnhOyJnuTQ}{m4Q7Ml1fSAKMh8pmab8GVQ}{192.168.0.177}{192.168.0.177:9300}{faultDomain=0, updateDomain=6} into {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; ] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_6_0][56]: Recovery failed from {es-d27-rm}{dKxgy4hXT1CWsnhOyJnuTQ}{m4Q7Ml1fSAKMh8pmab8GVQ}{192.168.0.177}{192.168.0.177:9300}{faultDomain=0, updateDomain=6} into {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
[2018-12-26T16:14:13,968][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:14:45,496][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:15:15,959][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:15:47,410][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:16:18,249][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:16:50,127][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:17:21,766][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:17:53,591][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:17:54,209][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_8_1311][0] received shard failed for shard id [[wikisearchshared_8_1311][0]], allocation id [vlY27lxoS-6E2-RKrr0h8w], primary term [25], message [mark copy as stale] | |
[2018-12-26T16:18:25,375][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:18:55,541][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:19:26,579][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:19:58,980][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:20:21,105][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_4_1311][1] received shard failed for shard id [[wikisearchshared_4_1311][1]], allocation id [kDvN-yavTFKPO5VSM_f-Ew], primary term [24], message [mark copy as stale] | |
[2018-12-26T16:20:30,832][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:21:01,704][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:21:35,277][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:22:07,053][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:22:38,135][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:23:10,276][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:23:19,226][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_eddf7100-5cfb-4b3d-92cf-06f34accefc5][2] received shard failed for shard id [[wiki_wikicontract_1421_shared_eddf7100-5cfb-4b3d-92cf-06f34accefc5][2]], allocation id [S9-O47SKQWWdOKTWnNLjew], primary term [7], message [mark copy as stale] | |
[2018-12-26T16:23:36,797][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_2_1311][9] received shard failed for shard id [[wikisearchshared_2_1311][9]], allocation id [VEZxN1WfSbKrg3qIGmofdQ], primary term [27], message [mark copy as stale] | |
[2018-12-26T16:23:41,782][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:24:13,810][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:24:46,138][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:25:16,437][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:25:47,662][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:26:18,803][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:26:50,574][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:26:54,437][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_9_4][3] received shard failed for shard id [[workitemsearchshared_9_4][3]], allocation id [K9-thzNxROS_HAOx3aEekQ], primary term [9], message [mark copy as stale] | |
[2018-12-26T16:27:21,974][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:27:53,825][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:28:25,576][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:28:41,610][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_9_1311][0] received shard failed for shard id [[wikisearchshared_9_1311][0]], allocation id [MwqT-WzDQdudpImi31aHdQ], primary term [26], message [mark copy as stale] | |
[2018-12-26T16:28:55,767][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:29:27,672][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:29:59,585][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:30:31,218][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:31:01,844][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:31:25,310][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_8_4][11] received shard failed for shard id [[workitemsearchshared_8_4][11]], allocation id [vHcreO3lSGOEaBvVq1ZwJw], primary term [8], message [mark copy as stale] | |
[2018-12-26T16:31:35,422][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:32:07,464][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:32:38,219][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:32:39,259][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1053017] overhead, spent [289ms] collecting in the last [1.1s] | |
[2018-12-26T16:33:10,436][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:33:41,889][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:34:13,901][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:34:49,178][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:35:19,414][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:35:38,780][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1053187] overhead, spent [256ms] collecting in the last [1s] | |
[2018-12-26T16:35:41,097][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_7_4][10] received shard failed for shard id [[workitemsearchshared_7_4][10]], allocation id [xlHIb2JjRrqghPJ21VXRnw], primary term [9], message [mark copy as stale] | |
[2018-12-26T16:35:41,097][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_9_1311][6] received shard failed for shard id [[wikisearchshared_9_1311][6]], allocation id [w0zFLOV2SCGgmPqROkmZ0Q], primary term [24], message [mark copy as stale] | |
[2018-12-26T16:35:54,083][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:36:26,493][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:36:50,792][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_7_1311][8] received shard failed for shard id [[wikisearchshared_7_1311][8]], allocation id [W29vafW2QjukDRzCpsohbQ], primary term [25], message [mark copy as stale] | |
[2018-12-26T16:36:58,499][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:37:31,775][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:38:02,137][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:38:35,372][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:39:07,237][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:39:38,184][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:40:10,342][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:40:41,537][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:41:13,788][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:41:39,412][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1053544] overhead, spent [348ms] collecting in the last [1s] | |
[2018-12-26T16:41:45,197][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:42:15,377][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:42:47,063][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.9gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:43:17,951][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:43:49,089][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:44:19,398][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:44:51,346][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:44:58,110][INFO ][o.e.c.m.MetaDataMappingService] [es-m01-rm] [workitemsearchshared_5_3/pOlrkTJ0SjytvG1OJ3CAlQ] update_mapping [workItemContract] | |
[2018-12-26T16:45:22,330][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:45:54,126][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:46:25,999][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:46:39,531][INFO ][o.e.m.j.JvmGcMonitorService] [es-m01-rm] [gc][1053841] overhead, spent [273ms] collecting in the last [1s] | |
[2018-12-26T16:46:56,506][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:47:27,209][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:47:59,441][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:48:31,598][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:49:02,995][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:49:36,570][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:50:07,654][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:50:08,061][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474][6] received shard failed for shard id [[workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474][6]], allocation id [Lf_i5TiBSnWNmMWniTMNMw], primary term [4], message [mark copy as stale] | |
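A "mark copy as stale" warning like the one above means a primary has removed a replica copy from the in-sync set after that copy failed to acknowledge a write; the stale copy must be rebuilt by peer recovery before it can serve again. A minimal sketch for asking the master why a given copy is unassigned, under the same local-master assumption; the index name and shard number are copied from the warning above, and _cluster/allocation/explain is a standard Elasticsearch 6.x API:

    import requests

    ES = "http://localhost:9200"  # assumed master address

    resp = requests.post(
        f"{ES}/_cluster/allocation/explain",
        json={
            "index": "workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474",
            "shard": 6,
            "primary": False,  # the stale copy is a replica
        },
    )
    print(resp.json())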
[2018-12-26T16:50:38,797][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:51:11,780][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:51:43,645][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:52:14,510][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:52:32,931][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_8_4][9] received shard failed for shard id [[workitemsearchshared_8_4][9]], allocation id [qVeVGX5CS9eNUkVqp1g-qw], primary term [10], message [mark copy as stale] | |
[2018-12-26T16:52:46,018][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:53:16,532][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:53:47,879][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:54:18,698][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:54:50,546][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:55:21,783][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:55:53,765][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.8gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:56:25,159][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:56:56,423][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:57:28,408][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:58:00,847][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:58:32,927][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:59:03,115][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T16:59:21,304][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474][9] received shard failed for shard id [[workitem_workitemcontract_1421_shared_ba7636b3-986d-4fe1-a282-faa03b3ed474][9]], allocation id [mEslnTBWTnOvHZdL7y9Efg], primary term [5], message [mark copy as stale] | |
[2018-12-26T16:59:36,670][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:00:07,753][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:00:12,269][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_5_1311][1] received shard failed for shard id [[wikisearchshared_5_1311][1]], allocation id [Juhnz5y4Qpe9XLjpkHvvnQ], primary term [31], message [mark copy as stale] | |
[2018-12-26T17:00:38,641][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:01:11,247][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:01:43,156][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.7gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:01:43,766][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wiki_wikicontract_1421_shared_3d043d24-dc38-461a-99d5-b4f736de3cda][8] received shard failed for shard id [[wiki_wikicontract_1421_shared_3d043d24-dc38-461a-99d5-b4f736de3cda][8]], allocation id [VRd4e19XQwKuJ6dSeHhqvA], primary term [4], message [mark copy as stale] | |
[2018-12-26T17:02:14,945][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.6gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:02:40,804][INFO ][o.e.c.m.MetaDataMappingService] [es-m01-rm] [workitem_workitemcontract_1421_shared_a1cfe652-beca-4e68-86e4-25cfe0bf8ffa/GCrcOxanQWe516opQYmQrA] update_mapping [workItemContract] | |
[2018-12-26T17:02:46,337][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:03:17,204][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:03:43,794][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_5_4][5] received shard failed for shard id [[workitemsearchshared_5_4][5]], allocation id [MP59BBAMT_SKqwyTDag0sA], primary term [8], message [mark copy as stale] | |
[2018-12-26T17:03:48,429][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:04:19,058][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:04:50,894][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:05:20,805][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_1_1311][9] received shard failed for shard id [[wikisearchshared_1_1311][9]], allocation id [1TaJ-MgISYmHOWjrxiaj2Q], primary term [27], message [mark copy as stale] | |
[2018-12-26T17:05:22,035][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:05:53,845][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:06:25,431][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:06:56,639][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:07:27,379][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:07:54,378][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_3_0][22] received shard failed for shard id [[codesearchshared_3_0][22]], allocation id [iHCU_krrQWSfAtGZ1mznpw], primary term [33], message [mark copy as stale] | |
[2018-12-26T17:07:59,418][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:08:31,461][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:09:02,347][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:09:35,494][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:10:07,921][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:10:38,619][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:11:10,857][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:11:20,214][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [codesearchshared_11_0][41] received shard failed for shard id [[codesearchshared_11_0][41]], allocation id [rd4FQB5CR4Sqpt6EC31ADg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; ] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [codesearchshared_11_0][41]: Recovery failed from {es-d31-rm}{HikLtl9eRvW1AoJRSR_lPw}{MX4LeLzjRKyc9GGTWC0dhQ}{192.168.0.181}{192.168.0.181:9300}{faultDomain=1, updateDomain=10} into {es-d38-rm}{pqToiNuwSwadtWyiTvtYbg}{BY404Vu7QB26pCw7YCcpsQ}{192.168.0.188}{192.168.0.188:9300}{faultDomain=2, updateDomain=17} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
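This stack trace is the failure mode behind the stuck-initializing shards: the recovery monitor cancels a peer recovery after 30 minutes without progress, which matches the default indices.recovery.recovery_activity_timeout in 6.2.4. A minimal sketch, under the same host assumption, for watching in-flight recoveries, giving slow copies more headroom (the setting is dynamic), and retrying allocations that have exhausted their automatic retries:

    import requests

    ES = "http://localhost:9200"  # assumed master address

    # Show in-flight recoveries with their stage and byte/translog progress.
    print(requests.get(f"{ES}/_cat/recovery?active_only=true&v").text)

    # Give slow recoveries more headroom before the monitor cancels them.
    requests.put(
        f"{ES}/_cluster/settings",
        json={"transient": {"indices.recovery.recovery_activity_timeout": "60m"}},
    ).raise_for_status()

    # Ask the master to retry shard allocations that already failed repeatedly.
    requests.post(f"{ES}/_cluster/reroute?retry_failed=true").raise_for_status()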
[2018-12-26T17:11:41,505][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:12:14,052][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:12:45,150][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:13:15,961][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:13:47,500][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:14:11,769][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10] received shard failed for shard id [[code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10]], allocation id [kZqrlEqSTx-C88L2coDUzw], primary term [0], message [failed recovery], failure [RecoveryFailedException[[code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10]: Recovery failed from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} into {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]]; ] | |
org.elasticsearch.indices.recovery.RecoveryFailedException: [code_sourcenodedupefilecontractv3_1421_shared_e5d976fc-8938-4a65-822e-737ec7a91a74][10]: Recovery failed from {es-d25-rm}{eYchayAPRKuaLmKIAU9vtg}{BLH9iWfoR0mwzqIkxLst3A}{192.168.0.175}{192.168.0.175:9300}{faultDomain=1, updateDomain=4} into {es-d35-rm}{IU6OaIyRRPuRN6wxtV7t3Q}{oII66QMKSUKduxumwrVBwA}{192.168.0.185}{192.168.0.185:9300}{faultDomain=2, updateDomain=14} (no activity after [30m]) | |
at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:286) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72] | |
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72] | |
Caused by: org.elasticsearch.ElasticsearchTimeoutException: no activity after [30m] | |
... 6 more | |
[2018-12-26T17:14:18,504][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:14:49,355][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:15:19,614][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:15:51,278][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:16:22,522][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:16:54,563][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:17:26,412][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:17:51,302][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_2_1311][5] received shard failed for shard id [[wikisearchshared_2_1311][5]], allocation id [_3J19V0fRye18YQk536ilQ], primary term [29], message [mark copy as stale] | |
[2018-12-26T17:17:51,302][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_2_1311][2] received shard failed for shard id [[wikisearchshared_2_1311][2]], allocation id [8XCdjc23RrSYLhtMUPHNWw], primary term [26], message [mark copy as stale] | |
[2018-12-26T17:17:51,302][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [wikisearchshared_2_1311][7] received shard failed for shard id [[wikisearchshared_2_1311][7]], allocation id [S1mP0W4FRf-MfRHA6f5YtA], primary term [26], message [mark copy as stale] | |
[2018-12-26T17:17:57,334][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:18:28,896][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:18:39,037][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_2_4][0] received shard failed for shard id [[workitemsearchshared_2_4][0]], allocation id [SWwYjQseQdKclJh1By5sKQ], primary term [11], message [mark copy as stale] | |
[2018-12-26T17:19:01,245][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:19:33,761][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:20:03,908][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:20:37,267][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:20:49,694][INFO ][o.e.c.m.MetaDataMappingService] [es-m01-rm] [workitemsearchshared_6_3/LwqATEVxTv-vahGQjuYtXg] update_mapping [workItemContract] | |
[2018-12-26T17:21:08,395][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:21:38,916][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:22:11,657][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:22:43,563][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:23:15,213][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:23:46,788][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:24:17,839][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:24:49,193][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:25:19,608][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:25:50,763][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:26:21,961][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:26:53,447][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:27:24,247][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:27:55,237][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:28:26,848][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:28:58,411][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:29:30,991][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:30:02,164][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:30:14,950][WARN ][o.e.c.a.s.ShardStateAction] [es-m01-rm] [workitemsearchshared_3_4][9] received shard failed for shard id [[workitemsearchshared_3_4][9]], allocation id [kopO7d4aTkmBbahsvGx-zw], primary term [7], message [mark copy as stale] | |
[2018-12-26T17:30:35,512][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node | |
[2018-12-26T17:31:07,911][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-m01-rm] low disk watermark [85%] exceeded on [1jqD3BqWSoykJ59u85pm-g][es-d32-rm][F:\data\nodes\0] free: 525.5gb[12.8%], replicas will not be assigned to this node |