@bunste
Created October 14, 2021 11:17
Further debugging info for Elasticsearch
[2021-10-13T14:15:08,671][TRACE][o.e.c.s.ClusterApplierService] [es-node05-a] connecting to nodes of cluster state with version 5429193
[2021-10-13T14:15:08,671][DEBUG][o.e.c.s.ClusterApplierService] [es-node05-a] applying settings from cluster state with version 5429193
[2021-10-13T14:15:08,671][DEBUG][o.e.c.s.ClusterApplierService] [es-node05-a] apply cluster state with version 5429193
[2021-10-13T14:15:08,671][TRACE][o.e.c.s.ClusterApplierService] [es-node05-a] calling [org.elasticsearch.repositories.RepositoriesService@4a0995e7] with change to version [5429193]
[2021-10-13T14:15:08,671][TRACE][o.e.c.s.ClusterApplierService] [es-node05-a] calling [org.elasticsearch.indices.cluster.IndicesClusterStateService@c418f17] with change to version [5429193]
[2021-10-13T14:17:09,136][INFO ][o.e.i.r.PeerRecoveryTargetService] [es-node05-a] recovery of [events-2021.10.05][0] from [{es-node06-a}{R00-RxIuQGud85biVet1XA}{rPoopo8EQQCMkkaXnIu0Xg}{192.168.200.185}{192.168.200.185:19301}{cdhilrstw}{ml.machine_memory=67085619200, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.185, transform.node=true}] interrupted by network disconnect, will retry in [5s]; cause: [[es-node05-a][192.168.200.184:19301][internal:index/shard/recovery/file_chunk] disconnected]
[2021-10-13T14:17:09,185][INFO ][o.e.i.r.PeerRecoveryTargetService] [es-node05-a] recovery of [groot_news_bucket_23_v3][0] from [{es-node03-a}{O_DOHlu7QqChJNdkgZHtbQ}{6F2-uSzeSka08lRdaE2VIw}{192.168.200.182}{192.168.200.182:19301}{cdhilrstw}{ml.machine_memory=135073177600, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.182, transform.node=true}] interrupted by network disconnect, will retry in [5s]; cause: [[es-node05-a][192.168.200.184:19301][internal:index/shard/recovery/file_chunk] disconnected]
[2021-10-13T14:17:11,607][INFO ][o.e.c.c.Coordinator ] [es-node05-a] master node [{es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] failed, restarting discovery
org.elasticsearch.ElasticsearchException: node [{es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] failed [3] consecutive checks
at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:293) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1181) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1181) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.lambda$handleException$3(InboundHandler.java:277) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:224) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.handleException(InboundHandler.java:275) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.handlerResponseError(InboundHandler.java:267) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:131) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:89) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:700) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-master02][192.168.200.52:9300][internal:coordination/fault_detection/leader_check]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: rejecting leader check since [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true}] has been removed from the cluster
at org.elasticsearch.cluster.coordination.LeaderChecker.handleLeaderCheck(LeaderChecker.java:192) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.cluster.coordination.LeaderChecker.lambda$new$0(LeaderChecker.java:113) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) ~[?:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) ~[?:?]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:207) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:107) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:89) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:700) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) ~[elasticsearch-7.10.1.jar:7.10.1]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?]
[2021-10-13T14:17:27,608][INFO ][o.e.n.Node ] [es-node05-a] stopping ...
[2021-10-13T14:17:27,615][INFO ][o.e.x.w.WatcherService ] [es-node05-a] stopping watch service, reason [shutdown initiated]
[2021-10-13T14:17:27,615][INFO ][o.e.x.w.WatcherLifeCycleService] [es-node05-a] watcher has stopped and shutdown
[2021-10-13T14:17:27,662][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [es-node05-a] [controller/2794] [Main.cc@154] ML controller exiting
[2021-10-13T14:17:27,668][INFO ][o.e.x.m.p.NativeController] [es-node05-a] Native controller process has stopped - no new native processes can be started
[2021-10-13T14:17:47,661][INFO ][o.e.c.c.Coordinator ] [es-node05-a] master node [{es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] failed, restarting discovery
org.elasticsearch.transport.NodeDisconnectedException: [es-master02][192.168.200.52:9300][disconnected] disconnected
[2021-10-13T14:17:57,672][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:18:07,673][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:18:17,674][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:18:27,675][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:18:37,676][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:18:47,677][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:18:57,677][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:19:07,678][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:19:17,679][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:19:27,680][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:19:37,681][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:19:47,682][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:19:57,682][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:20:07,683][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:20:17,684][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:20:27,685][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:20:37,686][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:20:47,687][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:20:57,688][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:21:07,689][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:21:17,689][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:21:27,690][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:21:37,691][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:21:47,692][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:21:57,693][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:22:07,694][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:22:17,695][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:22:27,696][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:22:37,696][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:22:47,697][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:22:57,698][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:23:07,699][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:23:17,700][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:23:27,701][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:23:37,702][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:23:47,703][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:23:57,704][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:24:07,705][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:24:17,705][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:24:27,706][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:24:37,707][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:24:47,707][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:24:57,708][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:25:07,709][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:25:17,710][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:25:27,710][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:25:37,711][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:25:47,712][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:25:57,713][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:26:07,714][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:26:17,715][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:26:27,716][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:26:37,716][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:26:47,717][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:26:57,718][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:27:07,719][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:27:17,719][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:27:27,720][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:27:37,721][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:27:47,722][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:27:57,722][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:28:07,723][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:28:17,724][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:28:27,725][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:28:37,725][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:28:47,726][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:28:57,727][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:29:07,727][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:29:17,728][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:29:27,729][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:29:37,730][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:29:47,730][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:29:57,731][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:30:07,732][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:30:17,732][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:30:27,733][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:30:37,734][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:30:47,735][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:30:57,735][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:31:07,736][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:31:17,737][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:31:27,738][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:31:37,738][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:31:47,739][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:31:57,740][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:32:07,740][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:32:17,741][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:32:27,742][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:32:37,743][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:32:47,743][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:32:57,744][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:33:07,744][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:33:17,745][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:33:27,746][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:33:37,746][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:33:47,747][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:33:57,748][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:34:07,748][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:34:17,749][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:34:27,750][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:34:37,750][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:34:47,751][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:34:57,752][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:35:07,752][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:35:17,753][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:35:27,754][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:35:37,754][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:35:47,755][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:35:57,756][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:36:07,756][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:36:17,757][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:36:27,758][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:36:37,758][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:36:47,759][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:36:57,760][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:37:07,760][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:37:17,761][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:37:27,762][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:37:37,762][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:37:47,763][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:37:57,764][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:38:07,764][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:38:17,765][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:38:27,766][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:38:37,766][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:38:47,767][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:38:57,768][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:39:07,768][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[... previous WARN message repeated verbatim every 10 seconds from 2021-10-13T14:39:17 through 2021-10-13T14:44:07 (30 occurrences elided; only the timestamps differ) ...]
[2021-10-13T14:44:17,789][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:44:27,790][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:44:37,790][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:44:47,791][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:44:57,791][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:45:07,792][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:45:17,793][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:45:27,793][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:45:37,794][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:45:47,795][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:45:57,795][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:46:07,796][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:46:17,796][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:46:27,797][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:46:37,798][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:46:47,798][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:46:57,799][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:47:07,799][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:47:17,800][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:47:27,801][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:47:37,801][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:47:47,802][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:47:57,803][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:48:07,803][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:48:17,804][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:48:27,804][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[... same WARN message from o.e.c.c.ClusterFormationFailureHelper repeated every ~10s: 30 further identical entries from 2021-10-13T14:48:37,805 through 2021-10-13T14:53:27,823, last one shown below ...]
[2021-10-13T14:53:37,824][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:53:47,824][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:53:57,825][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:54:07,826][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:54:17,826][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:54:27,827][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:54:37,827][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:54:47,828][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:54:57,829][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:55:07,829][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:55:17,830][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:55:27,830][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:55:37,831][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:55:47,832][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:55:57,832][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:56:07,833][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T14:56:17,834][WARN ][o.e.c.c.ClusterFormationFailureHelper] [es-node05-a] master not discovered yet: have discovered [{es-node05-a}{KEWUUnwASICujt430nUhOA}{0nN2CmrJT5Wsape0a_OxgA}{192.168.200.184}{192.168.200.184:19301}{cdhilrstw}{ml.machine_memory=67185217536, xpack.installed=true, disks=ssd, machine=192.168.6.184, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [192.168.200.51:9300, 192.168.200.52:9300, 192.168.200.53:9300] from hosts providers and [{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}, {es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}] from last-known cluster state; node term 9, last-accepted version 5429203 in term 9
[2021-10-13T15:14:56,811][INFO ][o.e.n.Node ] [es-node05-a] version[7.10.1], pid[1806], build[default/tar/1c34507e66d7db1211f66f3513706fdf548736aa/2020-12-05T01:00:33.671820Z], OS[Linux/5.10.0-9-amd64/amd64], JVM[Debian/OpenJDK 64-Bit Server VM/11.0.12/11.0.12+7-post-Debian-2]
[2021-10-13T15:14:56,813][INFO ][o.e.n.Node ] [es-node05-a] JVM home [/usr/lib/jvm/java-11-openjdk-amd64], using bundled JDK [false]
[2021-10-13T15:14:56,813][INFO ][o.e.n.Node ] [es-node05-a] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms25g, -Xmx25g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-17545342988263904608, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch/es-node05-a/1, -XX:ErrorFile=/var/log/elasticsearch/es-node05-a/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/es-node05-a/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -XX:MaxDirectMemorySize=13421772800, -Des.path.home=/opt/elasticsearch/es-node05-a/current, -Des.path.conf=/etc/elasticsearch/es-node05-a, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2021-10-13T15:15:01,038][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [aggs-matrix-stats]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [analysis-common]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [constant-keyword]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [flattened]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [frozen-indices]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [ingest-common]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [ingest-geoip]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [ingest-user-agent]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [kibana]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [lang-expression]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [lang-mustache]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [lang-painless]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [mapper-extras]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [mapper-version]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [parent-join]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [percolator]
[2021-10-13T15:15:01,039][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [rank-eval]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [reindex]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [repositories-metering-api]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [repository-url]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [search-business-rules]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [searchable-snapshots]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [spatial]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [transform]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [transport-netty4]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [unsigned-long]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [vectors]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [wildcard]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-analytics]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-async]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-async-search]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-autoscaling]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-ccr]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-core]
[2021-10-13T15:15:01,040][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-data-streams]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-deprecation]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-enrich]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-eql]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-graph]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-identity-provider]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-ilm]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-logstash]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-ml]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-monitoring]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-ql]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-rollup]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-security]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-sql]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-stack]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-voting-only-node]
[2021-10-13T15:15:01,041][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded module [x-pack-watcher]
[2021-10-13T15:15:01,042][INFO ][o.e.p.PluginsService ] [es-node05-a] loaded plugin [repository-s3]
[2021-10-13T15:15:01,118][INFO ][o.e.e.NodeEnvironment ] [es-node05-a] using [6] data paths, mounts [[/var/lib/elasticsearch/es-node05-a/3 (/dev/sdc1), /var/lib/elasticsearch/es-node05-a/4 (/dev/sdd1), /var/lib/elasticsearch/es-node05-a/5 (/dev/sdf1), /var/lib/elasticsearch/es-node05-a/6 (/dev/sde1), /var/lib/elasticsearch/es-node05-a/2 (/dev/sdg1), /var/lib/elasticsearch/es-node05-a/1 (/dev/sdb1)]], net usable_space [4.6tb], net total_space [5.1tb], types [ext4]
[2021-10-13T15:15:01,118][INFO ][o.e.e.NodeEnvironment ] [es-node05-a] heap size [25gb], compressed ordinary object pointers [true]
[2021-10-13T15:15:01,324][INFO ][o.e.n.Node ] [es-node05-a] node name [es-node05-a], node ID [KEWUUnwASICujt430nUhOA], cluster name [wilma_van_der_heel], roles [transform, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]
[2021-10-13T15:15:04,520][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [es-node05-a] [controller/2014] [Main.cc@114] controller (64 bit): Version 7.10.1 (Build 11e1ac84105757) Copyright (c) 2020 Elasticsearch BV
[2021-10-13T15:15:05,947][DEBUG][o.e.a.ActionModule ] [es-node05-a] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2021-10-13T15:15:06,023][INFO ][o.e.t.NettyAllocator ] [es-node05-a] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=8mb}]
[2021-10-13T15:15:06,085][INFO ][o.e.d.DiscoveryModule ] [es-node05-a] using discovery type [zen] and seed hosts providers [settings]
[2021-10-13T15:15:06,484][WARN ][o.e.g.DanglingIndicesState] [es-node05-a] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-10-13T15:15:06,830][INFO ][o.e.n.Node ] [es-node05-a] initialized
[2021-10-13T15:15:06,831][INFO ][o.e.n.Node ] [es-node05-a] starting ...
[2021-10-13T15:15:07,000][INFO ][o.e.t.TransportService ] [es-node05-a] publish_address {192.168.200.184:19301}, bound_addresses {192.168.200.184:19301}, {127.0.0.1:19301}
[2021-10-13T15:15:12,153][INFO ][o.e.b.BootstrapChecks ] [es-node05-a] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-10-13T15:15:12,171][INFO ][o.e.c.c.Coordinator ] [es-node05-a] cluster UUID [kgz2xT5FQPG3cNnRJqImYQ]
[2021-10-13T15:15:12,849][INFO ][o.e.c.s.ClusterApplierService] [es-node05-a] master node changed {previous [], current [{es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}]}, added {{es-client01}{wLMLml-9S8GBxva93lJ_6A}{BdCfgJCISYCiqkT62S6wOQ}{192.168.200.56}{192.168.200.56:9300}{ilr}{ml.machine_memory=8375873536, ml.max_open_jobs=20, xpack.installed=true, transform.node=false},{es-node04-a}{tzXs87AORfmHIuI1LOBudA}{0z-xPWcjT3eoXhtbA7j8Zg}{192.168.200.183}{192.168.200.183:19301}{cdhilrstw}{ml.machine_memory=67207499776, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.183, transform.node=true},{es-master01}{iH6fK6e2SRKdxkWtLySJSw}{ZFGHfelHR-2Dewd94OHl3A}{192.168.200.51}{192.168.200.51:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false},{es-node01-a}{4zwUdqc5Qsm5DsozHMl4vA}{q1CNC4A5Rz2FhPtS5nnBiA}{192.168.200.180}{192.168.200.180:19301}{cdhilrstw}{ml.machine_memory=135076884480, ml.max_open_jobs=20, xpack.installed=true, disks=hdd, machine=192.168.6.180, transform.node=true},{es-node02-b}{1OUXY72JRDe1QiX046IGFQ}{teroAX1yTju98Wd9r4tetQ}{192.168.200.181}{192.168.200.181:19302}{cdhilrstw}{ml.machine_memory=135076884480, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.181, transform.node=true},{es-master03}{YmgnEi44TsujRVxeUNhCzg}{C7OHlcRcTMyRMs34sqBlYA}{192.168.200.53}{192.168.200.53:9300}{ilmr}{ml.machine_memory=8375996416, ml.max_open_jobs=20, xpack.installed=true, transform.node=false},{es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false},{es-node02-a}{Lnk6dcA9THefzLfpiNqUOg}{UOrar75yTpiuH-_J38sx2w}{192.168.200.181}{192.168.200.181:19301}{cdhilrstw}{ml.machine_memory=135076884480, ml.max_open_jobs=20, xpack.installed=true, disks=hdd, machine=192.168.6.181, transform.node=true},{es-client02}{eCelB4tYRXC7KEECCBntUw}{_PKCBjWsTNWkFRRouSZQmA}{192.168.200.57}{192.168.200.57:9300}{ilr}{ml.machine_memory=8375857152, ml.max_open_jobs=20, xpack.installed=true, transform.node=false},{es-node03-a}{O_DOHlu7QqChJNdkgZHtbQ}{6F2-uSzeSka08lRdaE2VIw}{192.168.200.182}{192.168.200.182:19301}{cdhilrstw}{ml.machine_memory=135073177600, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.182, transform.node=true},{es-node01-b}{JZxy6a10QH280x_ZfP80nA}{Ag2RkQoFRcSuwKi0DjXaaA}{192.168.200.180}{192.168.200.180:19302}{cdhilrstw}{ml.machine_memory=135076884480, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.180, transform.node=true},{es-node06-a}{R00-RxIuQGud85biVet1XA}{rPoopo8EQQCMkkaXnIu0Xg}{192.168.200.185}{192.168.200.185:19301}{cdhilrstw}{ml.machine_memory=67085619200, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.185, transform.node=true},{es-node03-b}{n-feBJxBTu-okM6b4GMpLA}{wMrHFR71R8GFtmvZMc1EKQ}{192.168.200.182}{192.168.200.182:19302}{cdhilrstw}{ml.machine_memory=135073177600, ml.max_open_jobs=20, xpack.installed=true, disks=ssd, machine=192.168.6.182, transform.node=true}}, term: 9, version: 5429265, reason: ApplyCommitRequest{term=9, version=5429265, sourceNode={es-master02}{RLYFvvrgSCymDsgMDbx-dw}{LVDIbVeiQ4qtVEg8CaWq6w}{192.168.200.52}{192.168.200.52:9300}{ilmr}{ml.machine_memory=8376066048, ml.max_open_jobs=20, xpack.installed=true, transform.node=false}}
[2021-10-13T15:15:12,894][INFO ][o.e.c.s.ClusterSettings ] [es-node05-a] updating [xpack.monitoring.collection.enabled] from [false] to [true]
[2021-10-13T15:15:12,895][INFO ][o.e.c.s.ClusterSettings ] [es-node05-a] updating [indices.recovery.max_bytes_per_sec] from [100mb] to [300mb]
[2021-10-13T15:15:12,895][INFO ][o.e.c.s.ClusterSettings ] [es-node05-a] updating [indices.recovery.max_concurrent_file_chunks] from [2] to [5]
[2021-10-13T15:15:13,286][INFO ][o.e.x.s.a.TokenService ] [es-node05-a] refresh keys
[2021-10-13T15:15:13,436][INFO ][o.e.x.s.a.TokenService ] [es-node05-a] refreshed keys
[2021-10-13T15:15:13,939][INFO ][o.e.l.LicenseService ] [es-node05-a] license [fcca186c-a957-4b49-8803-11f5e8103616] mode [basic] - valid
[2021-10-13T15:15:13,941][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [es-node05-a] Active license is now [BASIC]; Security is disabled
[2021-10-13T15:15:13,956][INFO ][o.e.h.AbstractHttpServerTransport] [es-node05-a] publish_address {192.168.6.184:19201}, bound_addresses {192.168.6.184:19201}, {127.0.0.1:19201}
[2021-10-13T15:15:13,957][INFO ][o.e.n.Node ] [es-node05-a] started
[2021-10-13T15:15:48,581][WARN ][o.e.g.PersistedClusterStateService] [es-node05-a] writing cluster state took [28815ms] which is above the warn threshold of [10s]; wrote global metadata [false] and metadata for [0] indices and skipped [1215] unchanged indices
[2021-10-13T15:16:54,352][WARN ][o.e.g.PersistedClusterStateService] [es-node05-a] writing cluster state took [19009ms] which is above the warn threshold of [10s]; wrote global metadata [false] and metadata for [1] indices and skipped [1214] unchanged indices
[2021-10-13T15:17:23,439][INFO ][o.e.c.s.ClusterSettings ] [es-node05-a] updating [cluster.routing.allocation.enable] from [all] to [new_primaries]
[2021-10-13T15:27:52,943][WARN ][o.e.m.f.FsHealthService ] [es-node05-a] health check of [/var/lib/elasticsearch/es-node05-a/6/nodes/0] took [406043ms] which is above the warn threshold of [5s]
[2021-10-13T15:27:53,374][WARN ][o.e.g.PersistedClusterStateService] [es-node05-a] writing cluster state took [282577ms] which is above the warn threshold of [10s]; wrote global metadata [false] and metadata for [1] indices and skipped [1214] unchanged indices
[2021-10-13T15:31:06,756][INFO ][o.e.c.s.ClusterSettings ] [es-node05-a] updating [cluster.routing.allocation.enable] from [new_primaries] to [all]
[2021-10-13T15:31:23,206][INFO ][o.e.c.s.ClusterSettings ] [es-node05-a] updating [cluster.routing.allocation.enable] from [all] to [new_primaries]
[2021-10-13T15:33:52,933][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:34:12,921][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:34:32,921][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:34:52,922][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:35:12,922][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:35:32,923][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:35:52,923][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:36:12,923][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:36:32,924][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:36:52,924][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:37:12,925][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:37:32,925][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:37:52,927][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:38:12,926][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:38:32,927][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:38:52,927][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:39:12,928][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:39:32,928][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:39:52,929][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[2021-10-13T15:40:12,929][ERROR][o.e.x.m.c.n.NodeStatsCollector] [es-node05-a] collector [node_stats] timed out when collecting data
[Thu Oct 14 12:35:00 2021] INFO: task jbd2/sdf1-8:730 blocked for more than 120 seconds.
[Thu Oct 14 12:35:00 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:35:00 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:35:00 2021] task:jbd2/sdf1-8 state:D stack: 0 pid: 730 ppid: 2 flags:0x00004000
[Thu Oct 14 12:35:00 2021] Call Trace:
[Thu Oct 14 12:35:00 2021] __schedule+0x282/0x870
[Thu Oct 14 12:35:00 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:35:00 2021] schedule+0x46/0xb0
[Thu Oct 14 12:35:00 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:35:00 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:35:00 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:35:00 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:35:00 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:35:00 2021] jbd2_journal_commit_transaction+0x11a7/0x1ad0 [jbd2]
[Thu Oct 14 12:35:00 2021] kjournald2+0xab/0x270 [jbd2]
[Thu Oct 14 12:35:00 2021] ? add_wait_queue_exclusive+0x70/0x70
[Thu Oct 14 12:35:00 2021] ? load_superblock.part.0+0xb0/0xb0 [jbd2]
[Thu Oct 14 12:35:00 2021] kthread+0x11b/0x140
[Thu Oct 14 12:35:00 2021] ? __kthread_bind_mask+0x60/0x60
[Thu Oct 14 12:35:00 2021] ret_from_fork+0x1f/0x30
[Thu Oct 14 12:35:00 2021] INFO: task elasticsearch[e:5914 blocked for more than 120 seconds.
[Thu Oct 14 12:35:00 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:35:00 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:35:00 2021] task:elasticsearch[e state:D stack: 0 pid: 5914 ppid: 1 flags:0x00004320
[Thu Oct 14 12:35:00 2021] Call Trace:
[Thu Oct 14 12:35:00 2021] __schedule+0x282/0x870
[Thu Oct 14 12:35:00 2021] ? blk_mq_flush_plug_list+0x100/0x190
[Thu Oct 14 12:35:00 2021] schedule+0x46/0xb0
[Thu Oct 14 12:35:00 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:35:00 2021] wait_on_page_bit_common+0x116/0x3b0
[Thu Oct 14 12:35:00 2021] ? trace_event_raw_event_file_check_and_advance_wb_err+0xf0/0xf0
[Thu Oct 14 12:35:00 2021] mpage_prepare_extent_to_map+0x257/0x290 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_writepages+0x325/0xfc0 [ext4]
[Thu Oct 14 12:35:00 2021] do_writepages+0x34/0xc0
[Thu Oct 14 12:35:00 2021] ? handle_mm_fault+0x1490/0x1bf0
[Thu Oct 14 12:35:00 2021] __filemap_fdatawrite_range+0xc5/0x100
[Thu Oct 14 12:35:00 2021] file_write_and_wait_range+0x61/0xb0
[Thu Oct 14 12:35:00 2021] ext4_sync_file+0x73/0x350 [ext4]
[Thu Oct 14 12:35:00 2021] __x64_sys_fsync+0x34/0x60
[Thu Oct 14 12:35:00 2021] do_syscall_64+0x33/0x80
[Thu Oct 14 12:35:00 2021] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[Thu Oct 14 12:35:00 2021] RIP: 0033:0x7fcddd66aabb
[Thu Oct 14 12:35:00 2021] RSP: 002b:00007fcbc4bbf280 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[Thu Oct 14 12:35:00 2021] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fcddd66aabb
[Thu Oct 14 12:35:00 2021] RDX: 0000000000000032 RSI: 00007fcbc4bbf2c0 RDI: 0000000000000349
[Thu Oct 14 12:35:00 2021] RBP: 00007fcbc4bbf2b0 R08: 0000000000000000 R09: 000000060dfb3d30
[Thu Oct 14 12:35:00 2021] R10: 0000000000000d10 R11: 0000000000000293 R12: 00007fcc08105348
[Thu Oct 14 12:35:00 2021] R13: 000000084000efa0 R14: 00007fcbc4bbf2f0 R15: 00007fcc08105000
[Thu Oct 14 12:35:00 2021] INFO: task kworker/u49:4:5840 blocked for more than 120 seconds.
[Thu Oct 14 12:35:00 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:35:00 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:35:00 2021] task:kworker/u49:4 state:D stack: 0 pid: 5840 ppid: 2 flags:0x00004000
[Thu Oct 14 12:35:00 2021] Workqueue: writeback wb_workfn (flush-8:80)
[Thu Oct 14 12:35:00 2021] Call Trace:
[Thu Oct 14 12:35:00 2021] __schedule+0x282/0x870
[Thu Oct 14 12:35:00 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:35:00 2021] schedule+0x46/0xb0
[Thu Oct 14 12:35:00 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:35:00 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:35:00 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:35:00 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:35:00 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:35:00 2021] do_get_write_access+0x276/0x3d0 [jbd2]
[Thu Oct 14 12:35:00 2021] jbd2_journal_get_write_access+0x63/0x80 [jbd2]
[Thu Oct 14 12:35:00 2021] __ext4_journal_get_write_access+0x77/0x120 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_mb_mark_diskspace_used+0x7a/0x420 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_mb_new_blocks+0x473/0xea0 [ext4]
[Thu Oct 14 12:35:00 2021] ? __read_extent_tree_block+0x6a/0x140 [ext4]
[Thu Oct 14 12:35:00 2021] ? ext4_find_extent+0x1af/0x450 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_ext_map_blocks+0x85d/0x1890 [ext4]
[Thu Oct 14 12:35:00 2021] ? release_pages+0x3d8/0x450
[Thu Oct 14 12:35:00 2021] ? __pagevec_release+0x1c/0x50
[Thu Oct 14 12:35:00 2021] ext4_map_blocks+0x18e/0x590 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_writepages+0x72e/0xfc0 [ext4]
[Thu Oct 14 12:35:00 2021] ? blk_mq_dispatch_rq_list+0x119/0x7c0
[Thu Oct 14 12:35:00 2021] do_writepages+0x34/0xc0
[Thu Oct 14 12:35:00 2021] ? fprop_reflect_period_percpu.isra.0+0x7b/0xc0
[Thu Oct 14 12:35:00 2021] __writeback_single_inode+0x39/0x2a0
[Thu Oct 14 12:35:00 2021] writeback_sb_inodes+0x200/0x470
[Thu Oct 14 12:35:00 2021] __writeback_inodes_wb+0x4c/0xe0
[Thu Oct 14 12:35:00 2021] wb_writeback+0x1d8/0x290
[Thu Oct 14 12:35:00 2021] wb_workfn+0x292/0x4d0
[Thu Oct 14 12:35:00 2021] ? __switch_to_asm+0x42/0x70
[Thu Oct 14 12:35:00 2021] process_one_work+0x1b6/0x350
[Thu Oct 14 12:35:00 2021] worker_thread+0x53/0x3e0
[Thu Oct 14 12:35:00 2021] ? process_one_work+0x350/0x350
[Thu Oct 14 12:35:00 2021] kthread+0x11b/0x140
[Thu Oct 14 12:35:00 2021] ? __kthread_bind_mask+0x60/0x60
[Thu Oct 14 12:35:00 2021] ret_from_fork+0x1f/0x30
[Thu Oct 14 12:35:00 2021] INFO: task kworker/u48:2:5884 blocked for more than 120 seconds.
[Thu Oct 14 12:35:00 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:35:00 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:35:00 2021] task:kworker/u48:2 state:D stack: 0 pid: 5884 ppid: 2 flags:0x00004000
[Thu Oct 14 12:35:00 2021] Workqueue: ext4-rsv-conversion ext4_end_io_rsv_work [ext4]
[Thu Oct 14 12:35:00 2021] Call Trace:
[Thu Oct 14 12:35:00 2021] __schedule+0x282/0x870
[Thu Oct 14 12:35:00 2021] schedule+0x46/0xb0
[Thu Oct 14 12:35:00 2021] rwsem_down_write_slowpath+0x242/0x4c0
[Thu Oct 14 12:35:00 2021] ext4_map_blocks+0x16c/0x590 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_convert_unwritten_extents+0x15c/0x220 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_convert_unwritten_io_end_vec+0x60/0xe0 [ext4]
[Thu Oct 14 12:35:00 2021] ext4_end_io_rsv_work+0xf6/0x190 [ext4]
[Thu Oct 14 12:35:00 2021] process_one_work+0x1b6/0x350
[Thu Oct 14 12:35:00 2021] worker_thread+0x53/0x3e0
[Thu Oct 14 12:35:00 2021] ? process_one_work+0x350/0x350
[Thu Oct 14 12:35:00 2021] kthread+0x11b/0x140
[Thu Oct 14 12:35:00 2021] ? __kthread_bind_mask+0x60/0x60
[Thu Oct 14 12:35:00 2021] ret_from_fork+0x1f/0x30
[Thu Oct 14 12:37:01 2021] INFO: task jbd2/sdf1-8:730 blocked for more than 241 seconds.
[Thu Oct 14 12:37:01 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:37:01 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:37:01 2021] task:jbd2/sdf1-8 state:D stack: 0 pid: 730 ppid: 2 flags:0x00004000
[Thu Oct 14 12:37:01 2021] Call Trace:
[Thu Oct 14 12:37:01 2021] __schedule+0x282/0x870
[Thu Oct 14 12:37:01 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:37:01 2021] schedule+0x46/0xb0
[Thu Oct 14 12:37:01 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:37:01 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:37:01 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:37:01 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:37:01 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:37:01 2021] jbd2_journal_commit_transaction+0x11a7/0x1ad0 [jbd2]
[Thu Oct 14 12:37:01 2021] kjournald2+0xab/0x270 [jbd2]
[Thu Oct 14 12:37:01 2021] ? add_wait_queue_exclusive+0x70/0x70
[Thu Oct 14 12:37:01 2021] ? load_superblock.part.0+0xb0/0xb0 [jbd2]
[Thu Oct 14 12:37:01 2021] kthread+0x11b/0x140
[Thu Oct 14 12:37:01 2021] ? __kthread_bind_mask+0x60/0x60
[Thu Oct 14 12:37:01 2021] ret_from_fork+0x1f/0x30
[Thu Oct 14 12:37:01 2021] INFO: task elasticsearch[e:5874 blocked for more than 120 seconds.
[Thu Oct 14 12:37:01 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:37:01 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:37:01 2021] task:elasticsearch[e state:D stack: 0 pid: 5874 ppid: 1 flags:0x00000320
[Thu Oct 14 12:37:01 2021] Call Trace:
[Thu Oct 14 12:37:01 2021] __schedule+0x282/0x870
[Thu Oct 14 12:37:01 2021] ? __getblk_gfp+0x27/0x60
[Thu Oct 14 12:37:01 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:37:01 2021] schedule+0x46/0xb0
[Thu Oct 14 12:37:01 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:37:01 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:37:01 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:37:01 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:37:01 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:37:01 2021] do_get_write_access+0x276/0x3d0 [jbd2]
[Thu Oct 14 12:37:01 2021] jbd2_journal_get_write_access+0x63/0x80 [jbd2]
[Thu Oct 14 12:37:01 2021] __ext4_journal_get_write_access+0x77/0x120 [ext4]
[Thu Oct 14 12:37:01 2021] __ext4_new_inode+0x49a/0x1690 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_create+0x106/0x1b0 [ext4]
[Thu Oct 14 12:37:01 2021] path_openat+0xde1/0x1080
[Thu Oct 14 12:37:01 2021] do_filp_open+0x88/0x130
[Thu Oct 14 12:37:01 2021] ? getname_flags.part.0+0x29/0x1a0
[Thu Oct 14 12:37:01 2021] ? __check_object_size+0x136/0x150
[Thu Oct 14 12:37:01 2021] do_sys_openat2+0x97/0x150
[Thu Oct 14 12:37:01 2021] __x64_sys_openat+0x54/0x90
[Thu Oct 14 12:37:01 2021] do_syscall_64+0x33/0x80
[Thu Oct 14 12:37:01 2021] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[Thu Oct 14 12:37:01 2021] RIP: 0033:0x7fcddd663c64
[Thu Oct 14 12:37:01 2021] RSP: 002b:00007fcc9fcface0 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[Thu Oct 14 12:37:01 2021] RAX: ffffffffffffffda RBX: 00000000000001b6 RCX: 00007fcddd663c64
[Thu Oct 14 12:37:01 2021] RDX: 00000000000000c1 RSI: 00007fcc88214af0 RDI: 00000000ffffff9c
[Thu Oct 14 12:37:01 2021] RBP: 00007fcc88214af0 R08: 0000000000000000 R09: 000000000000003b
[Thu Oct 14 12:37:01 2021] R10: 00000000000001b6 R11: 0000000000000293 R12: 00000000000000c1
[Thu Oct 14 12:37:01 2021] R13: 00000000000000c1 R14: 00007fcc88214af0 R15: 00007fcc8c031b48
[Thu Oct 14 12:37:01 2021] INFO: task elasticsearch[e:5883 blocked for more than 120 seconds.
[Thu Oct 14 12:37:01 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:37:01 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:37:01 2021] task:elasticsearch[e state:D stack: 0 pid: 5883 ppid: 1 flags:0x00000320
[Thu Oct 14 12:37:01 2021] Call Trace:
[Thu Oct 14 12:37:01 2021] __schedule+0x282/0x870
[Thu Oct 14 12:37:01 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:37:01 2021] schedule+0x46/0xb0
[Thu Oct 14 12:37:01 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:37:01 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:37:01 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:37:01 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:37:01 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:37:01 2021] do_get_write_access+0x276/0x3d0 [jbd2]
[Thu Oct 14 12:37:01 2021] jbd2_journal_get_write_access+0x63/0x80 [jbd2]
[Thu Oct 14 12:37:01 2021] __ext4_journal_get_write_access+0x77/0x120 [ext4]
[Thu Oct 14 12:37:01 2021] __ext4_new_inode+0x49a/0x1690 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_create+0x106/0x1b0 [ext4]
[Thu Oct 14 12:37:01 2021] path_openat+0xde1/0x1080
[Thu Oct 14 12:37:01 2021] do_filp_open+0x88/0x130
[Thu Oct 14 12:37:01 2021] ? getname_flags.part.0+0x29/0x1a0
[Thu Oct 14 12:37:01 2021] ? __check_object_size+0x136/0x150
[Thu Oct 14 12:37:01 2021] do_sys_openat2+0x97/0x150
[Thu Oct 14 12:37:01 2021] __x64_sys_openat+0x54/0x90
[Thu Oct 14 12:37:01 2021] do_syscall_64+0x33/0x80
[Thu Oct 14 12:37:01 2021] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[Thu Oct 14 12:37:01 2021] RIP: 0033:0x7fcddd663c64
[Thu Oct 14 12:37:01 2021] RSP: 002b:00007fcc9fffdf20 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[Thu Oct 14 12:37:01 2021] RAX: ffffffffffffffda RBX: 00000000000001b6 RCX: 00007fcddd663c64
[Thu Oct 14 12:37:01 2021] RDX: 00000000000000c1 RSI: 00007fcc98bbea60 RDI: 00000000ffffff9c
[Thu Oct 14 12:37:01 2021] RBP: 00007fcc98bbea60 R08: 0000000000000000 R09: 0000000000000061
[Thu Oct 14 12:37:01 2021] R10: 00000000000001b6 R11: 0000000000000293 R12: 00000000000000c1
[Thu Oct 14 12:37:01 2021] R13: 00000000000000c1 R14: 00007fcc98bbea60 R15: 00007fcc6811b348
[Thu Oct 14 12:37:01 2021] INFO: task elasticsearch[e:5892 blocked for more than 120 seconds.
[Thu Oct 14 12:37:01 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:37:01 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:37:01 2021] task:elasticsearch[e state:D stack: 0 pid: 5892 ppid: 1 flags:0x00000320
[Thu Oct 14 12:37:01 2021] Call Trace:
[Thu Oct 14 12:37:01 2021] __schedule+0x282/0x870
[Thu Oct 14 12:37:01 2021] ? elv_rb_del+0x1f/0x30
[Thu Oct 14 12:37:01 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:37:01 2021] schedule+0x46/0xb0
[Thu Oct 14 12:37:01 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:37:01 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:37:01 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:37:01 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:37:01 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:37:01 2021] do_get_write_access+0x276/0x3d0 [jbd2]
[Thu Oct 14 12:37:01 2021] jbd2_journal_get_write_access+0x63/0x80 [jbd2]
[Thu Oct 14 12:37:01 2021] __ext4_journal_get_write_access+0x77/0x120 [ext4]
[Thu Oct 14 12:37:01 2021] __ext4_new_inode+0x49a/0x1690 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_create+0x106/0x1b0 [ext4]
[Thu Oct 14 12:37:01 2021] path_openat+0xde1/0x1080
[Thu Oct 14 12:37:01 2021] do_filp_open+0x88/0x130
[Thu Oct 14 12:37:01 2021] ? getname_flags.part.0+0x29/0x1a0
[Thu Oct 14 12:37:01 2021] ? __check_object_size+0x136/0x150
[Thu Oct 14 12:37:01 2021] do_sys_openat2+0x97/0x150
[Thu Oct 14 12:37:01 2021] __x64_sys_openat+0x54/0x90
[Thu Oct 14 12:37:01 2021] do_syscall_64+0x33/0x80
[Thu Oct 14 12:37:01 2021] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[Thu Oct 14 12:37:01 2021] RIP: 0033:0x7fcddd663c64
[Thu Oct 14 12:37:01 2021] RSP: 002b:00007fcd6e90a280 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[Thu Oct 14 12:37:01 2021] RAX: ffffffffffffffda RBX: 00000000000001b6 RCX: 00007fcddd663c64
[Thu Oct 14 12:37:01 2021] RDX: 00000000000000c1 RSI: 00007fccd415f620 RDI: 00000000ffffff9c
[Thu Oct 14 12:37:01 2021] RBP: 00007fccd415f620 R08: 0000000000000000 R09: 000000000000003a
[Thu Oct 14 12:37:01 2021] R10: 00000000000001b6 R11: 0000000000000293 R12: 00000000000000c1
[Thu Oct 14 12:37:01 2021] R13: 00000000000000c1 R14: 00007fccd415f620 R15: 00007fcc48101b48
[Thu Oct 14 12:37:01 2021] INFO: task elasticsearch[e:5914 blocked for more than 241 seconds.
[Thu Oct 14 12:37:01 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:37:01 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:37:01 2021] task:elasticsearch[e state:D stack: 0 pid: 5914 ppid: 1 flags:0x00004320
[Thu Oct 14 12:37:01 2021] Call Trace:
[Thu Oct 14 12:37:01 2021] __schedule+0x282/0x870
[Thu Oct 14 12:37:01 2021] ? blk_mq_flush_plug_list+0x100/0x190
[Thu Oct 14 12:37:01 2021] schedule+0x46/0xb0
[Thu Oct 14 12:37:01 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:37:01 2021] wait_on_page_bit_common+0x116/0x3b0
[Thu Oct 14 12:37:01 2021] ? trace_event_raw_event_file_check_and_advance_wb_err+0xf0/0xf0
[Thu Oct 14 12:37:01 2021] mpage_prepare_extent_to_map+0x257/0x290 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_writepages+0x325/0xfc0 [ext4]
[Thu Oct 14 12:37:01 2021] do_writepages+0x34/0xc0
[Thu Oct 14 12:37:01 2021] ? handle_mm_fault+0x1490/0x1bf0
[Thu Oct 14 12:37:01 2021] __filemap_fdatawrite_range+0xc5/0x100
[Thu Oct 14 12:37:01 2021] file_write_and_wait_range+0x61/0xb0
[Thu Oct 14 12:37:01 2021] ext4_sync_file+0x73/0x350 [ext4]
[Thu Oct 14 12:37:01 2021] __x64_sys_fsync+0x34/0x60
[Thu Oct 14 12:37:01 2021] do_syscall_64+0x33/0x80
[Thu Oct 14 12:37:01 2021] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[Thu Oct 14 12:37:01 2021] RIP: 0033:0x7fcddd66aabb
[Thu Oct 14 12:37:01 2021] RSP: 002b:00007fcbc4bbf280 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[Thu Oct 14 12:37:01 2021] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fcddd66aabb
[Thu Oct 14 12:37:01 2021] RDX: 0000000000000032 RSI: 00007fcbc4bbf2c0 RDI: 0000000000000349
[Thu Oct 14 12:37:01 2021] RBP: 00007fcbc4bbf2b0 R08: 0000000000000000 R09: 000000060dfb3d30
[Thu Oct 14 12:37:01 2021] R10: 0000000000000d10 R11: 0000000000000293 R12: 00007fcc08105348
[Thu Oct 14 12:37:01 2021] R13: 000000084000efa0 R14: 00007fcbc4bbf2f0 R15: 00007fcc08105000
[Thu Oct 14 12:37:01 2021] INFO: task kworker/u49:4:5840 blocked for more than 241 seconds.
[Thu Oct 14 12:37:01 2021] Tainted: G I 5.10.0-9-amd64 #1 Debian 5.10.70-1
[Thu Oct 14 12:37:01 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Oct 14 12:37:01 2021] task:kworker/u49:4 state:D stack: 0 pid: 5840 ppid: 2 flags:0x00004000
[Thu Oct 14 12:37:01 2021] Workqueue: writeback wb_workfn (flush-8:80)
[Thu Oct 14 12:37:01 2021] Call Trace:
[Thu Oct 14 12:37:01 2021] __schedule+0x282/0x870
[Thu Oct 14 12:37:01 2021] ? out_of_line_wait_on_bit_lock+0xb0/0xb0
[Thu Oct 14 12:37:01 2021] schedule+0x46/0xb0
[Thu Oct 14 12:37:01 2021] io_schedule+0x42/0x70
[Thu Oct 14 12:37:01 2021] bit_wait_io+0xd/0x50
[Thu Oct 14 12:37:01 2021] __wait_on_bit+0x2a/0x90
[Thu Oct 14 12:37:01 2021] out_of_line_wait_on_bit+0x92/0xb0
[Thu Oct 14 12:37:01 2021] ? var_wake_function+0x20/0x20
[Thu Oct 14 12:37:01 2021] do_get_write_access+0x276/0x3d0 [jbd2]
[Thu Oct 14 12:37:01 2021] jbd2_journal_get_write_access+0x63/0x80 [jbd2]
[Thu Oct 14 12:37:01 2021] __ext4_journal_get_write_access+0x77/0x120 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_mb_mark_diskspace_used+0x7a/0x420 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_mb_new_blocks+0x473/0xea0 [ext4]
[Thu Oct 14 12:37:01 2021] ? __read_extent_tree_block+0x6a/0x140 [ext4]
[Thu Oct 14 12:37:01 2021] ? ext4_find_extent+0x1af/0x450 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_ext_map_blocks+0x85d/0x1890 [ext4]
[Thu Oct 14 12:37:01 2021] ? release_pages+0x3d8/0x450
[Thu Oct 14 12:37:01 2021] ? __pagevec_release+0x1c/0x50
[Thu Oct 14 12:37:01 2021] ext4_map_blocks+0x18e/0x590 [ext4]
[Thu Oct 14 12:37:01 2021] ext4_writepages+0x72e/0xfc0 [ext4]
[Thu Oct 14 12:37:01 2021] ? blk_mq_dispatch_rq_list+0x119/0x7c0
[Thu Oct 14 12:37:01 2021] do_writepages+0x34/0xc0
[Thu Oct 14 12:37:01 2021] ? fprop_reflect_period_percpu.isra.0+0x7b/0xc0
[Thu Oct 14 12:37:01 2021] __writeback_single_inode+0x39/0x2a0
[Thu Oct 14 12:37:01 2021] writeback_sb_inodes+0x200/0x470
[Thu Oct 14 12:37:01 2021] __writeback_inodes_wb+0x4c/0xe0
[Thu Oct 14 12:37:01 2021] wb_writeback+0x1d8/0x290
[Thu Oct 14 12:37:01 2021] wb_workfn+0x292/0x4d0
[Thu Oct 14 12:37:01 2021] ? __switch_to_asm+0x42/0x70
[Thu Oct 14 12:37:01 2021] process_one_work+0x1b6/0x350
[Thu Oct 14 12:37:01 2021] worker_thread+0x53/0x3e0
[Thu Oct 14 12:37:01 2021] ? process_one_work+0x350/0x350
[Thu Oct 14 12:37:01 2021] kthread+0x11b/0x140
[Thu Oct 14 12:37:01 2021] ? __kthread_bind_mask+0x60/0x60
[Thu Oct 14 12:37:01 2021] ret_from_fork+0x1f/0x30
2021-10-14 12:27:45
Full thread dump OpenJDK 64-Bit Server VM (11.0.12+7-post-Debian-2 mixed mode, sharing):
Threads class SMR info:
_java_thread_list=0x00007fcd54001ef0, length=71, elements={
0x00007fcdd5f22800, 0x00007fcdd5f24800, 0x00007fcdd5f2a000, 0x00007fcdd5f2c000,
0x00007fcdd5f2e000, 0x00007fcdd5f30000, 0x00007fcdd5f32000, 0x00007fcdd5f65800,
0x00007fcdd6603800, 0x00007fcdd6620800, 0x00007fcdd7613000, 0x00007fcdd761b800,
0x00007fcca8415800, 0x00007fcdd670e000, 0x00007fcdd7a88000, 0x00007fcdd7a81800,
0x00007fcdd778f000, 0x00007fcca8dd1000, 0x00007fcca8dcd800, 0x00007fcca900e000,
0x00007fcc8c003800, 0x00007fcc8c005000, 0x00007fcc8c007000, 0x00007fcc8c009800,
0x00007fcc8c00b800, 0x00007fcc8c00d800, 0x00007fcca9015800, 0x00007fcc7c004000,
0x00007fcc70006000, 0x00007fcc74002800, 0x00007fcc70012000, 0x00007fcc74013800,
0x00007fcc8c019000, 0x00007fcc74018800, 0x00007fcc70017000, 0x00007fcc8c01a800,
0x00007fcc7401a000, 0x00007fcc70018800, 0x00007fcc8c01c000, 0x00007fcc7401c000,
0x00007fcc8c01e000, 0x00007fcc7001a000, 0x00007fcc7401e000, 0x00007fcc7001c000,
0x00007fcc8c020000, 0x00007fcc74020000, 0x00007fcc7001f000, 0x00007fcc8c023000,
0x00007fcc74021800, 0x00007fcc8c024800, 0x00007fcc74023800, 0x00007fcccc002800,
0x00007fcc8c031800, 0x00007fcc60121800, 0x00007fcc6806e000, 0x00007fcca902c000,
0x00007fcdd4019800, 0x00007fcc6811b000, 0x00007fcc24101800, 0x00007fcc20106800,
0x00007fcca0001800, 0x00007fcc24103000, 0x00007fcbf80d7800, 0x00007fcbf80d9000,
0x00007fcc48101800, 0x00007fcc10101800, 0x00007fcccc004800, 0x00007fccd415d800,
0x00007fcc08105000, 0x00007fcc7c027800, 0x00007fcd54001000
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=7.13ms elapsed=38.81s tid=0x00007fcdd5f22800 nid=0x163c waiting on condition [0x00007fcd6f7fe000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@11.0.12/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@11.0.12/Reference.java:241)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@11.0.12/Reference.java:213)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=1.34ms elapsed=38.81s tid=0x00007fcdd5f24800 nid=0x163d in Object.wait() [0x00007fcd6f6fd000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.12/Finalizer.java:170)
"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.24ms elapsed=38.81s tid=0x00007fcdd5f2a000 nid=0x163e runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Service Thread" #5 daemon prio=9 os_prio=0 cpu=0.17ms elapsed=38.81s tid=0x00007fcdd5f2c000 nid=0x163f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=15665.74ms elapsed=38.81s tid=0x00007fcdd5f2e000 nid=0x1640 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"C1 CompilerThread0" #14 daemon prio=9 os_prio=0 cpu=1863.11ms elapsed=38.81s tid=0x00007fcdd5f30000 nid=0x1641 waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   No compile task

"Sweeper thread" #18 daemon prio=9 os_prio=0 cpu=85.40ms elapsed=38.81s tid=0x00007fcdd5f32000 nid=0x1642 runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Common-Cleaner" #19 daemon prio=8 os_prio=0 cpu=8.06ms elapsed=38.79s tid=0x00007fcdd5f65800 nid=0x1645 in Object.wait()  [0x00007fcd6ec0d000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait(java.base@11.0.12/Native Method)
	- waiting on <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
	- waiting to re-lock in wait() <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
	at jdk.internal.ref.CleanerImpl.run(java.base@11.0.12/CleanerImpl.java:148)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
	at jdk.internal.misc.InnocuousThread.run(java.base@11.0.12/InnocuousThread.java:134)

"process reaper" #24 daemon prio=10 os_prio=0 cpu=0.32ms elapsed=37.46s tid=0x00007fcdd6603800 nid=0x1658 runnable  [0x00007fcd6ded2000]
   java.lang.Thread.State: RUNNABLE
	at java.lang.ProcessHandleImpl.waitForProcessExit0(java.base@11.0.12/Native Method)
	at java.lang.ProcessHandleImpl$1.run(java.base@11.0.12/ProcessHandleImpl.java:138)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"process reaper" #26 daemon prio=10 os_prio=0 cpu=1.42ms elapsed=37.22s tid=0x00007fcdd6620800 nid=0x165b waiting on condition  [0x00007fcd6dc97000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000001c00039c0> (a java.util.concurrent.SynchronousQueue$TransferStack)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.base@11.0.12/SynchronousQueue.java:462)
	at java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.base@11.0.12/SynchronousQueue.java:361)
	at java.util.concurrent.SynchronousQueue.poll(java.base@11.0.12/SynchronousQueue.java:937)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][[timer]]" #28 daemon prio=5 os_prio=0 cpu=4.09ms elapsed=32.47s tid=0x00007fcdd7613000 nid=0x16ac waiting on condition  [0x00007fcd6e7d7000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
	at org.elasticsearch.threadpool.ThreadPool$CachedTimeThread.run(ThreadPool.java:595)

"elasticsearch[es-node05-a][scheduler][T#1]" #29 daemon prio=5 os_prio=0 cpu=52.41ms elapsed=32.46s tid=0x00007fcdd761b800 nid=0x16ad waiting on condition  [0x00007fcd6e6d6000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000001c00049e0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"ml-cpp-log-tail-thread" #30 daemon prio=5 os_prio=0 cpu=7.66ms elapsed=29.43s tid=0x00007fcca8415800 nid=0x16b8 runnable  [0x00007fcd6e4d4000]
   java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(java.base@11.0.12/Native Method)
	at java.io.FileInputStream.read(java.base@11.0.12/FileInputStream.java:257)
	at org.elasticsearch.xpack.ml.process.logging.CppLogMessageHandler.tailStream(CppLogMessageHandler.java:105)
	at org.elasticsearch.xpack.ml.process.NativeController.lambda$tailLogsInThread$0(NativeController.java:74)
	at org.elasticsearch.xpack.ml.process.NativeController$$Lambda$2826/0x000000084095b040.run(Unknown Source)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"Connection evictor" #31 daemon prio=5 os_prio=0 cpu=1.29ms elapsed=28.64s tid=0x00007fcdd670e000 nid=0x16be waiting on condition  [0x00007fcd6e3d3000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
	at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[scheduler][T#1]" #32 daemon prio=5 os_prio=0 cpu=5.02ms elapsed=28.53s tid=0x00007fcdd7a88000 nid=0x16bf waiting on condition  [0x00007fcce32fc000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000007a81a2650> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"ticker-schedule-trigger-engine" #33 daemon prio=5 os_prio=0 cpu=5.89ms elapsed=28.53s tid=0x00007fcdd7a81800 nid=0x16c0 waiting on condition  [0x00007fcce2cfa000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
	at org.elasticsearch.xpack.watcher.trigger.schedule.engine.TickerScheduleTriggerEngine$Ticker.run(TickerScheduleTriggerEngine.java:193)

"elasticsearch[scheduler][T#1]" #34 daemon prio=5 os_prio=0 cpu=0.99ms elapsed=28.51s tid=0x00007fcdd778f000 nid=0x16c1 waiting on condition  [0x00007fcce17f9000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000007a8139d18> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#1]" #35 daemon prio=5 os_prio=0 cpu=223.34ms elapsed=27.08s tid=0x00007fcca8dd1000 nid=0x16c2 runnable  [0x00007fcd6e5d5000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a80d1740> (a sun.nio.ch.Util$2)
	- locked <0x00000007a80d16e8> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#2]" #36 daemon prio=5 os_prio=0 cpu=306.11ms elapsed=27.06s tid=0x00007fcca8dcd800 nid=0x16c3 runnable  [0x00007fcce04f8000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8273918> (a sun.nio.ch.Util$2)
	- locked <0x00000007a82738c0> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#1]" #37 daemon prio=5 os_prio=0 cpu=1095.39ms elapsed=22.14s tid=0x00007fcca900e000 nid=0x16d1 waiting on condition  [0x00007fcc9f7fa000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][[unicast_configured_hosts_resolver]][T#1]" #38 daemon prio=5 os_prio=0 cpu=1.58ms elapsed=22.13s tid=0x00007fcc8c003800 nid=0x16d2 waiting on condition  [0x00007fcc9f6f9000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000007a8344b30> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][[unicast_configured_hosts_resolver]][T#2]" #39 daemon prio=5 os_prio=0 cpu=1.61ms elapsed=22.13s tid=0x00007fcc8c005000 nid=0x16d3 waiting on condition  [0x00007fcc9f5f8000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000007a8344b30> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][[unicast_configured_hosts_resolver]][T#3]" #40 daemon prio=5 os_prio=0 cpu=1.19ms elapsed=22.13s tid=0x00007fcc8c007000 nid=0x16d4 waiting on condition  [0x00007fcc9f4f7000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000007a8344b30> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#2]" #41 daemon prio=5 os_prio=0 cpu=1045.32ms elapsed=22.13s tid=0x00007fcc8c009800 nid=0x16d5 waiting on condition  [0x00007fcc9f3f6000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
	at java.lang.Thread.sleep(java.base@11.0.12/Thread.java:334)
	at org.apache.lucene.store.RateLimiter$SimpleRateLimiter.pause(RateLimiter.java:155)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:473)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#3]" #42 daemon prio=5 os_prio=0 cpu=1109.93ms elapsed=22.13s tid=0x00007fcc8c00b800 nid=0x16d6 waiting on condition  [0x00007fcc9f2f5000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#4]" #43 daemon prio=5 os_prio=0 cpu=1075.69ms elapsed=22.13s tid=0x00007fcc8c00d800 nid=0x16d7 waiting on condition  [0x00007fcc9f1f4000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
	at java.lang.Thread.sleep(java.base@11.0.12/Thread.java:334)
	at org.apache.lucene.store.RateLimiter$SimpleRateLimiter.pause(RateLimiter.java:155)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:473)
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
	at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][clusterApplierService#updateTask][T#1]" #44 daemon prio=5 os_prio=0 cpu=1363.34ms elapsed=22.13s tid=0x00007fcca9015800 nid=0x16d8 waiting on condition  [0x00007fcc9f0f3000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000007a8069428> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.12/AbstractQueuedSynchronizer.java:2081)
	at java.util.concurrent.PriorityBlockingQueue.take(java.base@11.0.12/PriorityBlockingQueue.java:546)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#4]" #47 daemon prio=5 os_prio=0 cpu=743.25ms elapsed=22.12s tid=0x00007fcc7c004000 nid=0x16d9 runnable  [0x00007fcc9eff2000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a80d1b50> (a sun.nio.ch.Util$2)
	- locked <0x00000007a80d1af8> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#5]" #46 daemon prio=5 os_prio=0 cpu=57.28ms elapsed=22.12s tid=0x00007fcc70006000 nid=0x16da runnable  [0x00007fcc9eef1000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a82dc270> (a sun.nio.ch.Util$2)
	- locked <0x00000007a82dc218> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#3]" #45 daemon prio=5 os_prio=0 cpu=39.87ms elapsed=22.12s tid=0x00007fcc74002800 nid=0x16db runnable  [0x00007fcc9edf0000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8415fd8> (a sun.nio.ch.Util$2)
	- locked <0x00000007a8415f80> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#5]" #48 daemon prio=5 os_prio=0 cpu=959.93ms elapsed=22.04s tid=0x00007fcc70012000 nid=0x16dc waiting on condition  [0x00007fcc9eaef000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#6]" #49 daemon prio=5 os_prio=0 cpu=1010.58ms elapsed=22.04s tid=0x00007fcc74013800 nid=0x16dd waiting on condition  [0x00007fcc9e9ee000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
	- parking to wait for  <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
	at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
	at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
	at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#7]" #50 daemon prio=5 os_prio=0 cpu=19.00ms elapsed=22.04s tid=0x00007fcc8c019000 nid=0x16de runnable  [0x00007fcc9e8ed000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8069780> (a sun.nio.ch.Util$2)
	- locked <0x00000007a8069728> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#6]" #52 daemon prio=5 os_prio=0 cpu=17.92ms elapsed=22.04s tid=0x00007fcc74018800 nid=0x16df runnable  [0x00007fcc9e7ec000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a81a2900> (a sun.nio.ch.Util$2)
	- locked <0x00000007a81a28a8> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#8]" #51 daemon prio=5 os_prio=0 cpu=810.92ms elapsed=22.04s tid=0x00007fcc70017000 nid=0x16e0 runnable  [0x00007fcc9e6eb000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a813a368> (a sun.nio.ch.Util$2)
	- locked <0x00000007a813a310> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#9]" #53 daemon prio=5 os_prio=0 cpu=30.99ms elapsed=22.04s tid=0x00007fcc8c01a800 nid=0x16e1 runnable  [0x00007fcc9e5ea000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8344fc0> (a sun.nio.ch.Util$2)
	- locked <0x00000007a8344f68> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#10]" #54 daemon prio=5 os_prio=0 cpu=745.92ms elapsed=22.03s tid=0x00007fcc7401a000 nid=0x16e2 runnable  [0x00007fcc9e4e9000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a820b390> (a sun.nio.ch.Util$2)
	- locked <0x00000007a820b338> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#11]" #55 daemon prio=5 os_prio=0 cpu=10.23ms elapsed=22.03s tid=0x00007fcc70018800 nid=0x16e3 runnable  [0x00007fcc9e3e8000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a847e8d0> (a sun.nio.ch.Util$2)
	- locked <0x00000007a847e878> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#12]" #56 daemon prio=5 os_prio=0 cpu=12.33ms elapsed=22.03s tid=0x00007fcc8c01c000 nid=0x16e4 runnable  [0x00007fcc9e2e7000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8069a18> (a sun.nio.ch.Util$2)
	- locked <0x00000007a80699c0> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#13]" #57 daemon prio=5 os_prio=0 cpu=17.13ms elapsed=22.03s tid=0x00007fcc7401c000 nid=0x16e5 runnable  [0x00007fcc9e1e6000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8273bb0> (a sun.nio.ch.Util$2)
	- locked <0x00000007a8273b58> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#14]" #58 daemon prio=5 os_prio=0 cpu=745.10ms elapsed=22.03s tid=0x00007fcc8c01e000 nid=0x16e6 runnable  [0x00007fcc9e0e5000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a843c678> (a sun.nio.ch.Util$2)
	- locked <0x00000007a841a3f0> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#15]" #59 daemon prio=5 os_prio=0 cpu=8.06ms elapsed=22.03s tid=0x00007fcc7001a000 nid=0x16e7 runnable  [0x00007fcc9dfe4000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
	at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
	- locked <0x00000007a8416270> (a sun.nio.ch.Util$2)
	- locked <0x00000007a8416218> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
	at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#16]" #60 daemon prio=5 os_prio=0 cpu=10.65ms elapsed=22.03s tid=0x00007fcc7401e000 nid=0x16e8 runnable [0x00007fcc9dee3000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a8345258> (a sun.nio.ch.Util$2)
- locked <0x00000007a8345200> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#17]" #61 daemon prio=5 os_prio=0 cpu=21.11ms elapsed=22.03s tid=0x00007fcc7001c000 nid=0x16e9 runnable [0x00007fcc9dde2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a813a600> (a sun.nio.ch.Util$2)
- locked <0x00000007a813a5a8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#18]" #62 daemon prio=5 os_prio=0 cpu=284.63ms elapsed=22.03s tid=0x00007fcc8c020000 nid=0x16ea runnable [0x00007fcc9dce1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a82dc508> (a sun.nio.ch.Util$2)
- locked <0x00000007a82dc4b0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#19]" #63 daemon prio=5 os_prio=0 cpu=15.44ms elapsed=22.03s tid=0x00007fcc74020000 nid=0x16eb runnable [0x00007fcc9dbe0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a8416508> (a sun.nio.ch.Util$2)
- locked <0x00000007a84164b0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#20]" #64 daemon prio=5 os_prio=0 cpu=14.01ms elapsed=22.03s tid=0x00007fcc7001f000 nid=0x16ec runnable [0x00007fcc9dadf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a8363c40> (a sun.nio.ch.Util$2)
- locked <0x00000007a834e010> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#21]" #66 daemon prio=5 os_prio=0 cpu=39.16ms elapsed=22.03s tid=0x00007fcc8c023000 nid=0x16ed runnable [0x00007fcc9d9de000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a820b628> (a sun.nio.ch.Util$2)
- locked <0x00000007a820b5d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#22]" #65 daemon prio=5 os_prio=0 cpu=849.58ms elapsed=22.03s tid=0x00007fcc74021800 nid=0x16ee runnable [0x00007fcc9d8dd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x000000079fe63c30> (a sun.nio.ch.Util$2)
- locked <0x00000007a806bc80> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#23]" #67 daemon prio=5 os_prio=0 cpu=13.71ms elapsed=22.02s tid=0x00007fcc8c024800 nid=0x16ef runnable [0x00007fcc9d7dc000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x000000079e858178> (a sun.nio.ch.Util$2)
- locked <0x00000007a8345498> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#24]" #68 daemon prio=5 os_prio=0 cpu=755.03ms elapsed=22.02s tid=0x00007fcc74023800 nid=0x16f0 runnable [0x00007fcc9d6db000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000007a847eb68> (a sun.nio.ch.Util$2)
- locked <0x00000007a847eb10> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][ml_utility][T#1]" #69 daemon prio=5 os_prio=0 cpu=2.42ms elapsed=21.14s tid=0x00007fcccc002800 nid=0x16f1 waiting on condition [0x00007fcc9fefd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0000e78> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][AsyncLucenePersistedState#updateTask][T#1]" #70 daemon prio=5 os_prio=0 cpu=342.68ms elapsed=21.13s tid=0x00007fcc8c031800 nid=0x16f2 waiting on condition [0x00007fcc9fcfb000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000007a82dc748> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#1]" #71 daemon prio=5 os_prio=0 cpu=195.40ms elapsed=20.74s tid=0x00007fcc60121800 nid=0x16f3 waiting on condition [0x00007fcc9fdfc000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][trigger_engine_scheduler][T#1]" #72 daemon prio=5 os_prio=0 cpu=0.25ms elapsed=19.86s tid=0x00007fcc6806e000 nid=0x16f9 waiting on condition [0x00007fcc9ce0a000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000007a820b868> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[keepAlive/7.10.1]" #21 prio=5 os_prio=0 cpu=0.32ms elapsed=19.85s tid=0x00007fcca902c000 nid=0x16fa waiting on condition [0x00007fcc9cd09000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c001c548> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11.0.12/AbstractQueuedSynchronizer.java:885)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1345)
at java.util.concurrent.CountDownLatch.await(java.base@11.0.12/CountDownLatch.java:232)
at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:89)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"DestroyJavaVM" #73 prio=5 os_prio=0 cpu=13626.15ms elapsed=19.85s tid=0x00007fcdd4019800 nid=0x1620 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][DanglingIndices#updateTask][T#1]" #74 daemon prio=5 os_prio=0 cpu=51.61ms elapsed=19.44s tid=0x00007fcc6811b000 nid=0x16fb waiting on condition [0x00007fcc9fffe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c1917d18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#7]" #75 daemon prio=5 os_prio=0 cpu=996.09ms elapsed=19.37s tid=0x00007fcc24101800 nid=0x16fd waiting on condition [0x00007fcc9cf0b000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at java.lang.Thread.sleep(java.base@11.0.12/Thread.java:334)
at org.apache.lucene.store.RateLimiter$SimpleRateLimiter.pause(RateLimiter.java:155)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:473)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#8]" #76 daemon prio=5 os_prio=0 cpu=1056.96ms elapsed=19.35s tid=0x00007fcc20106800 nid=0x16fe waiting on condition [0x00007fcc9d1d6000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at java.lang.Thread.sleep(java.base@11.0.12/Thread.java:334)
at org.apache.lucene.store.RateLimiter$SimpleRateLimiter.pause(RateLimiter.java:155)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:473)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#9]" #77 daemon prio=5 os_prio=0 cpu=1001.76ms elapsed=19.35s tid=0x00007fcca0001800 nid=0x16ff waiting on condition [0x00007fcc9d0d5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#10]" #78 daemon prio=5 os_prio=0 cpu=907.99ms elapsed=19.35s tid=0x00007fcc24103000 nid=0x1700 runnable [0x00007fcc9c93e000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.write0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.write(java.base@11.0.12/FileDispatcherImpl.java:62)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(java.base@11.0.12/IOUtil.java:113)
at sun.nio.ch.IOUtil.write(java.base@11.0.12/IOUtil.java:79)
at sun.nio.ch.FileChannelImpl.write(java.base@11.0.12/FileChannelImpl.java:280)
- locked <0x00000007ffece120> (a java.lang.Object)
at java.nio.channels.Channels.writeFullyImpl(java.base@11.0.12/Channels.java:74)
at java.nio.channels.Channels.writeFully(java.base@11.0.12/Channels.java:97)
at java.nio.channels.Channels$1.write(java.base@11.0.12/Channels.java:172)
- locked <0x00000007ffece180> (a java.nio.channels.Channels$1)
at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:416)
at java.util.zip.CheckedOutputStream.write(java.base@11.0.12/CheckedOutputStream.java:74)
at java.io.BufferedOutputStream.write(java.base@11.0.12/BufferedOutputStream.java:123)
- locked <0x00000007ffece1c0> (a java.io.BufferedOutputStream)
at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)
at org.elasticsearch.common.lucene.store.FilterIndexOutput.writeBytes(FilterIndexOutput.java:59)
at org.elasticsearch.index.store.Store$LuceneVerifyingIndexOutput.writeBytes(Store.java:1223)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:126)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#1]" #79 daemon prio=5 os_prio=0 cpu=3.43ms elapsed=19.17s tid=0x00007fcbf80d7800 nid=0x1701 waiting on condition [0x00007fcc9c63d000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#2]" #80 daemon prio=5 os_prio=0 cpu=10.73ms elapsed=19.17s tid=0x00007fcbf80d9000 nid=0x1702 waiting on condition [0x00007fcc9c53c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#11]" #81 daemon prio=5 os_prio=0 cpu=837.01ms elapsed=17.38s tid=0x00007fcc48101800 nid=0x1704 waiting on condition [0x00007fcd6e90a000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#12]" #82 daemon prio=5 os_prio=0 cpu=744.58ms elapsed=17.38s tid=0x00007fcc10101800 nid=0x1705 runnable [0x00007fcd6eb0b000]
java.lang.Thread.State: RUNNABLE
at org.elasticsearch.indices.recovery.RecoveryRequestTracker.markReceivedAndCreateListener(RecoveryRequestTracker.java:52)
- locked <0x00000007a82f7980> (a org.elasticsearch.indices.recovery.RecoveryRequestTracker)
at org.elasticsearch.indices.recovery.RecoveryTarget.markRequestReceivedAndCreateListener(RecoveryTarget.java:125)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.createOrFinishListener(PeerRecoveryTargetService.java:499)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.createOrFinishListener(PeerRecoveryTargetService.java:486)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$300(PeerRecoveryTargetService.java:84)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:457)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][refresh][T#1]" #83 daemon prio=5 os_prio=0 cpu=2.94ms elapsed=14.44s tid=0x00007fcccc004800 nid=0x170f waiting on condition [0x00007fcce01f7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#1]" #84 daemon prio=5 os_prio=0 cpu=27.89ms elapsed=10.85s tid=0x00007fccd415d800 nid=0x1714 waiting on condition [0x00007fcce2dfb000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#13]" #85 daemon prio=5 os_prio=0 cpu=341.02ms elapsed=8.17s tid=0x00007fcc08105000 nid=0x171a waiting on condition [0x00007fcbc4bbf000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at java.lang.Thread.sleep(java.base@11.0.12/Thread.java:334)
at org.apache.lucene.store.RateLimiter$SimpleRateLimiter.pause(RateLimiter.java:155)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:473)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#2]" #86 daemon prio=5 os_prio=0 cpu=1.01ms elapsed=0.87s tid=0x00007fcc7c027800 nid=0x1729 waiting on condition [0x00007fcd6ea0b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Attach Listener" #87 daemon prio=9 os_prio=0 cpu=0.38ms elapsed=0.10s tid=0x00007fcd54001000 nid=0x1743 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"VM Thread" os_prio=0 cpu=219.18ms elapsed=38.81s tid=0x00007fcdd5f1f800 nid=0x163b runnable
"GC Thread#0" os_prio=0 cpu=3290.50ms elapsed=41.96s tid=0x00007fcdd4033000 nid=0x1621 runnable
"GC Thread#1" os_prio=0 cpu=166.65ms elapsed=34.79s tid=0x00007fcd64001000 nid=0x168d runnable
"GC Thread#2" os_prio=0 cpu=190.12ms elapsed=34.79s tid=0x00007fcd64002000 nid=0x168e runnable
"GC Thread#3" os_prio=0 cpu=186.63ms elapsed=34.79s tid=0x00007fcd64003000 nid=0x168f runnable
"GC Thread#4" os_prio=0 cpu=167.11ms elapsed=34.79s tid=0x00007fcd64004000 nid=0x1690 runnable
"GC Thread#5" os_prio=0 cpu=179.08ms elapsed=34.79s tid=0x00007fcd64005000 nid=0x1691 runnable
"GC Thread#6" os_prio=0 cpu=168.47ms elapsed=34.79s tid=0x00007fcd64006800 nid=0x1692 runnable
"GC Thread#7" os_prio=0 cpu=180.26ms elapsed=34.79s tid=0x00007fcd64008000 nid=0x1693 runnable
"GC Thread#8" os_prio=0 cpu=190.21ms elapsed=34.79s tid=0x00007fcd64009800 nid=0x1694 runnable
"GC Thread#9" os_prio=0 cpu=179.40ms elapsed=34.79s tid=0x00007fcd6400b000 nid=0x1695 runnable
"GC Thread#10" os_prio=0 cpu=178.53ms elapsed=34.79s tid=0x00007fcd6400c800 nid=0x1696 runnable
"GC Thread#11" os_prio=0 cpu=163.43ms elapsed=34.79s tid=0x00007fcd6400e000 nid=0x1697 runnable
"GC Thread#12" os_prio=0 cpu=191.87ms elapsed=34.79s tid=0x00007fcd6400f800 nid=0x1698 runnable
"GC Thread#13" os_prio=0 cpu=193.08ms elapsed=34.79s tid=0x00007fcd64011000 nid=0x1699 runnable
"GC Thread#14" os_prio=0 cpu=192.66ms elapsed=34.79s tid=0x00007fcd64012800 nid=0x169a runnable
"GC Thread#15" os_prio=0 cpu=317.19ms elapsed=34.79s tid=0x00007fcd64014000 nid=0x169b runnable
"GC Thread#16" os_prio=0 cpu=161.28ms elapsed=34.79s tid=0x00007fcd64015800 nid=0x169c runnable
"GC Thread#17" os_prio=0 cpu=163.11ms elapsed=34.79s tid=0x00007fcd64017000 nid=0x169d runnable
"G1 Main Marker" os_prio=0 cpu=26.56ms elapsed=41.96s tid=0x00007fcdd4069000 nid=0x1622 runnable
"G1 Conc#0" os_prio=0 cpu=197.29ms elapsed=41.96s tid=0x00007fcdd406b000 nid=0x1623 runnable
"G1 Conc#1" os_prio=0 cpu=200.65ms elapsed=33.71s tid=0x00007fcd78001000 nid=0x169e runnable
"G1 Conc#2" os_prio=0 cpu=212.28ms elapsed=33.71s tid=0x00007fcd78002000 nid=0x169f runnable
"G1 Conc#3" os_prio=0 cpu=195.82ms elapsed=33.70s tid=0x00007fcd78003800 nid=0x16a0 runnable
"G1 Conc#4" os_prio=0 cpu=192.30ms elapsed=33.70s tid=0x00007fcd78005000 nid=0x16a1 runnable
"G1 Refine#0" os_prio=0 cpu=11.69ms elapsed=38.82s tid=0x00007fcdd5ee4800 nid=0x1639 runnable
"G1 Refine#1" os_prio=0 cpu=3.07ms elapsed=20.60s tid=0x00007fcd74001000 nid=0x16f4 runnable
"G1 Refine#2" os_prio=0 cpu=1.48ms elapsed=20.60s tid=0x00007fcc0c001000 nid=0x16f5 runnable
"G1 Young RemSet Sampling" os_prio=0 cpu=9.55ms elapsed=38.82s tid=0x00007fcdd5ee6800 nid=0x163a runnable
"VM Periodic Task Thread" os_prio=0 cpu=25.85ms elapsed=38.80s tid=0x00007fcdd5f63000 nid=0x1644 waiting on condition
JNI global refs: 42, weak refs: 45
2021-10-14 12:35:15
Full thread dump OpenJDK 64-Bit Server VM (11.0.12+7-post-Debian-2 mixed mode, sharing):
Threads class SMR info:
_java_thread_list=0x00007fc6001804c0, length=135, elements={
0x00007fcdd5f22800, 0x00007fcdd5f24800, 0x00007fcdd5f2a000, 0x00007fcdd5f2c000,
0x00007fcdd5f2e000, 0x00007fcdd5f30000, 0x00007fcdd5f32000, 0x00007fcdd5f65800,
0x00007fcdd6603800, 0x00007fcdd7613000, 0x00007fcdd761b800, 0x00007fcca8415800,
0x00007fcdd670e000, 0x00007fcdd7a88000, 0x00007fcdd7a81800, 0x00007fcdd778f000,
0x00007fcca8dd1000, 0x00007fcca8dcd800, 0x00007fcca900e000, 0x00007fcc8c009800,
0x00007fcc8c00b800, 0x00007fcc8c00d800, 0x00007fcca9015800, 0x00007fcc7c004000,
0x00007fcc70006000, 0x00007fcc74002800, 0x00007fcc70012000, 0x00007fcc74013800,
0x00007fcc8c019000, 0x00007fcc74018800, 0x00007fcc70017000, 0x00007fcc8c01a800,
0x00007fcc7401a000, 0x00007fcc70018800, 0x00007fcc8c01c000, 0x00007fcc7401c000,
0x00007fcc8c01e000, 0x00007fcc7001a000, 0x00007fcc7401e000, 0x00007fcc7001c000,
0x00007fcc8c020000, 0x00007fcc74020000, 0x00007fcc7001f000, 0x00007fcc8c023000,
0x00007fcc74021800, 0x00007fcc8c024800, 0x00007fcc74023800, 0x00007fcccc002800,
0x00007fcc8c031800, 0x00007fcc60121800, 0x00007fcc6806e000, 0x00007fcca902c000,
0x00007fcdd4019800, 0x00007fcc6811b000, 0x00007fcc24101800, 0x00007fcc20106800,
0x00007fcca0001800, 0x00007fcc24103000, 0x00007fcbf80d7800, 0x00007fcc48101800,
0x00007fcc10101800, 0x00007fcccc004800, 0x00007fccd415d800, 0x00007fcc08105000,
0x00007fcc7c027800, 0x00007fcd54001000, 0x00007fcc7403f000, 0x00007fcc94063000,
0x00007fcc7c01e800, 0x00007fcbe805a000, 0x00007fcc70072800, 0x00007fccd4022800,
0x00007fcc5c05e800, 0x00007fcc5c05f000, 0x00007fcc7c074000, 0x00007fcbe8060800,
0x00007fcc7404e000, 0x00007fcc04027000, 0x00007fcbf81f4000, 0x00007fcc94066800,
0x00007fcc70076800, 0x00007fcc8c003800, 0x00007fcbf80d4000, 0x00007fcc04028000,
0x00007fcbfc224800, 0x00007fcbfc21f800, 0x00007fcc04025000, 0x00007fcbe8062000,
0x00007fcbfc221000, 0x00007fcc7007e800, 0x00007fcccc00e800, 0x00007fcccc00f800,
0x00007fcccc010800, 0x00007fcccc012000, 0x00007fcc700ae000, 0x00007fcc70095800,
0x00007fcc700a8800, 0x00007fcc700a9000, 0x00007fcc4011d000, 0x00007fcc60144800,
0x00007fcc60146000, 0x00007fcc64123000, 0x00007fcc50101800, 0x00007fcc10105800,
0x00007fcc38106000, 0x00007fcc1c104800, 0x00007fcc4011e000, 0x00007fcc6412c000,
0x00007fcc60147800, 0x00007fcc4c101000, 0x00007fcc1010b000, 0x00007fcc50105000,
0x00007fcc4011f800, 0x00007fcc1c106000, 0x00007fcc50107000, 0x00007fcc1010d000,
0x00007fcc28105000, 0x00007fcc38108000, 0x00007fcc1010f000, 0x00007fcca0006800,
0x00007fcc44101800, 0x00007fcc641ad800, 0x00007fcc10111000, 0x00007fcc1c107800,
0x00007fcc4c103000, 0x00007fcc28107000, 0x00007fcc28108800, 0x00007fcc4c113800,
0x00007fcc1c110000, 0x00007fcc10113000, 0x00007fcca010e800, 0x00007fcc40121800,
0x00007fcca0110000, 0x00007fcc4c115800, 0x00007fcc64177800
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=8.05ms elapsed=489.15s tid=0x00007fcdd5f22800 nid=0x163c waiting on condition [0x00007fcd6f7fe000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@11.0.12/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@11.0.12/Reference.java:241)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@11.0.12/Reference.java:213)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=1.34ms elapsed=489.15s tid=0x00007fcdd5f24800 nid=0x163d in Object.wait() [0x00007fcd6f6fd000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.12/Finalizer.java:170)
"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.24ms elapsed=489.15s tid=0x00007fcdd5f2a000 nid=0x163e runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Service Thread" #5 daemon prio=9 os_prio=0 cpu=0.17ms elapsed=489.15s tid=0x00007fcdd5f2c000 nid=0x163f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=56033.07ms elapsed=489.15s tid=0x00007fcdd5f2e000 nid=0x1640 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"C1 CompilerThread0" #14 daemon prio=9 os_prio=0 cpu=4079.80ms elapsed=489.15s tid=0x00007fcdd5f30000 nid=0x1641 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"Sweeper thread" #18 daemon prio=9 os_prio=0 cpu=199.72ms elapsed=489.15s tid=0x00007fcdd5f32000 nid=0x1642 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Common-Cleaner" #19 daemon prio=8 os_prio=0 cpu=8.83ms elapsed=489.13s tid=0x00007fcdd5f65800 nid=0x1645 in Object.wait() [0x00007fcd6ec0d000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at jdk.internal.ref.CleanerImpl.run(java.base@11.0.12/CleanerImpl.java:148)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
at jdk.internal.misc.InnocuousThread.run(java.base@11.0.12/InnocuousThread.java:134)
"process reaper" #24 daemon prio=10 os_prio=0 cpu=0.32ms elapsed=487.80s tid=0x00007fcdd6603800 nid=0x1658 runnable [0x00007fcd6ded2000]
java.lang.Thread.State: RUNNABLE
at java.lang.ProcessHandleImpl.waitForProcessExit0(java.base@11.0.12/Native Method)
at java.lang.ProcessHandleImpl$1.run(java.base@11.0.12/ProcessHandleImpl.java:138)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][[timer]]" #28 daemon prio=5 os_prio=0 cpu=53.67ms elapsed=482.81s tid=0x00007fcdd7613000 nid=0x16ac waiting on condition [0x00007fcd6e7d7000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.threadpool.ThreadPool$CachedTimeThread.run(ThreadPool.java:595)
"elasticsearch[es-node05-a][scheduler][T#1]" #29 daemon prio=5 os_prio=0 cpu=543.15ms elapsed=482.81s tid=0x00007fcdd761b800 nid=0x16ad waiting on condition [0x00007fcd6e6d6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00049e0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ml-cpp-log-tail-thread" #30 daemon prio=5 os_prio=0 cpu=7.66ms elapsed=479.77s tid=0x00007fcca8415800 nid=0x16b8 runnable [0x00007fcd6e4d4000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(java.base@11.0.12/Native Method)
at java.io.FileInputStream.read(java.base@11.0.12/FileInputStream.java:257)
at org.elasticsearch.xpack.ml.process.logging.CppLogMessageHandler.tailStream(CppLogMessageHandler.java:105)
at org.elasticsearch.xpack.ml.process.NativeController.lambda$tailLogsInThread$0(NativeController.java:74)
at org.elasticsearch.xpack.ml.process.NativeController$$Lambda$2826/0x000000084095b040.run(Unknown Source)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Connection evictor" #31 daemon prio=5 os_prio=0 cpu=5.29ms elapsed=478.98s tid=0x00007fcdd670e000 nid=0x16be waiting on condition [0x00007fcd6e3d3000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[scheduler][T#1]" #32 daemon prio=5 os_prio=0 cpu=34.55ms elapsed=478.87s tid=0x00007fcdd7a88000 nid=0x16bf waiting on condition [0x00007fcce32fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35cfc40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ticker-schedule-trigger-engine" #33 daemon prio=5 os_prio=0 cpu=53.09ms elapsed=478.87s tid=0x00007fcdd7a81800 nid=0x16c0 waiting on condition [0x00007fcce2cfa000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.xpack.watcher.trigger.schedule.engine.TickerScheduleTriggerEngine$Ticker.run(TickerScheduleTriggerEngine.java:193)
"elasticsearch[scheduler][T#1]" #34 daemon prio=5 os_prio=0 cpu=6.76ms elapsed=478.86s tid=0x00007fcdd778f000 nid=0x16c1 waiting on condition [0x00007fcce17f9000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c61c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#1]" #35 daemon prio=5 os_prio=0 cpu=254.20ms elapsed=477.43s tid=0x00007fcca8dd1000 nid=0x16c2 runnable [0x00007fcd6e5d5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7238> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c71e0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#2]" #36 daemon prio=5 os_prio=0 cpu=416.24ms elapsed=477.40s tid=0x00007fcca8dcd800 nid=0x16c3 runnable [0x00007fcce04f8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2678> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2588> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#1]" #37 daemon prio=5 os_prio=0 cpu=17939.89ms elapsed=472.48s tid=0x00007fcca900e000 nid=0x16d1 waiting on condition [0x00007fcc9f7fa000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#2]" #41 daemon prio=5 os_prio=0 cpu=18503.25ms elapsed=472.47s tid=0x00007fcc8c009800 nid=0x16d5 waiting on condition [0x00007fcc9f3f6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#3]" #42 daemon prio=5 os_prio=0 cpu=30137.67ms elapsed=472.47s tid=0x00007fcc8c00b800 nid=0x16d6 waiting on condition [0x00007fcc9f2f5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#4]" #43 daemon prio=5 os_prio=0 cpu=19775.56ms elapsed=472.47s tid=0x00007fcc8c00d800 nid=0x16d7 waiting on condition [0x00007fcc9f1f4000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][clusterApplierService#updateTask][T#1]" #44 daemon prio=5 os_prio=0 cpu=1736.31ms elapsed=472.47s tid=0x00007fcca9015800 nid=0x16d8 waiting on condition [0x00007fcc9f0f3000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c42638f8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.12/AbstractQueuedSynchronizer.java:2081)
at java.util.concurrent.PriorityBlockingQueue.take(java.base@11.0.12/PriorityBlockingQueue.java:546)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#4]" #47 daemon prio=5 os_prio=0 cpu=21844.19ms elapsed=472.46s tid=0x00007fcc7c004000 nid=0x16d9 runnable [0x00007fcc9eff2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e10> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3d20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#5]" #46 daemon prio=5 os_prio=0 cpu=111.96ms elapsed=472.46s tid=0x00007fcc70006000 nid=0x16da runnable [0x00007fcc9eef1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1dc0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1d68> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#3]" #45 daemon prio=5 os_prio=0 cpu=113.92ms elapsed=472.46s tid=0x00007fcc74002800 nid=0x16db runnable [0x00007fcc9edf0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3568> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3510> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#5]" #48 daemon prio=5 os_prio=0 cpu=18174.02ms elapsed=472.38s tid=0x00007fcc70012000 nid=0x16dc waiting on condition [0x00007fcc9eaef000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#6]" #49 daemon prio=5 os_prio=0 cpu=18075.81ms elapsed=472.38s tid=0x00007fcc74013800 nid=0x16dd waiting on condition [0x00007fcc9e9ee000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#7]" #50 daemon prio=5 os_prio=0 cpu=57.37ms elapsed=472.38s tid=0x00007fcc8c019000 nid=0x16de runnable [0x00007fcc9e8ed000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2dc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2d70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#6]" #52 daemon prio=5 os_prio=0 cpu=53.30ms elapsed=472.38s tid=0x00007fcc74018800 nid=0x16df runnable [0x00007fcc9e7ec000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c6a28> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c69d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#8]" #51 daemon prio=5 os_prio=0 cpu=19574.91ms elapsed=472.38s tid=0x00007fcc70017000 nid=0x16e0 runnable [0x00007fcc9e6eb000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c5920> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c5830> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#9]" #53 daemon prio=5 os_prio=0 cpu=82.14ms elapsed=472.38s tid=0x00007fcc8c01a800 nid=0x16e1 runnable [0x00007fcc9e5ea000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3748> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c36f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#10]" #54 daemon prio=5 os_prio=0 cpu=22860.96ms elapsed=472.38s tid=0x00007fcc7401a000 nid=0x16e2 runnable [0x00007fcc9e4e9000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2f60> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2e70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#11]" #55 daemon prio=5 os_prio=0 cpu=32.62ms elapsed=472.38s tid=0x00007fcc70018800 nid=0x16e3 runnable [0x00007fcc9e3e8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1f58> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1f00> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#12]" #56 daemon prio=5 os_prio=0 cpu=41.40ms elapsed=472.38s tid=0x00007fcc8c01c000 nid=0x16e4 runnable [0x00007fcc9e2e7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2fc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2f70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#13]" #57 daemon prio=5 os_prio=0 cpu=25.94ms elapsed=472.38s tid=0x00007fcc7401c000 nid=0x16e5 runnable [0x00007fcc9e1e6000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2058> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2000> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#14]" #58 daemon prio=5 os_prio=0 cpu=12165.07ms elapsed=472.38s tid=0x00007fcc8c01e000 nid=0x16e6 runnable [0x00007fcc9e0e5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c27c0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c26d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#15]" #59 daemon prio=5 os_prio=0 cpu=15.73ms elapsed=472.38s tid=0x00007fcc7001a000 nid=0x16e7 runnable [0x00007fcc9dfe4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c5988> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c5930> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#16]" #60 daemon prio=5 os_prio=0 cpu=43.82ms elapsed=472.38s tid=0x00007fcc7401e000 nid=0x16e8 runnable [0x00007fcc9dee3000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e78> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3e20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#17]" #61 daemon prio=5 os_prio=0 cpu=139.37ms elapsed=472.37s tid=0x00007fcc7001c000 nid=0x16e9 runnable [0x00007fcc9dde2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c8b00> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c8a10> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#18]" #62 daemon prio=5 os_prio=0 cpu=352.73ms elapsed=472.37s tid=0x00007fcc8c020000 nid=0x16ea runnable [0x00007fcc9dce1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c9278> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c9220> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#19]" #63 daemon prio=5 os_prio=0 cpu=53.08ms elapsed=472.37s tid=0x00007fcc74020000 nid=0x16eb runnable [0x00007fcc9dbe0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7360> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c7308> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#20]" #64 daemon prio=5 os_prio=0 cpu=55.77ms elapsed=472.37s tid=0x00007fcc7001f000 nid=0x16ec runnable [0x00007fcc9dadf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c6b28> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c6ad0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#21]" #66 daemon prio=5 os_prio=0 cpu=157.65ms elapsed=472.37s tid=0x00007fcc8c023000 nid=0x16ed runnable [0x00007fcc9d9de000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2828> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c27d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#22]" #65 daemon prio=5 os_prio=0 cpu=20647.74ms elapsed=472.37s tid=0x00007fcc74021800 nid=0x16ee runnable [0x00007fcc9d8dd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c3c861b0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35d3910> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#23]" #67 daemon prio=5 os_prio=0 cpu=50.78ms elapsed=472.37s tid=0x00007fcc8c024800 nid=0x16ef runnable [0x00007fcc9d7dc000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7460> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c7408> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#24]" #68 daemon prio=5 os_prio=0 cpu=13118.22ms elapsed=472.37s tid=0x00007fcc74023800 nid=0x16f0 runnable [0x00007fcc9d6db000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3f78> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3f20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][ml_utility][T#1]" #69 daemon prio=5 os_prio=0 cpu=29.30ms elapsed=471.48s tid=0x00007fcccc002800 nid=0x16f1 waiting on condition [0x00007fcc9fefd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0000e78> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][AsyncLucenePersistedState#updateTask][T#1]" #70 daemon prio=5 os_prio=0 cpu=926.06ms elapsed=471.47s tid=0x00007fcc8c031800 nid=0x16f2 runnable [0x00007fcc9fcfa000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:410)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:406)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:254)
at org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:44)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:118)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:130)
at org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.fieldsWriter(Lucene87StoredFieldsFormat.java:141)
at org.apache.lucene.index.StoredFieldsConsumer.initStoredFieldsWriter(StoredFieldsConsumer.java:48)
at org.apache.lucene.index.StoredFieldsConsumer.startDocument(StoredFieldsConsumer.java:55)
at org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:449)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:485)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208)
at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:419)
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1757)
at org.elasticsearch.gateway.PersistedClusterStateService$MetadataIndexWriter.updateIndexMetadataDocument(PersistedClusterStateService.java:483)
at org.elasticsearch.gateway.PersistedClusterStateService$Writer.updateMetadata(PersistedClusterStateService.java:668)
at org.elasticsearch.gateway.PersistedClusterStateService$Writer.writeIncrementalStateAndCommit(PersistedClusterStateService.java:602)
at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setLastAcceptedState(GatewayMetaState.java:543)
at org.elasticsearch.gateway.GatewayMetaState$AsyncLucenePersistedState$1.doRun(GatewayMetaState.java:428)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#1]" #71 daemon prio=5 os_prio=0 cpu=1172.24ms elapsed=471.09s tid=0x00007fcc60121800 nid=0x16f3 waiting on condition [0x00007fcc9fdfc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][trigger_engine_scheduler][T#1]" #72 daemon prio=5 os_prio=0 cpu=0.25ms elapsed=470.21s tid=0x00007fcc6806e000 nid=0x16f9 waiting on condition [0x00007fcc9ce0a000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c9320> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[keepAlive/7.10.1]" #21 prio=5 os_prio=0 cpu=0.32ms elapsed=470.19s tid=0x00007fcca902c000 nid=0x16fa waiting on condition [0x00007fcc9cd09000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c001c548> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11.0.12/AbstractQueuedSynchronizer.java:885)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1345)
at java.util.concurrent.CountDownLatch.await(java.base@11.0.12/CountDownLatch.java:232)
at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:89)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"DestroyJavaVM" #73 prio=5 os_prio=0 cpu=13626.15ms elapsed=470.19s tid=0x00007fcdd4019800 nid=0x1620 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][DanglingIndices#updateTask][T#1]" #74 daemon prio=5 os_prio=0 cpu=118.86ms elapsed=469.79s tid=0x00007fcc6811b000 nid=0x16fb runnable [0x00007fcc9fffd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:410)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:406)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:254)
at org.apache.lucene.store.Directory.copyFrom(Directory.java:183)
at org.elasticsearch.gateway.MetadataStateFormat.copyStateToExtraLocations(MetadataStateFormat.java:140)
at org.elasticsearch.gateway.MetadataStateFormat.write(MetadataStateFormat.java:244)
at org.elasticsearch.gateway.MetadataStateFormat.writeAndCleanup(MetadataStateFormat.java:185)
at org.elasticsearch.index.IndexService.writeDanglingIndicesInfo(IndexService.java:353)
- locked <0x000000043f8a3490> (a org.elasticsearch.index.IndexService)
at org.elasticsearch.indices.IndicesService$6.doRun(IndicesService.java:1581)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#7]" #75 daemon prio=5 os_prio=0 cpu=18379.43ms elapsed=469.72s tid=0x00007fcc24101800 nid=0x16fd runnable [0x00007fcc9cf0a000]
java.lang.Thread.State: RUNNABLE
at java.util.zip.CheckedOutputStream.write(java.base@11.0.12/CheckedOutputStream.java:75)
at java.io.BufferedOutputStream.write(java.base@11.0.12/BufferedOutputStream.java:123)
- locked <0x00000007fe94cc70> (a java.io.BufferedOutputStream)
at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)
at org.elasticsearch.common.lucene.store.FilterIndexOutput.writeBytes(FilterIndexOutput.java:59)
at org.elasticsearch.index.store.Store$LuceneVerifyingIndexOutput.writeBytes(Store.java:1223)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:126)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#8]" #76 daemon prio=5 os_prio=0 cpu=18352.62ms elapsed=469.69s tid=0x00007fcc20106800 nid=0x16fe runnable [0x00007fcc9d1d6000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#9]" #77 daemon prio=5 os_prio=0 cpu=15976.91ms elapsed=469.69s tid=0x00007fcca0001800 nid=0x16ff waiting on condition [0x00007fcc9d0d5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#10]" #78 daemon prio=5 os_prio=0 cpu=18455.08ms elapsed=469.69s tid=0x00007fcc24103000 nid=0x1700 waiting on condition [0x00007fcc9c93e000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#1]" #79 daemon prio=5 os_prio=0 cpu=1727.94ms elapsed=469.52s tid=0x00007fcbf80d7800 nid=0x1701 waiting on condition [0x00007fcc9c63d000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#11]" #81 daemon prio=5 os_prio=0 cpu=13505.60ms elapsed=467.72s tid=0x00007fcc48101800 nid=0x1704 runnable [0x00007fcd6e90a000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.monitorFSHealth(FsHealthService.java:171)
at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.run(FsHealthService.java:146)
at org.elasticsearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:213)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#12]" #82 daemon prio=5 os_prio=0 cpu=17987.31ms elapsed=467.72s tid=0x00007fcc10101800 nid=0x1705 waiting on condition [0x00007fcd6eb0c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][refresh][T#1]" #83 daemon prio=5 os_prio=0 cpu=142.20ms elapsed=464.78s tid=0x00007fcccc004800 nid=0x170f waiting on condition [0x00007fcce01f7000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#1]" #84 daemon prio=5 os_prio=0 cpu=29.13ms elapsed=461.19s tid=0x00007fccd415d800 nid=0x1714 waiting on condition [0x00007fcce2dfb000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#13]" #85 daemon prio=5 os_prio=0 cpu=11555.64ms elapsed=458.51s tid=0x00007fcc08105000 nid=0x171a runnable [0x00007fcbc4bbf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#2]" #86 daemon prio=5 os_prio=0 cpu=2.68ms elapsed=451.21s tid=0x00007fcc7c027800 nid=0x1729 waiting on condition [0x00007fcd6ea0b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Attach Listener" #87 daemon prio=9 os_prio=0 cpu=1.38ms elapsed=450.45s tid=0x00007fcd54001000 nid=0x1743 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][write][T#3]" #88 daemon prio=5 os_prio=0 cpu=10.17ms elapsed=441.21s tid=0x00007fcc7403f000 nid=0x1753 waiting on condition [0x00007fcbc4cc0000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#2]" #89 daemon prio=5 os_prio=0 cpu=1144.07ms elapsed=439.76s tid=0x00007fcc94063000 nid=0x1754 waiting on condition [0x00007fcc9c43b000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#4]" #90 daemon prio=5 os_prio=0 cpu=3.93ms elapsed=431.22s tid=0x00007fcc7c01e800 nid=0x1774 waiting on condition [0x00007fcbc4abe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#5]" #91 daemon prio=5 os_prio=0 cpu=2.07ms elapsed=421.20s tid=0x00007fcbe805a000 nid=0x1787 waiting on condition [0x00007fcbc49bd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#6]" #92 daemon prio=5 os_prio=0 cpu=1.13ms elapsed=411.22s tid=0x00007fcc70072800 nid=0x1793 waiting on condition [0x00007fcc9f5f8000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#7]" #93 daemon prio=5 os_prio=0 cpu=1.73ms elapsed=401.21s tid=0x00007fccd4022800 nid=0x17d1 waiting on condition [0x00007fcc9f6f9000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#8]" #94 daemon prio=5 os_prio=0 cpu=2.22ms elapsed=391.21s tid=0x00007fcc5c05e800 nid=0x17fa waiting on condition [0x00007fcb85eff000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#9]" #95 daemon prio=5 os_prio=0 cpu=1.05ms elapsed=381.21s tid=0x00007fcc5c05f000 nid=0x1816 waiting on condition [0x00007fcc9f4f7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#10]" #96 daemon prio=5 os_prio=0 cpu=4.04ms elapsed=371.21s tid=0x00007fcc7c074000 nid=0x181d waiting on condition [0x00007fcbc46bc000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#11]" #100 daemon prio=5 os_prio=0 cpu=0.95ms elapsed=361.22s tid=0x00007fcbe8060800 nid=0x1828 waiting on condition [0x00007fcb85dfe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#12]" #101 daemon prio=5 os_prio=0 cpu=2.05ms elapsed=351.20s tid=0x00007fcc7404e000 nid=0x1835 waiting on condition [0x00007fcaf0434000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#13]" #102 daemon prio=5 os_prio=0 cpu=2.99ms elapsed=341.21s tid=0x00007fcc04027000 nid=0x184e waiting on condition [0x00007fcb50d76000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#14]" #103 daemon prio=5 os_prio=0 cpu=1.97ms elapsed=331.20s tid=0x00007fcbf81f4000 nid=0x1861 waiting on condition [0x00007fcaf2f87000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#3]" #104 daemon prio=5 os_prio=0 cpu=649.41ms elapsed=324.29s tid=0x00007fcc94066800 nid=0x1869 waiting on condition [0x00007fca33a0c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#15]" #105 daemon prio=5 os_prio=0 cpu=1.26ms elapsed=321.21s tid=0x00007fcc70076800 nid=0x186c waiting on condition [0x00007fca3390b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#16]" #106 daemon prio=5 os_prio=0 cpu=8.99ms elapsed=311.22s tid=0x00007fcc8c003800 nid=0x1878 waiting on condition [0x00007fca3380a000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#17]" #107 daemon prio=5 os_prio=0 cpu=4.95ms elapsed=301.21s tid=0x00007fcbf80d4000 nid=0x1886 waiting on condition [0x00007fca33709000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#18]" #108 daemon prio=5 os_prio=0 cpu=1.68ms elapsed=291.21s tid=0x00007fcc04028000 nid=0x188d waiting on condition [0x00007fca33608000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#19]" #109 daemon prio=5 os_prio=0 cpu=9.49ms elapsed=281.21s tid=0x00007fcbfc224800 nid=0x1895 waiting on condition [0x00007fca33507000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#20]" #110 daemon prio=5 os_prio=0 cpu=1.08ms elapsed=271.20s tid=0x00007fcbfc21f800 nid=0x1898 waiting on condition [0x00007fca33406000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#21]" #111 daemon prio=5 os_prio=0 cpu=5.47ms elapsed=261.20s tid=0x00007fcc04025000 nid=0x18a1 waiting on condition [0x00007fca33305000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#22]" #112 daemon prio=5 os_prio=0 cpu=1.01ms elapsed=251.22s tid=0x00007fcbe8062000 nid=0x18bc waiting on condition [0x00007fca33002000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#23]" #113 daemon prio=5 os_prio=0 cpu=1.58ms elapsed=241.21s tid=0x00007fcbfc221000 nid=0x18ce waiting on condition [0x00007fca33103000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#24]" #114 daemon prio=5 os_prio=0 cpu=1.67ms elapsed=231.21s tid=0x00007fcc7007e800 nid=0x18d1 waiting on condition [0x00007fca33204000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][flush][T#1]" #115 daemon prio=5 os_prio=0 cpu=1.71ms elapsed=165.03s tid=0x00007fcccc00e800 nid=0x190b waiting on condition [0x00007fca32f01000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c003e7a8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][flush][T#2]" #116 daemon prio=5 os_prio=0 cpu=1.99ms elapsed=165.03s tid=0x00007fcccc00f800 nid=0x190c waiting on condition [0x00007fca32e00000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c003e7a8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][flush][T#3]" #117 daemon prio=5 os_prio=0 cpu=0.78ms elapsed=165.03s tid=0x00007fcccc010800 nid=0x190d waiting on condition [0x00007fca32cff000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c003e7a8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][refresh][T#2]" #118 daemon prio=5 os_prio=0 cpu=11.51ms elapsed=79.77s tid=0x00007fcccc012000 nid=0x1979 waiting on condition [0x00007fc6aa8fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#6]" #119 daemon prio=5 os_prio=0 cpu=66.18ms elapsed=56.61s tid=0x00007fcc700ae000 nid=0x19ae waiting on condition [0x00007fcaf0535000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#7]" #120 daemon prio=5 os_prio=0 cpu=66.32ms elapsed=56.61s tid=0x00007fcc70095800 nid=0x19af waiting on condition [0x00007fcaf0636000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#8]" #121 daemon prio=5 os_prio=0 cpu=88.36ms elapsed=56.61s tid=0x00007fcc700a8800 nid=0x19b0 waiting on condition [0x00007fc6aaafe000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#9]" #122 daemon prio=5 os_prio=0 cpu=1504.97ms elapsed=56.61s tid=0x00007fcc700a9000 nid=0x19b1 waiting on condition [0x00007fc6aa9fd000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#1]" #123 daemon prio=5 os_prio=0 cpu=217.31ms elapsed=54.77s tid=0x00007fcc4011d000 nid=0x19b3 waiting on condition [0x00007fcc9c53c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#2]" #124 daemon prio=5 os_prio=0 cpu=4651.91ms elapsed=54.77s tid=0x00007fcc60144800 nid=0x19b4 waiting on condition [0x00007fc6aa029000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#3]" #125 daemon prio=5 os_prio=0 cpu=75.85ms elapsed=54.76s tid=0x00007fcc60146000 nid=0x19b5 waiting on condition [0x00007fca301c7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#4]" #126 daemon prio=5 os_prio=0 cpu=115.14ms elapsed=54.72s tid=0x00007fcc64123000 nid=0x19b6 waiting on condition [0x00007fc6a9f28000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#5]" #127 daemon prio=5 os_prio=0 cpu=446.65ms elapsed=54.56s tid=0x00007fcc50101800 nid=0x19b8 waiting on condition [0x00007fc6a9d26000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#6]" #128 daemon prio=5 os_prio=0 cpu=570.46ms elapsed=54.56s tid=0x00007fcc10105800 nid=0x19b9 waiting on condition [0x00007fc6a9c25000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#7]" #129 daemon prio=5 os_prio=0 cpu=17.14ms elapsed=54.55s tid=0x00007fcc38106000 nid=0x19ba waiting on condition [0x00007fc6a9b24000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#8]" #130 daemon prio=5 os_prio=0 cpu=193.51ms elapsed=54.51s tid=0x00007fcc1c104800 nid=0x19bb waiting on condition [0x00007fc6a9a23000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#9]" #131 daemon prio=5 os_prio=0 cpu=143.56ms elapsed=54.49s tid=0x00007fcc4011e000 nid=0x19bc waiting on condition [0x00007fc6a9722000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#10]" #132 daemon prio=5 os_prio=0 cpu=349.87ms elapsed=47.72s tid=0x00007fcc6412c000 nid=0x19bf waiting on condition [0x00007fcaf0737000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#11]" #133 daemon prio=5 os_prio=0 cpu=760.15ms elapsed=46.50s tid=0x00007fcc60147800 nid=0x19c6 waiting on condition [0x00007fc6a9e27000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#12]" #134 daemon prio=5 os_prio=0 cpu=481.10ms elapsed=42.66s tid=0x00007fcc4c101000 nid=0x19cc waiting on condition [0x00007fc6a9621000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#13]" #135 daemon prio=5 os_prio=0 cpu=93.92ms elapsed=41.94s tid=0x00007fcc1010b000 nid=0x19cd waiting on condition [0x00007fc6a9520000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#14]" #136 daemon prio=5 os_prio=0 cpu=7.59ms elapsed=29.11s tid=0x00007fcc50105000 nid=0x19da waiting on condition [0x00007fc6a941f000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#15]" #137 daemon prio=5 os_prio=0 cpu=29.38ms elapsed=28.55s tid=0x00007fcc4011f800 nid=0x19db waiting on condition [0x00007fc6a931e000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#16]" #138 daemon prio=5 os_prio=0 cpu=555.11ms elapsed=22.66s tid=0x00007fcc1c106000 nid=0x19dc waiting on condition [0x00007fc6a921d000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#17]" #139 daemon prio=5 os_prio=0 cpu=777.49ms elapsed=22.60s tid=0x00007fcc50107000 nid=0x19dd waiting on condition [0x00007fc6a911c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#18]" #140 daemon prio=5 os_prio=0 cpu=86.86ms elapsed=22.58s tid=0x00007fcc1010d000 nid=0x19de waiting on condition [0x00007fc6a901b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#19]" #141 daemon prio=5 os_prio=0 cpu=48.35ms elapsed=22.58s tid=0x00007fcc28105000 nid=0x19df waiting on condition [0x00007fc6a8f1a000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#20]" #142 daemon prio=5 os_prio=0 cpu=638.37ms elapsed=22.56s tid=0x00007fcc38108000 nid=0x19e0 waiting on condition [0x00007fc6a8e19000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#21]" #143 daemon prio=5 os_prio=0 cpu=537.57ms elapsed=22.50s tid=0x00007fcc1010f000 nid=0x19e1 waiting on condition [0x00007fc6a8d18000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#22]" #144 daemon prio=5 os_prio=0 cpu=31.48ms elapsed=22.30s tid=0x00007fcca0006800 nid=0x19e2 waiting on condition [0x00007fc6a8c17000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#23]" #145 daemon prio=5 os_prio=0 cpu=20.27ms elapsed=22.17s tid=0x00007fcc44101800 nid=0x19e5 waiting on condition [0x00007fc6a8914000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#24]" #146 daemon prio=5 os_prio=0 cpu=60.50ms elapsed=22.05s tid=0x00007fcc641ad800 nid=0x19e6 waiting on condition [0x00007fc6a8813000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#25]" #147 daemon prio=5 os_prio=0 cpu=6.00ms elapsed=22.01s tid=0x00007fcc10111000 nid=0x19e7 waiting on condition [0x00007fc6a8712000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#26]" #148 daemon prio=5 os_prio=0 cpu=25.96ms elapsed=21.96s tid=0x00007fcc1c107800 nid=0x19e8 waiting on condition [0x00007fc6a8611000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#27]" #149 daemon prio=5 os_prio=0 cpu=13.42ms elapsed=21.42s tid=0x00007fcc4c103000 nid=0x19e9 waiting on condition [0x00007fc6a8510000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#28]" #150 daemon prio=5 os_prio=0 cpu=10.43ms elapsed=21.16s tid=0x00007fcc28107000 nid=0x19f0 waiting on condition [0x00007fc6a8b16000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#29]" #151 daemon prio=5 os_prio=0 cpu=167.70ms elapsed=10.30s tid=0x00007fcc28108800 nid=0x19f4 waiting on condition [0x00007fc6a8a15000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#30]" #152 daemon prio=5 os_prio=0 cpu=172.83ms elapsed=10.29s tid=0x00007fcc4c113800 nid=0x19f5 waiting on condition [0x00007fc6a840f000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#31]" #153 daemon prio=5 os_prio=0 cpu=3.82ms elapsed=10.29s tid=0x00007fcc1c110000 nid=0x19f6 waiting on condition [0x00007fc6a830e000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#32]" #154 daemon prio=5 os_prio=0 cpu=2.31ms elapsed=10.29s tid=0x00007fcc10113000 nid=0x19f7 waiting on condition [0x00007fc6a820d000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#33]" #155 daemon prio=5 os_prio=0 cpu=3.45ms elapsed=10.27s tid=0x00007fcca010e800 nid=0x19f8 waiting on condition [0x00007fc6a810c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#34]" #156 daemon prio=5 os_prio=0 cpu=2.49ms elapsed=10.27s tid=0x00007fcc40121800 nid=0x19f9 waiting on condition [0x00007fc697cba000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#35]" #157 daemon prio=5 os_prio=0 cpu=983.40ms elapsed=10.26s tid=0x00007fcca0110000 nid=0x19fa waiting on condition [0x00007fc697bb9000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#36]" #158 daemon prio=5 os_prio=0 cpu=957.36ms elapsed=10.24s tid=0x00007fcc4c115800 nid=0x19fb waiting on condition [0x00007fc697ab8000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#37]" #159 daemon prio=5 os_prio=0 cpu=629.56ms elapsed=10.24s tid=0x00007fcc64177800 nid=0x19fc waiting on condition [0x00007fc6979b7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"VM Thread" os_prio=0 cpu=508.64ms elapsed=489.16s tid=0x00007fcdd5f1f800 nid=0x163b runnable
"GC Thread#0" os_prio=0 cpu=3488.74ms elapsed=492.31s tid=0x00007fcdd4033000 nid=0x1621 runnable
"GC Thread#1" os_prio=0 cpu=367.82ms elapsed=485.14s tid=0x00007fcd64001000 nid=0x168d runnable
"GC Thread#2" os_prio=0 cpu=390.10ms elapsed=485.14s tid=0x00007fcd64002000 nid=0x168e runnable
"GC Thread#3" os_prio=0 cpu=355.04ms elapsed=485.14s tid=0x00007fcd64003000 nid=0x168f runnable
"GC Thread#4" os_prio=0 cpu=366.01ms elapsed=485.14s tid=0x00007fcd64004000 nid=0x1690 runnable
"GC Thread#5" os_prio=0 cpu=379.13ms elapsed=485.14s tid=0x00007fcd64005000 nid=0x1691 runnable
"GC Thread#6" os_prio=0 cpu=345.89ms elapsed=485.14s tid=0x00007fcd64006800 nid=0x1692 runnable
"GC Thread#7" os_prio=0 cpu=380.02ms elapsed=485.14s tid=0x00007fcd64008000 nid=0x1693 runnable
"GC Thread#8" os_prio=0 cpu=374.43ms elapsed=485.14s tid=0x00007fcd64009800 nid=0x1694 runnable
"GC Thread#9" os_prio=0 cpu=355.54ms elapsed=485.14s tid=0x00007fcd6400b000 nid=0x1695 runnable
"GC Thread#10" os_prio=0 cpu=369.65ms elapsed=485.14s tid=0x00007fcd6400c800 nid=0x1696 runnable
"GC Thread#11" os_prio=0 cpu=355.71ms elapsed=485.14s tid=0x00007fcd6400e000 nid=0x1697 runnable
"GC Thread#12" os_prio=0 cpu=392.07ms elapsed=485.14s tid=0x00007fcd6400f800 nid=0x1698 runnable
"GC Thread#13" os_prio=0 cpu=378.80ms elapsed=485.14s tid=0x00007fcd64011000 nid=0x1699 runnable
"GC Thread#14" os_prio=0 cpu=384.93ms elapsed=485.14s tid=0x00007fcd64012800 nid=0x169a runnable
"GC Thread#15" os_prio=0 cpu=518.45ms elapsed=485.14s tid=0x00007fcd64014000 nid=0x169b runnable
"GC Thread#16" os_prio=0 cpu=358.10ms elapsed=485.13s tid=0x00007fcd64015800 nid=0x169c runnable
"GC Thread#17" os_prio=0 cpu=352.70ms elapsed=485.13s tid=0x00007fcd64017000 nid=0x169d runnable
"G1 Main Marker" os_prio=0 cpu=26.56ms elapsed=492.30s tid=0x00007fcdd4069000 nid=0x1622 runnable
"G1 Conc#0" os_prio=0 cpu=197.29ms elapsed=492.30s tid=0x00007fcdd406b000 nid=0x1623 runnable
"G1 Conc#1" os_prio=0 cpu=200.65ms elapsed=484.05s tid=0x00007fcd78001000 nid=0x169e runnable
"G1 Conc#2" os_prio=0 cpu=212.28ms elapsed=484.05s tid=0x00007fcd78002000 nid=0x169f runnable
"G1 Conc#3" os_prio=0 cpu=195.82ms elapsed=484.05s tid=0x00007fcd78003800 nid=0x16a0 runnable
"G1 Conc#4" os_prio=0 cpu=192.30ms elapsed=484.05s tid=0x00007fcd78005000 nid=0x16a1 runnable
"G1 Refine#0" os_prio=0 cpu=13.32ms elapsed=489.16s tid=0x00007fcdd5ee4800 nid=0x1639 runnable
"G1 Refine#1" os_prio=0 cpu=3.07ms elapsed=470.94s tid=0x00007fcd74001000 nid=0x16f4 runnable
"G1 Refine#2" os_prio=0 cpu=1.48ms elapsed=470.94s tid=0x00007fcc0c001000 nid=0x16f5 runnable
"G1 Young RemSet Sampling" os_prio=0 cpu=783.74ms elapsed=489.16s tid=0x00007fcdd5ee6800 nid=0x163a runnable
"VM Periodic Task Thread" os_prio=0 cpu=221.19ms elapsed=489.14s tid=0x00007fcdd5f63000 nid=0x1644 waiting on condition
JNI global refs: 42, weak refs: 45
2021-10-14 12:41:03
Full thread dump OpenJDK 64-Bit Server VM (11.0.12+7-post-Debian-2 mixed mode, sharing):
Threads class SMR info:
_java_thread_list=0x00007fc6a0002be0, length=135, elements={
0x00007fcdd5f22800, 0x00007fcdd5f24800, 0x00007fcdd5f2a000, 0x00007fcdd5f2c000,
0x00007fcdd5f2e000, 0x00007fcdd5f30000, 0x00007fcdd5f32000, 0x00007fcdd5f65800,
0x00007fcdd6603800, 0x00007fcdd7613000, 0x00007fcdd761b800, 0x00007fcca8415800,
0x00007fcdd670e000, 0x00007fcdd7a88000, 0x00007fcdd7a81800, 0x00007fcdd778f000,
0x00007fcca8dd1000, 0x00007fcca8dcd800, 0x00007fcca900e000, 0x00007fcc8c009800,
0x00007fcc8c00b800, 0x00007fcc8c00d800, 0x00007fcca9015800, 0x00007fcc7c004000,
0x00007fcc70006000, 0x00007fcc74002800, 0x00007fcc70012000, 0x00007fcc74013800,
0x00007fcc8c019000, 0x00007fcc74018800, 0x00007fcc70017000, 0x00007fcc8c01a800,
0x00007fcc7401a000, 0x00007fcc70018800, 0x00007fcc8c01c000, 0x00007fcc7401c000,
0x00007fcc8c01e000, 0x00007fcc7001a000, 0x00007fcc7401e000, 0x00007fcc7001c000,
0x00007fcc8c020000, 0x00007fcc74020000, 0x00007fcc7001f000, 0x00007fcc8c023000,
0x00007fcc74021800, 0x00007fcc8c024800, 0x00007fcc74023800, 0x00007fcccc002800,
0x00007fcc8c031800, 0x00007fcc60121800, 0x00007fcc6806e000, 0x00007fcca902c000,
0x00007fcdd4019800, 0x00007fcc6811b000, 0x00007fcc24101800, 0x00007fcc20106800,
0x00007fcca0001800, 0x00007fcc24103000, 0x00007fcbf80d7800, 0x00007fcc48101800,
0x00007fcc10101800, 0x00007fcccc004800, 0x00007fccd415d800, 0x00007fcc08105000,
0x00007fcc7c027800, 0x00007fcd54001000, 0x00007fcc7403f000, 0x00007fcc94063000,
0x00007fcc7c01e800, 0x00007fcbe805a000, 0x00007fcc70072800, 0x00007fccd4022800,
0x00007fcc5c05e800, 0x00007fcc5c05f000, 0x00007fcc7c074000, 0x00007fcbe8060800,
0x00007fcc7404e000, 0x00007fcc04027000, 0x00007fcbf81f4000, 0x00007fcc94066800,
0x00007fcc70076800, 0x00007fcc8c003800, 0x00007fcbf80d4000, 0x00007fcc04028000,
0x00007fcbfc224800, 0x00007fcbfc21f800, 0x00007fcc04025000, 0x00007fcbe8062000,
0x00007fcbfc221000, 0x00007fcc7007e800, 0x00007fcccc00f800, 0x00007fcccc012000,
0x00007fcc700ae000, 0x00007fcc70095800, 0x00007fcc700a8800, 0x00007fcc700a9000,
0x00007fcc4011d000, 0x00007fcc60144800, 0x00007fcc60146000, 0x00007fcc64123000,
0x00007fcc50101800, 0x00007fcc10105800, 0x00007fcc38106000, 0x00007fcc1c104800,
0x00007fcc4011e000, 0x00007fcc6412c000, 0x00007fcc60147800, 0x00007fcc4c101000,
0x00007fcc1010b000, 0x00007fcc50105000, 0x00007fcc4011f800, 0x00007fcc1c106000,
0x00007fcc50107000, 0x00007fcc1010d000, 0x00007fcc28105000, 0x00007fcc38108000,
0x00007fcc1010f000, 0x00007fcca0006800, 0x00007fcc44101800, 0x00007fcc641ad800,
0x00007fcc10111000, 0x00007fcc1c107800, 0x00007fcc4c103000, 0x00007fcc28107000,
0x00007fcc28108800, 0x00007fcc4c113800, 0x00007fcc1c110000, 0x00007fcc10113000,
0x00007fcca010e800, 0x00007fcc40121800, 0x00007fcca0110000, 0x00007fcc4c115800,
0x00007fcc64177800, 0x00007fcc14103000, 0x00007fc6a0001800
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=12.38ms elapsed=836.79s tid=0x00007fcdd5f22800 nid=0x163c waiting on condition [0x00007fcd6f7fe000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@11.0.12/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@11.0.12/Reference.java:241)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@11.0.12/Reference.java:213)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=1.34ms elapsed=836.79s tid=0x00007fcdd5f24800 nid=0x163d in Object.wait() [0x00007fcd6f6fd000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.12/Finalizer.java:170)
"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.24ms elapsed=836.79s tid=0x00007fcdd5f2a000 nid=0x163e runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Service Thread" #5 daemon prio=9 os_prio=0 cpu=0.17ms elapsed=836.79s tid=0x00007fcdd5f2c000 nid=0x163f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=92228.51ms elapsed=836.79s tid=0x00007fcdd5f2e000 nid=0x1640 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"C1 CompilerThread0" #14 daemon prio=9 os_prio=0 cpu=5609.38ms elapsed=836.79s tid=0x00007fcdd5f30000 nid=0x1641 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"Sweeper thread" #18 daemon prio=9 os_prio=0 cpu=257.74ms elapsed=836.79s tid=0x00007fcdd5f32000 nid=0x1642 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Common-Cleaner" #19 daemon prio=8 os_prio=0 cpu=9.20ms elapsed=836.77s tid=0x00007fcdd5f65800 nid=0x1645 in Object.wait() [0x00007fcd6ec0d000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at jdk.internal.ref.CleanerImpl.run(java.base@11.0.12/CleanerImpl.java:148)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
at jdk.internal.misc.InnocuousThread.run(java.base@11.0.12/InnocuousThread.java:134)
"process reaper" #24 daemon prio=10 os_prio=0 cpu=0.32ms elapsed=835.44s tid=0x00007fcdd6603800 nid=0x1658 runnable [0x00007fcd6ded2000]
java.lang.Thread.State: RUNNABLE
at java.lang.ProcessHandleImpl.waitForProcessExit0(java.base@11.0.12/Native Method)
at java.lang.ProcessHandleImpl$1.run(java.base@11.0.12/ProcessHandleImpl.java:138)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][[timer]]" #28 daemon prio=5 os_prio=0 cpu=98.17ms elapsed=830.45s tid=0x00007fcdd7613000 nid=0x16ac waiting on condition [0x00007fcd6e7d7000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.threadpool.ThreadPool$CachedTimeThread.run(ThreadPool.java:595)
"elasticsearch[es-node05-a][scheduler][T#1]" #29 daemon prio=5 os_prio=0 cpu=882.94ms elapsed=830.44s tid=0x00007fcdd761b800 nid=0x16ad waiting on condition [0x00007fcd6e6d6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00049e0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ml-cpp-log-tail-thread" #30 daemon prio=5 os_prio=0 cpu=7.66ms elapsed=827.41s tid=0x00007fcca8415800 nid=0x16b8 runnable [0x00007fcd6e4d4000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(java.base@11.0.12/Native Method)
at java.io.FileInputStream.read(java.base@11.0.12/FileInputStream.java:257)
at org.elasticsearch.xpack.ml.process.logging.CppLogMessageHandler.tailStream(CppLogMessageHandler.java:105)
at org.elasticsearch.xpack.ml.process.NativeController.lambda$tailLogsInThread$0(NativeController.java:74)
at org.elasticsearch.xpack.ml.process.NativeController$$Lambda$2826/0x000000084095b040.run(Unknown Source)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Connection evictor" #31 daemon prio=5 os_prio=0 cpu=7.46ms elapsed=826.62s tid=0x00007fcdd670e000 nid=0x16be waiting on condition [0x00007fcd6e3d3000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[scheduler][T#1]" #32 daemon prio=5 os_prio=0 cpu=46.68ms elapsed=826.51s tid=0x00007fcdd7a88000 nid=0x16bf waiting on condition [0x00007fcce32fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35cfc40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ticker-schedule-trigger-engine" #33 daemon prio=5 os_prio=0 cpu=83.60ms elapsed=826.51s tid=0x00007fcdd7a81800 nid=0x16c0 waiting on condition [0x00007fcce2cfa000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.xpack.watcher.trigger.schedule.engine.TickerScheduleTriggerEngine$Ticker.run(TickerScheduleTriggerEngine.java:193)
"elasticsearch[scheduler][T#1]" #34 daemon prio=5 os_prio=0 cpu=10.34ms elapsed=826.50s tid=0x00007fcdd778f000 nid=0x16c1 waiting on condition [0x00007fcce17f9000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c61c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#1]" #35 daemon prio=5 os_prio=0 cpu=295.24ms elapsed=825.07s tid=0x00007fcca8dd1000 nid=0x16c2 runnable [0x00007fcd6e5d5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7238> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c71e0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#2]" #36 daemon prio=5 os_prio=0 cpu=478.38ms elapsed=825.04s tid=0x00007fcca8dcd800 nid=0x16c3 runnable [0x00007fcce04f8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2678> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2588> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#1]" #37 daemon prio=5 os_prio=0 cpu=30368.25ms elapsed=820.12s tid=0x00007fcca900e000 nid=0x16d1 waiting on condition [0x00007fcc9f7fa000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#2]" #41 daemon prio=5 os_prio=0 cpu=31569.42ms elapsed=820.11s tid=0x00007fcc8c009800 nid=0x16d5 waiting on condition [0x00007fcc9f3f6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#3]" #42 daemon prio=5 os_prio=0 cpu=42978.65ms elapsed=820.11s tid=0x00007fcc8c00b800 nid=0x16d6 waiting on condition [0x00007fcc9f2f5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#4]" #43 daemon prio=5 os_prio=0 cpu=24087.08ms elapsed=820.11s tid=0x00007fcc8c00d800 nid=0x16d7 runnable [0x00007fcc9f1f4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][clusterApplierService#updateTask][T#1]" #44 daemon prio=5 os_prio=0 cpu=2034.28ms elapsed=820.11s tid=0x00007fcca9015800 nid=0x16d8 runnable [0x00007fcc9f0f2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.mkdir0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.mkdir(java.base@11.0.12/UnixNativeDispatcher.java:229)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(java.base@11.0.12/UnixFileSystemProvider.java:385)
at java.nio.file.Files.createDirectory(java.base@11.0.12/Files.java:690)
at java.nio.file.Files.createAndCheckIsDirectory(java.base@11.0.12/Files.java:797)
at java.nio.file.Files.createDirectories(java.base@11.0.12/Files.java:783)
at org.elasticsearch.index.store.FsDirectoryFactory.newDirectory(FsDirectoryFactory.java:66)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:460)
- locked <0x00000007634c3078> (a org.elasticsearch.index.IndexService)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:766)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:177)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:593)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:570)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:248)
- locked <0x00000001c3b41748> (a org.elasticsearch.indices.cluster.IndicesClusterStateService)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:510)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:500)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418)
at org.elasticsearch.cluster.service.ClusterApplierService.access$000(ClusterApplierService.java:68)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#4]" #47 daemon prio=5 os_prio=0 cpu=31998.42ms elapsed=820.10s tid=0x00007fcc7c004000 nid=0x16d9 runnable [0x00007fcc9eff2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e10> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3d20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#5]" #46 daemon prio=5 os_prio=0 cpu=176.99ms elapsed=820.10s tid=0x00007fcc70006000 nid=0x16da runnable [0x00007fcc9eef1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1dc0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1d68> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#3]" #45 daemon prio=5 os_prio=0 cpu=188.00ms elapsed=820.10s tid=0x00007fcc74002800 nid=0x16db runnable [0x00007fcc9edf0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3568> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3510> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#5]" #48 daemon prio=5 os_prio=0 cpu=31984.02ms elapsed=820.02s tid=0x00007fcc70012000 nid=0x16dc waiting on condition [0x00007fcc9eaef000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#6]" #49 daemon prio=5 os_prio=0 cpu=31222.72ms elapsed=820.02s tid=0x00007fcc74013800 nid=0x16dd waiting on condition [0x00007fcc9e9ee000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#7]" #50 daemon prio=5 os_prio=0 cpu=136.13ms elapsed=820.02s tid=0x00007fcc8c019000 nid=0x16de runnable [0x00007fcc9e8ed000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2dc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2d70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#6]" #52 daemon prio=5 os_prio=0 cpu=85.39ms elapsed=820.02s tid=0x00007fcc74018800 nid=0x16df runnable [0x00007fcc9e7ec000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c6a28> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c69d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#8]" #51 daemon prio=5 os_prio=0 cpu=42270.67ms elapsed=820.02s tid=0x00007fcc70017000 nid=0x16e0 runnable [0x00007fcc9e6ea000]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap$HashIterator.<init>(java.base@11.0.12/HashMap.java:1481)
at java.util.HashMap$KeyIterator.<init>(java.base@11.0.12/HashMap.java:1514)
at java.util.HashMap$KeySet.iterator(java.base@11.0.12/HashMap.java:912)
at java.util.HashSet.iterator(java.base@11.0.12/HashSet.java:173)
at sun.nio.ch.Util$2.iterator(java.base@11.0.12/Util.java:352)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:608)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#9]" #53 daemon prio=5 os_prio=0 cpu=180.99ms elapsed=820.02s tid=0x00007fcc8c01a800 nid=0x16e1 runnable [0x00007fcc9e5ea000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3748> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c36f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#10]" #54 daemon prio=5 os_prio=0 cpu=32967.34ms elapsed=820.02s tid=0x00007fcc7401a000 nid=0x16e2 runnable [0x00007fcc9e4e9000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2f60> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2e70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#11]" #55 daemon prio=5 os_prio=0 cpu=82.98ms elapsed=820.02s tid=0x00007fcc70018800 nid=0x16e3 runnable [0x00007fcc9e3e8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1f58> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1f00> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#12]" #56 daemon prio=5 os_prio=0 cpu=95.17ms elapsed=820.02s tid=0x00007fcc8c01c000 nid=0x16e4 runnable [0x00007fcc9e2e7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2fc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2f70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#13]" #57 daemon prio=5 os_prio=0 cpu=36.60ms elapsed=820.01s tid=0x00007fcc7401c000 nid=0x16e5 runnable [0x00007fcc9e1e6000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2058> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2000> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#14]" #58 daemon prio=5 os_prio=0 cpu=12177.04ms elapsed=820.01s tid=0x00007fcc8c01e000 nid=0x16e6 runnable [0x00007fcc9e0e5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c27c0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c26d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#15]" #59 daemon prio=5 os_prio=0 cpu=26.74ms elapsed=820.01s tid=0x00007fcc7001a000 nid=0x16e7 runnable [0x00007fcc9dfe4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c5988> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c5930> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#16]" #60 daemon prio=5 os_prio=0 cpu=102.84ms elapsed=820.01s tid=0x00007fcc7401e000 nid=0x16e8 runnable [0x00007fcc9dee3000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e78> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3e20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#17]" #61 daemon prio=5 os_prio=0 cpu=209.72ms elapsed=820.01s tid=0x00007fcc7001c000 nid=0x16e9 runnable [0x00007fcc9dde2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c8b00> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c8a10> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#18]" #62 daemon prio=5 os_prio=0 cpu=375.49ms elapsed=820.01s tid=0x00007fcc8c020000 nid=0x16ea runnable [0x00007fcc9dce1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c9278> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c9220> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#19]" #63 daemon prio=5 os_prio=0 cpu=76.82ms elapsed=820.01s tid=0x00007fcc74020000 nid=0x16eb runnable [0x00007fcc9dbe0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7360> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c7308> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#20]" #64 daemon prio=5 os_prio=0 cpu=70.84ms elapsed=820.01s tid=0x00007fcc7001f000 nid=0x16ec runnable [0x00007fcc9dadf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c6b28> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c6ad0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#21]" #66 daemon prio=5 os_prio=0 cpu=297.89ms elapsed=820.01s tid=0x00007fcc8c023000 nid=0x16ed runnable [0x00007fcc9d9de000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2828> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c27d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#22]" #65 daemon prio=5 os_prio=0 cpu=43925.76ms elapsed=820.01s tid=0x00007fcc74021800 nid=0x16ee runnable [0x00007fcc9d8dd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c3c861b0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35d3910> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#23]" #67 daemon prio=5 os_prio=0 cpu=92.04ms elapsed=820.01s tid=0x00007fcc8c024800 nid=0x16ef runnable [0x00007fcc9d7dc000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7460> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c7408> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#24]" #68 daemon prio=5 os_prio=0 cpu=13150.90ms elapsed=820.01s tid=0x00007fcc74023800 nid=0x16f0 runnable [0x00007fcc9d6db000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3f78> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3f20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][ml_utility][T#1]" #69 daemon prio=5 os_prio=0 cpu=50.56ms elapsed=819.12s tid=0x00007fcccc002800 nid=0x16f1 waiting on condition [0x00007fcc9fefd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0000e78> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][AsyncLucenePersistedState#updateTask][T#1]" #70 daemon prio=5 os_prio=0 cpu=926.06ms elapsed=819.11s tid=0x00007fcc8c031800 nid=0x16f2 runnable [0x00007fcc9fcfa000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:410)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:406)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:254)
at org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:44)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:118)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:130)
at org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.fieldsWriter(Lucene87StoredFieldsFormat.java:141)
at org.apache.lucene.index.StoredFieldsConsumer.initStoredFieldsWriter(StoredFieldsConsumer.java:48)
at org.apache.lucene.index.StoredFieldsConsumer.startDocument(StoredFieldsConsumer.java:55)
at org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:449)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:485)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208)
at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:419)
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1757)
at org.elasticsearch.gateway.PersistedClusterStateService$MetadataIndexWriter.updateIndexMetadataDocument(PersistedClusterStateService.java:483)
at org.elasticsearch.gateway.PersistedClusterStateService$Writer.updateMetadata(PersistedClusterStateService.java:668)
at org.elasticsearch.gateway.PersistedClusterStateService$Writer.writeIncrementalStateAndCommit(PersistedClusterStateService.java:602)
at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setLastAcceptedState(GatewayMetaState.java:543)
at org.elasticsearch.gateway.GatewayMetaState$AsyncLucenePersistedState$1.doRun(GatewayMetaState.java:428)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#1]" #71 daemon prio=5 os_prio=0 cpu=1920.75ms elapsed=818.73s tid=0x00007fcc60121800 nid=0x16f3 waiting on condition [0x00007fcc9fdfc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][trigger_engine_scheduler][T#1]" #72 daemon prio=5 os_prio=0 cpu=0.25ms elapsed=817.85s tid=0x00007fcc6806e000 nid=0x16f9 waiting on condition [0x00007fcc9ce0a000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c9320> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[keepAlive/7.10.1]" #21 prio=5 os_prio=0 cpu=0.32ms elapsed=817.83s tid=0x00007fcca902c000 nid=0x16fa waiting on condition [0x00007fcc9cd09000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c001c548> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11.0.12/AbstractQueuedSynchronizer.java:885)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1345)
at java.util.concurrent.CountDownLatch.await(java.base@11.0.12/CountDownLatch.java:232)
at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:89)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"DestroyJavaVM" #73 prio=5 os_prio=0 cpu=13626.15ms elapsed=817.83s tid=0x00007fcdd4019800 nid=0x1620 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][DanglingIndices#updateTask][T#1]" #74 daemon prio=5 os_prio=0 cpu=118.86ms elapsed=817.43s tid=0x00007fcc6811b000 nid=0x16fb runnable [0x00007fcc9fffd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:410)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:406)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:254)
at org.apache.lucene.store.Directory.copyFrom(Directory.java:183)
at org.elasticsearch.gateway.MetadataStateFormat.copyStateToExtraLocations(MetadataStateFormat.java:140)
at org.elasticsearch.gateway.MetadataStateFormat.write(MetadataStateFormat.java:244)
at org.elasticsearch.gateway.MetadataStateFormat.writeAndCleanup(MetadataStateFormat.java:185)
at org.elasticsearch.index.IndexService.writeDanglingIndicesInfo(IndexService.java:353)
- locked <0x00000001ca336060> (a org.elasticsearch.index.IndexService)
at org.elasticsearch.indices.IndicesService$6.doRun(IndicesService.java:1581)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#7]" #75 daemon prio=5 os_prio=0 cpu=31166.98ms elapsed=817.36s tid=0x00007fcc24101800 nid=0x16fd waiting on condition [0x00007fcc9cf0b000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#8]" #76 daemon prio=5 os_prio=0 cpu=31748.27ms elapsed=817.33s tid=0x00007fcc20106800 nid=0x16fe waiting on condition [0x00007fcc9d1d6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#9]" #77 daemon prio=5 os_prio=0 cpu=29433.05ms elapsed=817.33s tid=0x00007fcca0001800 nid=0x16ff runnable [0x00007fcc9d0d5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.IOUtil.write1(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.wakeup(java.base@11.0.12/EPollSelectorImpl.java:254)
- locked <0x00000001c42ac228> (a java.lang.Object)
at io.netty.channel.nio.NioEventLoop.wakeup(NioEventLoop.java:777)
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:849)
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:818)
at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:989)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:796)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:758)
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1020)
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:299)
at org.elasticsearch.transport.netty4.Netty4TcpChannel.sendMessage(Netty4TcpChannel.java:146)
at org.elasticsearch.transport.OutboundHandler.internalSend(OutboundHandler.java:133)
at org.elasticsearch.transport.OutboundHandler.sendMessage(OutboundHandler.java:125)
at org.elasticsearch.transport.OutboundHandler.sendResponse(OutboundHandler.java:105)
at org.elasticsearch.transport.TcpTransportChannel.sendResponse(TcpTransportChannel.java:63)
at org.elasticsearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:52)
at org.elasticsearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:43)
at org.elasticsearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:27)
at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163)
at org.elasticsearch.indices.recovery.RecoveryRequestTracker$1.onResponse(RecoveryRequestTracker.java:68)
at org.elasticsearch.indices.recovery.RecoveryRequestTracker$1.onResponse(RecoveryRequestTracker.java:64)
at org.elasticsearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:112)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:224)
at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106)
at org.elasticsearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:98)
at org.elasticsearch.common.util.concurrent.ListenableFuture$$Lambda$4998/0x0000000840f01040.accept(Unknown Source)
at java.util.ArrayList.forEach(java.base@11.0.12/ArrayList.java:1541)
at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:98)
- locked <0x0000000698a07dd0> (a org.elasticsearch.common.util.concurrent.ListenableFuture)
at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:144)
at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:127)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:479)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#10]" #78 daemon prio=5 os_prio=0 cpu=31541.52ms elapsed=817.33s tid=0x00007fcc24103000 nid=0x1700 waiting on condition [0x00007fcc9c93e000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#1]" #79 daemon prio=5 os_prio=0 cpu=10702.30ms elapsed=817.16s tid=0x00007fcbf80d7800 nid=0x1701 waiting on condition [0x00007fcc9c63d000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#11]" #81 daemon prio=5 os_prio=0 cpu=13505.60ms elapsed=815.36s tid=0x00007fcc48101800 nid=0x1704 runnable [0x00007fcd6e90a000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.monitorFSHealth(FsHealthService.java:171)
at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.run(FsHealthService.java:146)
at org.elasticsearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:213)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#12]" #82 daemon prio=5 os_prio=0 cpu=30613.15ms elapsed=815.36s tid=0x00007fcc10101800 nid=0x1705 waiting on condition [0x00007fcd6eb0c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][refresh][T#1]" #83 daemon prio=5 os_prio=0 cpu=467.62ms elapsed=812.42s tid=0x00007fcccc004800 nid=0x170f waiting on condition [0x00007fcce01f7000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#1]" #84 daemon prio=5 os_prio=0 cpu=42.20ms elapsed=808.83s tid=0x00007fccd415d800 nid=0x1714 waiting on condition [0x00007fcce2dfb000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#13]" #85 daemon prio=5 os_prio=0 cpu=11555.64ms elapsed=806.15s tid=0x00007fcc08105000 nid=0x171a runnable [0x00007fcbc4bbf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#2]" #86 daemon prio=5 os_prio=0 cpu=14.51ms elapsed=798.85s tid=0x00007fcc7c027800 nid=0x1729 waiting on condition [0x00007fcd6ea0b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Attach Listener" #87 daemon prio=9 os_prio=0 cpu=2.65ms elapsed=798.09s tid=0x00007fcd54001000 nid=0x1743 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][write][T#3]" #88 daemon prio=5 os_prio=0 cpu=24.86ms elapsed=788.85s tid=0x00007fcc7403f000 nid=0x1753 waiting on condition [0x00007fcbc4cc0000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#2]" #89 daemon prio=5 os_prio=0 cpu=1910.61ms elapsed=787.40s tid=0x00007fcc94063000 nid=0x1754 waiting on condition [0x00007fcc9c43b000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#4]" #90 daemon prio=5 os_prio=0 cpu=24.96ms elapsed=778.86s tid=0x00007fcc7c01e800 nid=0x1774 waiting on condition [0x00007fcbc4abe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#5]" #91 daemon prio=5 os_prio=0 cpu=14.55ms elapsed=768.84s tid=0x00007fcbe805a000 nid=0x1787 waiting on condition [0x00007fcbc49bd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#6]" #92 daemon prio=5 os_prio=0 cpu=8.29ms elapsed=758.86s tid=0x00007fcc70072800 nid=0x1793 waiting on condition [0x00007fcc9f5f8000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#7]" #93 daemon prio=5 os_prio=0 cpu=15.41ms elapsed=748.85s tid=0x00007fccd4022800 nid=0x17d1 waiting on condition [0x00007fcc9f6f9000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#8]" #94 daemon prio=5 os_prio=0 cpu=18.53ms elapsed=738.85s tid=0x00007fcc5c05e800 nid=0x17fa waiting on condition [0x00007fcb85eff000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#9]" #95 daemon prio=5 os_prio=0 cpu=6.71ms elapsed=728.85s tid=0x00007fcc5c05f000 nid=0x1816 waiting on condition [0x00007fcc9f4f7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#10]" #96 daemon prio=5 os_prio=0 cpu=20.78ms elapsed=718.85s tid=0x00007fcc7c074000 nid=0x181d waiting on condition [0x00007fcbc46bc000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#11]" #100 daemon prio=5 os_prio=0 cpu=14.90ms elapsed=708.86s tid=0x00007fcbe8060800 nid=0x1828 waiting on condition [0x00007fcb85dfe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#12]" #101 daemon prio=5 os_prio=0 cpu=13.44ms elapsed=698.84s tid=0x00007fcc7404e000 nid=0x1835 waiting on condition [0x00007fcaf0434000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#13]" #102 daemon prio=5 os_prio=0 cpu=16.84ms elapsed=688.85s tid=0x00007fcc04027000 nid=0x184e waiting on condition [0x00007fcb50d76000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#14]" #103 daemon prio=5 os_prio=0 cpu=15.74ms elapsed=678.84s tid=0x00007fcbf81f4000 nid=0x1861 waiting on condition [0x00007fcaf2f87000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][management][T#3]" #104 daemon prio=5 os_prio=0 cpu=1537.61ms elapsed=671.93s tid=0x00007fcc94066800 nid=0x1869 waiting on condition [0x00007fca33a0c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#15]" #105 daemon prio=5 os_prio=0 cpu=9.76ms elapsed=668.85s tid=0x00007fcc70076800 nid=0x186c waiting on condition [0x00007fca3390b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#16]" #106 daemon prio=5 os_prio=0 cpu=14.20ms elapsed=658.86s tid=0x00007fcc8c003800 nid=0x1878 waiting on condition [0x00007fca3380a000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#17]" #107 daemon prio=5 os_prio=0 cpu=12.13ms elapsed=648.85s tid=0x00007fcbf80d4000 nid=0x1886 waiting on condition [0x00007fca33709000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#18]" #108 daemon prio=5 os_prio=0 cpu=31.30ms elapsed=638.85s tid=0x00007fcc04028000 nid=0x188d waiting on condition [0x00007fca33608000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#19]" #109 daemon prio=5 os_prio=0 cpu=19.64ms elapsed=628.85s tid=0x00007fcbfc224800 nid=0x1895 waiting on condition [0x00007fca33507000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#20]" #110 daemon prio=5 os_prio=0 cpu=8.91ms elapsed=618.84s tid=0x00007fcbfc21f800 nid=0x1898 waiting on condition [0x00007fca33406000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#21]" #111 daemon prio=5 os_prio=0 cpu=15.89ms elapsed=608.84s tid=0x00007fcc04025000 nid=0x18a1 waiting on condition [0x00007fca33305000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#22]" #112 daemon prio=5 os_prio=0 cpu=7.51ms elapsed=598.86s tid=0x00007fcbe8062000 nid=0x18bc waiting on condition [0x00007fca33002000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#23]" #113 daemon prio=5 os_prio=0 cpu=10.06ms elapsed=588.85s tid=0x00007fcbfc221000 nid=0x18ce waiting on condition [0x00007fca33103000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][write][T#24]" #114 daemon prio=5 os_prio=0 cpu=6.42ms elapsed=578.85s tid=0x00007fcc7007e800 nid=0x18d1 waiting on condition [0x00007fca33204000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][flush][T#2]" #116 daemon prio=5 os_prio=0 cpu=2.15ms elapsed=512.66s tid=0x00007fcccc00f800 nid=0x190c waiting on condition [0x00007fca32e00000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c003e7a8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][refresh][T#2]" #118 daemon prio=5 os_prio=0 cpu=364.71ms elapsed=427.41s tid=0x00007fcccc012000 nid=0x1979 waiting on condition [0x00007fc6aa8fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][warmer][T#6]" #119 daemon prio=5 os_prio=0 cpu=5848.49ms elapsed=404.25s tid=0x00007fcc700ae000 nid=0x19ae waiting on condition [0x00007fcaf0535000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][warmer][T#7]" #120 daemon prio=5 os_prio=0 cpu=11485.40ms elapsed=404.25s tid=0x00007fcc70095800 nid=0x19af waiting on condition [0x00007fcaf0636000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][warmer][T#8]" #121 daemon prio=5 os_prio=0 cpu=6217.83ms elapsed=404.25s tid=0x00007fcc700a8800 nid=0x19b0 waiting on condition [0x00007fc6aaafe000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][warmer][T#9]" #122 daemon prio=5 os_prio=0 cpu=8990.16ms elapsed=404.25s tid=0x00007fcc700a9000 nid=0x19b1 waiting on condition [0x00007fc6aa9fd000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#1]" #123 daemon prio=5 os_prio=0 cpu=4439.86ms elapsed=402.41s tid=0x00007fcc4011d000 nid=0x19b3 waiting on condition [0x00007fcc9c53c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#2]" #124 daemon prio=5 os_prio=0 cpu=5101.05ms elapsed=402.41s tid=0x00007fcc60144800 nid=0x19b4 waiting on condition [0x00007fc6aa029000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#3]" #125 daemon prio=5 os_prio=0 cpu=273.38ms elapsed=402.40s tid=0x00007fcc60146000 nid=0x19b5 waiting on condition [0x00007fca301c7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#4]" #126 daemon prio=5 os_prio=0 cpu=843.60ms elapsed=402.36s tid=0x00007fcc64123000 nid=0x19b6 waiting on condition [0x00007fc6a9f28000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#5]" #127 daemon prio=5 os_prio=0 cpu=961.18ms elapsed=402.20s tid=0x00007fcc50101800 nid=0x19b8 waiting on condition [0x00007fc6a9d26000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#6]" #128 daemon prio=5 os_prio=0 cpu=905.57ms elapsed=402.20s tid=0x00007fcc10105800 nid=0x19b9 waiting on condition [0x00007fc6a9c25000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#7]" #129 daemon prio=5 os_prio=0 cpu=494.85ms elapsed=402.19s tid=0x00007fcc38106000 nid=0x19ba waiting on condition [0x00007fc6a9b24000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][search][T#8]" #130 daemon prio=5 os_prio=0 cpu=478.74ms elapsed=402.14s tid=0x00007fcc1c104800 nid=0x19bb waiting on condition [0x00007fc6a9a23000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#9]" #131 daemon prio=5 os_prio=0 cpu=282.05ms elapsed=402.13s tid=0x00007fcc4011e000 nid=0x19bc waiting on condition [0x00007fc6a9722000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#10]" #132 daemon prio=5 os_prio=0 cpu=4543.26ms elapsed=395.36s tid=0x00007fcc6412c000 nid=0x19bf waiting on condition [0x00007fcaf0737000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#11]" #133 daemon prio=5 os_prio=0 cpu=1457.51ms elapsed=394.14s tid=0x00007fcc60147800 nid=0x19c6 waiting on condition [0x00007fc6a9e27000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#12]" #134 daemon prio=5 os_prio=0 cpu=868.42ms elapsed=390.30s tid=0x00007fcc4c101000 nid=0x19cc waiting on condition [0x00007fc6a9621000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#13]" #135 daemon prio=5 os_prio=0 cpu=741.77ms elapsed=389.58s tid=0x00007fcc1010b000 nid=0x19cd waiting on condition [0x00007fc6a9520000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#14]" #136 daemon prio=5 os_prio=0 cpu=912.95ms elapsed=376.75s tid=0x00007fcc50105000 nid=0x19da waiting on condition [0x00007fc6a941f000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#15]" #137 daemon prio=5 os_prio=0 cpu=271.93ms elapsed=376.19s tid=0x00007fcc4011f800 nid=0x19db waiting on condition [0x00007fc6a931e000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#16]" #138 daemon prio=5 os_prio=0 cpu=866.59ms elapsed=370.30s tid=0x00007fcc1c106000 nid=0x19dc waiting on condition [0x00007fc6a921d000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#17]" #139 daemon prio=5 os_prio=0 cpu=3605.89ms elapsed=370.24s tid=0x00007fcc50107000 nid=0x19dd waiting on condition [0x00007fc6a911c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#18]" #140 daemon prio=5 os_prio=0 cpu=423.05ms elapsed=370.22s tid=0x00007fcc1010d000 nid=0x19de waiting on condition [0x00007fc6a901b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#19]" #141 daemon prio=5 os_prio=0 cpu=4236.67ms elapsed=370.22s tid=0x00007fcc28105000 nid=0x19df waiting on condition [0x00007fc6a8f1a000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#20]" #142 daemon prio=5 os_prio=0 cpu=782.02ms elapsed=370.20s tid=0x00007fcc38108000 nid=0x19e0 waiting on condition [0x00007fc6a8e19000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#21]" #143 daemon prio=5 os_prio=0 cpu=729.44ms elapsed=370.14s tid=0x00007fcc1010f000 nid=0x19e1 waiting on condition [0x00007fc6a8d18000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#22]" #144 daemon prio=5 os_prio=0 cpu=3981.50ms elapsed=369.94s tid=0x00007fcca0006800 nid=0x19e2 waiting on condition [0x00007fc6a8c17000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#23]" #145 daemon prio=5 os_prio=0 cpu=258.15ms elapsed=369.81s tid=0x00007fcc44101800 nid=0x19e5 waiting on condition [0x00007fc6a8914000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#24]" #146 daemon prio=5 os_prio=0 cpu=343.67ms elapsed=369.69s tid=0x00007fcc641ad800 nid=0x19e6 waiting on condition [0x00007fc6a8813000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#25]" #147 daemon prio=5 os_prio=0 cpu=450.42ms elapsed=369.65s tid=0x00007fcc10111000 nid=0x19e7 waiting on condition [0x00007fc6a8712000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#26]" #148 daemon prio=5 os_prio=0 cpu=376.39ms elapsed=369.60s tid=0x00007fcc1c107800 nid=0x19e8 waiting on condition [0x00007fc6a8611000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#27]" #149 daemon prio=5 os_prio=0 cpu=314.53ms elapsed=369.06s tid=0x00007fcc4c103000 nid=0x19e9 waiting on condition [0x00007fc6a8510000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#28]" #150 daemon prio=5 os_prio=0 cpu=538.15ms elapsed=368.79s tid=0x00007fcc28107000 nid=0x19f0 waiting on condition [0x00007fc6a8b16000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#29]" #151 daemon prio=5 os_prio=0 cpu=547.58ms elapsed=357.94s tid=0x00007fcc28108800 nid=0x19f4 waiting on condition [0x00007fc6a8a15000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#30]" #152 daemon prio=5 os_prio=0 cpu=355.21ms elapsed=357.93s tid=0x00007fcc4c113800 nid=0x19f5 waiting on condition [0x00007fc6a840f000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#31]" #153 daemon prio=5 os_prio=0 cpu=245.54ms elapsed=357.93s tid=0x00007fcc1c110000 nid=0x19f6 waiting on condition [0x00007fc6a830e000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#32]" #154 daemon prio=5 os_prio=0 cpu=603.85ms elapsed=357.93s tid=0x00007fcc10113000 nid=0x19f7 waiting on condition [0x00007fc6a820d000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#33]" #155 daemon prio=5 os_prio=0 cpu=239.24ms elapsed=357.91s tid=0x00007fcca010e800 nid=0x19f8 waiting on condition [0x00007fc6a810c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#34]" #156 daemon prio=5 os_prio=0 cpu=419.07ms elapsed=357.91s tid=0x00007fcc40121800 nid=0x19f9 waiting on condition [0x00007fc697cba000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#35]" #157 daemon prio=5 os_prio=0 cpu=1764.16ms elapsed=357.90s tid=0x00007fcca0110000 nid=0x19fa waiting on condition [0x00007fc697bb9000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#36]" #158 daemon prio=5 os_prio=0 cpu=4857.75ms elapsed=357.88s tid=0x00007fcc4c115800 nid=0x19fb waiting on condition [0x00007fc697ab8000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#37]" #159 daemon prio=5 os_prio=0 cpu=815.78ms elapsed=357.88s tid=0x00007fcc64177800 nid=0x19fc waiting on condition [0x00007fc6979b7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#4]" #162 daemon prio=5 os_prio=0 cpu=196.40ms elapsed=129.49s tid=0x00007fcc14103000 nid=0x1b74 waiting on condition [0x00007fc6977b5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][[groot_news_bucket_0_v3][0]: Lucene Merge Thread #3]" #164 daemon prio=5 os_prio=0 cpu=2206.06ms elapsed=2.55s tid=0x00007fc6a0001800 nid=0x1bee runnable [0x00007fca32f00000]
java.lang.Thread.State: RUNNABLE
at org.apache.lucene.store.ByteBuffersDataOutput.writeBytes(ByteBuffersDataOutput.java:174)
at org.apache.lucene.util.compress.LZ4.encodeLiterals(LZ4.java:159)
at org.apache.lucene.util.compress.LZ4.encodeSequence(LZ4.java:172)
at org.apache.lucene.util.compress.LZ4.compressWithDictionary(LZ4.java:479)
at org.apache.lucene.codecs.lucene87.LZ4WithPresetDictCompressionMode$LZ4WithPresetDictCompressor.doCompress(LZ4WithPresetDictCompressionMode.java:162)
at org.apache.lucene.codecs.lucene87.LZ4WithPresetDictCompressionMode$LZ4WithPresetDictCompressor.compress(LZ4WithPresetDictCompressionMode.java:185)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.flush(CompressingStoredFieldsWriter.java:248)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.finishDocument(CompressingStoredFieldsWriter.java:169)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:654)
at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:228)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4760)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4364)
at org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:5923)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:100)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:682)
"VM Thread" os_prio=0 cpu=707.43ms elapsed=836.80s tid=0x00007fcdd5f1f800 nid=0x163b runnable
"GC Thread#0" os_prio=0 cpu=3766.85ms elapsed=839.95s tid=0x00007fcdd4033000 nid=0x1621 runnable
"GC Thread#1" os_prio=0 cpu=661.16ms elapsed=832.78s tid=0x00007fcd64001000 nid=0x168d runnable
"GC Thread#2" os_prio=0 cpu=662.13ms elapsed=832.78s tid=0x00007fcd64002000 nid=0x168e runnable
"GC Thread#3" os_prio=0 cpu=640.72ms elapsed=832.77s tid=0x00007fcd64003000 nid=0x168f runnable
"GC Thread#4" os_prio=0 cpu=657.77ms elapsed=832.77s tid=0x00007fcd64004000 nid=0x1690 runnable
"GC Thread#5" os_prio=0 cpu=675.04ms elapsed=832.77s tid=0x00007fcd64005000 nid=0x1691 runnable
"GC Thread#6" os_prio=0 cpu=644.23ms elapsed=832.77s tid=0x00007fcd64006800 nid=0x1692 runnable
"GC Thread#7" os_prio=0 cpu=640.00ms elapsed=832.77s tid=0x00007fcd64008000 nid=0x1693 runnable
"GC Thread#8" os_prio=0 cpu=660.49ms elapsed=832.77s tid=0x00007fcd64009800 nid=0x1694 runnable
"GC Thread#9" os_prio=0 cpu=643.25ms elapsed=832.77s tid=0x00007fcd6400b000 nid=0x1695 runnable
"GC Thread#10" os_prio=0 cpu=668.77ms elapsed=832.77s tid=0x00007fcd6400c800 nid=0x1696 runnable
"GC Thread#11" os_prio=0 cpu=638.30ms elapsed=832.77s tid=0x00007fcd6400e000 nid=0x1697 runnable
"GC Thread#12" os_prio=0 cpu=667.65ms elapsed=832.77s tid=0x00007fcd6400f800 nid=0x1698 runnable
"GC Thread#13" os_prio=0 cpu=645.43ms elapsed=832.77s tid=0x00007fcd64011000 nid=0x1699 runnable
"GC Thread#14" os_prio=0 cpu=658.74ms elapsed=832.77s tid=0x00007fcd64012800 nid=0x169a runnable
"GC Thread#15" os_prio=0 cpu=812.89ms elapsed=832.77s tid=0x00007fcd64014000 nid=0x169b runnable
"GC Thread#16" os_prio=0 cpu=635.27ms elapsed=832.77s tid=0x00007fcd64015800 nid=0x169c runnable
"GC Thread#17" os_prio=0 cpu=649.69ms elapsed=832.77s tid=0x00007fcd64017000 nid=0x169d runnable
"G1 Main Marker" os_prio=0 cpu=26.56ms elapsed=839.94s tid=0x00007fcdd4069000 nid=0x1622 runnable
"G1 Conc#0" os_prio=0 cpu=197.29ms elapsed=839.94s tid=0x00007fcdd406b000 nid=0x1623 runnable
"G1 Conc#1" os_prio=0 cpu=200.65ms elapsed=831.69s tid=0x00007fcd78001000 nid=0x169e runnable
"G1 Conc#2" os_prio=0 cpu=212.28ms elapsed=831.69s tid=0x00007fcd78002000 nid=0x169f runnable
"G1 Conc#3" os_prio=0 cpu=195.82ms elapsed=831.69s tid=0x00007fcd78003800 nid=0x16a0 runnable
"G1 Conc#4" os_prio=0 cpu=192.30ms elapsed=831.69s tid=0x00007fcd78005000 nid=0x16a1 runnable
"G1 Refine#0" os_prio=0 cpu=13.55ms elapsed=836.80s tid=0x00007fcdd5ee4800 nid=0x1639 runnable
"G1 Refine#1" os_prio=0 cpu=3.07ms elapsed=818.58s tid=0x00007fcd74001000 nid=0x16f4 runnable
"G1 Refine#2" os_prio=0 cpu=1.48ms elapsed=818.58s tid=0x00007fcc0c001000 nid=0x16f5 runnable
"G1 Young RemSet Sampling" os_prio=0 cpu=1505.87ms elapsed=836.80s tid=0x00007fcdd5ee6800 nid=0x163a runnable
"VM Periodic Task Thread" os_prio=0 cpu=380.89ms elapsed=836.78s tid=0x00007fcdd5f63000 nid=0x1644 waiting on condition
JNI global refs: 42, weak refs: 45
2021-10-14 12:41:15
Full thread dump OpenJDK 64-Bit Server VM (11.0.12+7-post-Debian-2 mixed mode, sharing):
Threads class SMR info:
_java_thread_list=0x00007fc6a0002be0, length=135, elements={
0x00007fcdd5f22800, 0x00007fcdd5f24800, 0x00007fcdd5f2a000, 0x00007fcdd5f2c000,
0x00007fcdd5f2e000, 0x00007fcdd5f30000, 0x00007fcdd5f32000, 0x00007fcdd5f65800,
0x00007fcdd6603800, 0x00007fcdd7613000, 0x00007fcdd761b800, 0x00007fcca8415800,
0x00007fcdd670e000, 0x00007fcdd7a88000, 0x00007fcdd7a81800, 0x00007fcdd778f000,
0x00007fcca8dd1000, 0x00007fcca8dcd800, 0x00007fcca900e000, 0x00007fcc8c009800,
0x00007fcc8c00b800, 0x00007fcc8c00d800, 0x00007fcca9015800, 0x00007fcc7c004000,
0x00007fcc70006000, 0x00007fcc74002800, 0x00007fcc70012000, 0x00007fcc74013800,
0x00007fcc8c019000, 0x00007fcc74018800, 0x00007fcc70017000, 0x00007fcc8c01a800,
0x00007fcc7401a000, 0x00007fcc70018800, 0x00007fcc8c01c000, 0x00007fcc7401c000,
0x00007fcc8c01e000, 0x00007fcc7001a000, 0x00007fcc7401e000, 0x00007fcc7001c000,
0x00007fcc8c020000, 0x00007fcc74020000, 0x00007fcc7001f000, 0x00007fcc8c023000,
0x00007fcc74021800, 0x00007fcc8c024800, 0x00007fcc74023800, 0x00007fcccc002800,
0x00007fcc8c031800, 0x00007fcc60121800, 0x00007fcc6806e000, 0x00007fcca902c000,
0x00007fcdd4019800, 0x00007fcc6811b000, 0x00007fcc24101800, 0x00007fcc20106800,
0x00007fcca0001800, 0x00007fcc24103000, 0x00007fcbf80d7800, 0x00007fcc48101800,
0x00007fcc10101800, 0x00007fcccc004800, 0x00007fccd415d800, 0x00007fcc08105000,
0x00007fcc7c027800, 0x00007fcd54001000, 0x00007fcc7403f000, 0x00007fcc94063000,
0x00007fcc7c01e800, 0x00007fcbe805a000, 0x00007fcc70072800, 0x00007fccd4022800,
0x00007fcc5c05e800, 0x00007fcc5c05f000, 0x00007fcc7c074000, 0x00007fcbe8060800,
0x00007fcc7404e000, 0x00007fcc04027000, 0x00007fcbf81f4000, 0x00007fcc94066800,
0x00007fcc70076800, 0x00007fcc8c003800, 0x00007fcbf80d4000, 0x00007fcc04028000,
0x00007fcbfc224800, 0x00007fcbfc21f800, 0x00007fcc04025000, 0x00007fcbe8062000,
0x00007fcbfc221000, 0x00007fcc7007e800, 0x00007fcccc00f800, 0x00007fcccc012000,
0x00007fcc700ae000, 0x00007fcc70095800, 0x00007fcc700a8800, 0x00007fcc700a9000,
0x00007fcc4011d000, 0x00007fcc60144800, 0x00007fcc60146000, 0x00007fcc64123000,
0x00007fcc50101800, 0x00007fcc10105800, 0x00007fcc38106000, 0x00007fcc1c104800,
0x00007fcc4011e000, 0x00007fcc6412c000, 0x00007fcc60147800, 0x00007fcc4c101000,
0x00007fcc1010b000, 0x00007fcc50105000, 0x00007fcc4011f800, 0x00007fcc1c106000,
0x00007fcc50107000, 0x00007fcc1010d000, 0x00007fcc28105000, 0x00007fcc38108000,
0x00007fcc1010f000, 0x00007fcca0006800, 0x00007fcc44101800, 0x00007fcc641ad800,
0x00007fcc10111000, 0x00007fcc1c107800, 0x00007fcc4c103000, 0x00007fcc28107000,
0x00007fcc28108800, 0x00007fcc4c113800, 0x00007fcc1c110000, 0x00007fcc10113000,
0x00007fcca010e800, 0x00007fcc40121800, 0x00007fcca0110000, 0x00007fcc4c115800,
0x00007fcc64177800, 0x00007fcc14103000, 0x00007fc6a0001800
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=12.38ms elapsed=849.41s tid=0x00007fcdd5f22800 nid=0x163c waiting on condition [0x00007fcd6f7fe000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@11.0.12/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@11.0.12/Reference.java:241)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@11.0.12/Reference.java:213)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=1.34ms elapsed=849.41s tid=0x00007fcdd5f24800 nid=0x163d in Object.wait() [0x00007fcd6f6fd000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.12/Finalizer.java:170)
"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.24ms elapsed=849.41s tid=0x00007fcdd5f2a000 nid=0x163e runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Service Thread" #5 daemon prio=9 os_prio=0 cpu=0.17ms elapsed=849.41s tid=0x00007fcdd5f2c000 nid=0x163f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=95140.96ms elapsed=849.41s tid=0x00007fcdd5f2e000 nid=0x1640 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Compiling: 26279 % 4 org.apache.lucene.codecs.lucene80.Lucene80DocValuesConsumer::addTermsDict @ 115 (442 bytes)
"C1 CompilerThread0" #14 daemon prio=9 os_prio=0 cpu=5685.67ms elapsed=849.41s tid=0x00007fcdd5f30000 nid=0x1641 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"Sweeper thread" #18 daemon prio=9 os_prio=0 cpu=295.45ms elapsed=849.40s tid=0x00007fcdd5f32000 nid=0x1642 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Common-Cleaner" #19 daemon prio=8 os_prio=0 cpu=9.20ms elapsed=849.39s tid=0x00007fcdd5f65800 nid=0x1645 in Object.wait() [0x00007fcd6ec0d000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at jdk.internal.ref.CleanerImpl.run(java.base@11.0.12/CleanerImpl.java:148)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
at jdk.internal.misc.InnocuousThread.run(java.base@11.0.12/InnocuousThread.java:134)
"process reaper" #24 daemon prio=10 os_prio=0 cpu=0.32ms elapsed=848.06s tid=0x00007fcdd6603800 nid=0x1658 runnable [0x00007fcd6ded2000]
java.lang.Thread.State: RUNNABLE
at java.lang.ProcessHandleImpl.waitForProcessExit0(java.base@11.0.12/Native Method)
at java.lang.ProcessHandleImpl$1.run(java.base@11.0.12/ProcessHandleImpl.java:138)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][[timer]]" #28 daemon prio=5 os_prio=0 cpu=99.88ms elapsed=843.07s tid=0x00007fcdd7613000 nid=0x16ac waiting on condition [0x00007fcd6e7d7000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.threadpool.ThreadPool$CachedTimeThread.run(ThreadPool.java:595)
"elasticsearch[es-node05-a][scheduler][T#1]" #29 daemon prio=5 os_prio=0 cpu=893.48ms elapsed=843.06s tid=0x00007fcdd761b800 nid=0x16ad waiting on condition [0x00007fcd6e6d6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00049e0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ml-cpp-log-tail-thread" #30 daemon prio=5 os_prio=0 cpu=7.66ms elapsed=840.03s tid=0x00007fcca8415800 nid=0x16b8 runnable [0x00007fcd6e4d4000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(java.base@11.0.12/Native Method)
at java.io.FileInputStream.read(java.base@11.0.12/FileInputStream.java:257)
at org.elasticsearch.xpack.ml.process.logging.CppLogMessageHandler.tailStream(CppLogMessageHandler.java:105)
at org.elasticsearch.xpack.ml.process.NativeController.lambda$tailLogsInThread$0(NativeController.java:74)
at org.elasticsearch.xpack.ml.process.NativeController$$Lambda$2826/0x000000084095b040.run(Unknown Source)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Connection evictor" #31 daemon prio=5 os_prio=0 cpu=7.49ms elapsed=839.24s tid=0x00007fcdd670e000 nid=0x16be waiting on condition [0x00007fcd6e3d3000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[scheduler][T#1]" #32 daemon prio=5 os_prio=0 cpu=47.09ms elapsed=839.13s tid=0x00007fcdd7a88000 nid=0x16bf waiting on condition [0x00007fcce32fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35cfc40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ticker-schedule-trigger-engine" #33 daemon prio=5 os_prio=0 cpu=85.13ms elapsed=839.12s tid=0x00007fcdd7a81800 nid=0x16c0 waiting on condition [0x00007fcce2cfa000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.xpack.watcher.trigger.schedule.engine.TickerScheduleTriggerEngine$Ticker.run(TickerScheduleTriggerEngine.java:193)
"elasticsearch[scheduler][T#1]" #34 daemon prio=5 os_prio=0 cpu=10.50ms elapsed=839.11s tid=0x00007fcdd778f000 nid=0x16c1 waiting on condition [0x00007fcce17f9000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c61c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#1]" #35 daemon prio=5 os_prio=0 cpu=297.36ms elapsed=837.68s tid=0x00007fcca8dd1000 nid=0x16c2 runnable [0x00007fcd6e5d5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7238> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c71e0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#2]" #36 daemon prio=5 os_prio=0 cpu=480.27ms elapsed=837.66s tid=0x00007fcca8dcd800 nid=0x16c3 runnable [0x00007fcce04f8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2678> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2588> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#1]" #37 daemon prio=5 os_prio=0 cpu=30453.11ms elapsed=832.73s tid=0x00007fcca900e000 nid=0x16d1 waiting on condition [0x00007fcc9f7fa000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#2]" #41 daemon prio=5 os_prio=0 cpu=31643.98ms elapsed=832.73s tid=0x00007fcc8c009800 nid=0x16d5 waiting on condition [0x00007fcc9f3f6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#3]" #42 daemon prio=5 os_prio=0 cpu=43089.63ms elapsed=832.73s tid=0x00007fcc8c00b800 nid=0x16d6 waiting on condition [0x00007fcc9f2f5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#4]" #43 daemon prio=5 os_prio=0 cpu=24087.08ms elapsed=832.73s tid=0x00007fcc8c00d800 nid=0x16d7 runnable [0x00007fcc9f1f4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][clusterApplierService#updateTask][T#1]" #44 daemon prio=5 os_prio=0 cpu=2034.28ms elapsed=832.73s tid=0x00007fcca9015800 nid=0x16d8 runnable [0x00007fcc9f0f2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.mkdir0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.mkdir(java.base@11.0.12/UnixNativeDispatcher.java:229)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(java.base@11.0.12/UnixFileSystemProvider.java:385)
at java.nio.file.Files.createDirectory(java.base@11.0.12/Files.java:690)
at java.nio.file.Files.createAndCheckIsDirectory(java.base@11.0.12/Files.java:797)
at java.nio.file.Files.createDirectories(java.base@11.0.12/Files.java:783)
at org.elasticsearch.index.store.FsDirectoryFactory.newDirectory(FsDirectoryFactory.java:66)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:460)
- locked <0x00000007634c3078> (a org.elasticsearch.index.IndexService)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:766)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:177)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:593)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:570)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:248)
- locked <0x00000001c3b41748> (a org.elasticsearch.indices.cluster.IndicesClusterStateService)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:510)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:500)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418)
at org.elasticsearch.cluster.service.ClusterApplierService.access$000(ClusterApplierService.java:68)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#4]" #47 daemon prio=5 os_prio=0 cpu=31999.68ms elapsed=832.72s tid=0x00007fcc7c004000 nid=0x16d9 runnable [0x00007fcc9eff2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e10> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3d20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#5]" #46 daemon prio=5 os_prio=0 cpu=182.93ms elapsed=832.72s tid=0x00007fcc70006000 nid=0x16da runnable [0x00007fcc9eef1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1dc0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1d68> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#3]" #45 daemon prio=5 os_prio=0 cpu=191.50ms elapsed=832.72s tid=0x00007fcc74002800 nid=0x16db runnable [0x00007fcc9edf0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3568> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3510> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#5]" #48 daemon prio=5 os_prio=0 cpu=32114.27ms elapsed=832.64s tid=0x00007fcc70012000 nid=0x16dc waiting on condition [0x00007fcc9eaef000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#6]" #49 daemon prio=5 os_prio=0 cpu=31302.29ms elapsed=832.64s tid=0x00007fcc74013800 nid=0x16dd waiting on condition [0x00007fcc9e9ee000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#7]" #50 daemon prio=5 os_prio=0 cpu=142.89ms elapsed=832.63s tid=0x00007fcc8c019000 nid=0x16de runnable [0x00007fcc9e8ed000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2dc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2d70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#6]" #52 daemon prio=5 os_prio=0 cpu=91.51ms elapsed=832.63s tid=0x00007fcc74018800 nid=0x16df runnable [0x00007fcc9e7ec000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c6a28> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c69d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#8]" #51 daemon prio=5 os_prio=0 cpu=42547.55ms elapsed=832.63s tid=0x00007fcc70017000 nid=0x16e0 runnable [0x00007fcc9e6eb000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c5920> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c5830> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#9]" #53 daemon prio=5 os_prio=0 cpu=184.63ms elapsed=832.63s tid=0x00007fcc8c01a800 nid=0x16e1 runnable [0x00007fcc9e5ea000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3748> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c36f0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#10]" #54 daemon prio=5 os_prio=0 cpu=32970.85ms elapsed=832.63s tid=0x00007fcc7401a000 nid=0x16e2 runnable [0x00007fcc9e4e9000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2f60> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2e70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#11]" #55 daemon prio=5 os_prio=0 cpu=86.07ms elapsed=832.63s tid=0x00007fcc70018800 nid=0x16e3 runnable [0x00007fcc9e3e8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1f58> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1f00> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#12]" #56 daemon prio=5 os_prio=0 cpu=97.12ms elapsed=832.63s tid=0x00007fcc8c01c000 nid=0x16e4 runnable [0x00007fcc9e2e7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2fc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2f70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#13]" #57 daemon prio=5 os_prio=0 cpu=37.25ms elapsed=832.63s tid=0x00007fcc7401c000 nid=0x16e5 runnable [0x00007fcc9e1e6000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2058> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2000> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#14]" #58 daemon prio=5 os_prio=0 cpu=12177.04ms elapsed=832.63s tid=0x00007fcc8c01e000 nid=0x16e6 runnable [0x00007fcc9e0e5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c27c0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c26d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#15]" #59 daemon prio=5 os_prio=0 cpu=28.20ms elapsed=832.63s tid=0x00007fcc7001a000 nid=0x16e7 runnable [0x00007fcc9dfe4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c5988> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c5930> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#16]" #60 daemon prio=5 os_prio=0 cpu=106.58ms elapsed=832.63s tid=0x00007fcc7401e000 nid=0x16e8 runnable [0x00007fcc9dee3000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e78> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3e20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#17]" #61 daemon prio=5 os_prio=0 cpu=215.44ms elapsed=832.63s tid=0x00007fcc7001c000 nid=0x16e9 runnable [0x00007fcc9dde2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c8b00> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c8a10> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#18]" #62 daemon prio=5 os_prio=0 cpu=379.70ms elapsed=832.63s tid=0x00007fcc8c020000 nid=0x16ea runnable [0x00007fcc9dce1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c9278> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c9220> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#19]" #63 daemon prio=5 os_prio=0 cpu=90.98ms elapsed=832.63s tid=0x00007fcc74020000 nid=0x16eb runnable [0x00007fcc9dbe0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7360> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c7308> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#20]" #64 daemon prio=5 os_prio=0 cpu=77.98ms elapsed=832.63s tid=0x00007fcc7001f000 nid=0x16ec runnable [0x00007fcc9dadf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c6b28> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c6ad0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#21]" #66 daemon prio=5 os_prio=0 cpu=310.74ms elapsed=832.63s tid=0x00007fcc8c023000 nid=0x16ed runnable [0x00007fcc9d9de000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2828> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c27d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#22]" #65 daemon prio=5 os_prio=0 cpu=44195.30ms elapsed=832.63s tid=0x00007fcc74021800 nid=0x16ee runnable [0x00007fcc9d8dd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c3c861b0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35d3910> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#23]" #67 daemon prio=5 os_prio=0 cpu=93.91ms elapsed=832.62s tid=0x00007fcc8c024800 nid=0x16ef runnable [0x00007fcc9d7dc000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7460> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c7408> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#24]" #68 daemon prio=5 os_prio=0 cpu=13152.07ms elapsed=832.62s tid=0x00007fcc74023800 nid=0x16f0 runnable [0x00007fcc9d6db000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3f78> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3f20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][ml_utility][T#1]" #69 daemon prio=5 os_prio=0 cpu=51.52ms elapsed=831.74s tid=0x00007fcccc002800 nid=0x16f1 waiting on condition [0x00007fcc9fefd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0000e78> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][AsyncLucenePersistedState#updateTask][T#1]" #70 daemon prio=5 os_prio=0 cpu=926.06ms elapsed=831.73s tid=0x00007fcc8c031800 nid=0x16f2 runnable [0x00007fcc9fcfa000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:410)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:406)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:254)
at org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:44)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:118)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:130)
at org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.fieldsWriter(Lucene87StoredFieldsFormat.java:141)
at org.apache.lucene.index.StoredFieldsConsumer.initStoredFieldsWriter(StoredFieldsConsumer.java:48)
at org.apache.lucene.index.StoredFieldsConsumer.startDocument(StoredFieldsConsumer.java:55)
at org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:449)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:485)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208)
at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:419)
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1757)
at org.elasticsearch.gateway.PersistedClusterStateService$MetadataIndexWriter.updateIndexMetadataDocument(PersistedClusterStateService.java:483)
at org.elasticsearch.gateway.PersistedClusterStateService$Writer.updateMetadata(PersistedClusterStateService.java:668)
at org.elasticsearch.gateway.PersistedClusterStateService$Writer.writeIncrementalStateAndCommit(PersistedClusterStateService.java:602)
at org.elasticsearch.gateway.GatewayMetaState$LucenePersistedState.setLastAcceptedState(GatewayMetaState.java:543)
at org.elasticsearch.gateway.GatewayMetaState$AsyncLucenePersistedState$1.doRun(GatewayMetaState.java:428)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#1]" #71 daemon prio=5 os_prio=0 cpu=1965.54ms elapsed=831.34s tid=0x00007fcc60121800 nid=0x16f3 waiting on condition [0x00007fcc9fdfc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][trigger_engine_scheduler][T#1]" #72 daemon prio=5 os_prio=0 cpu=0.25ms elapsed=830.46s tid=0x00007fcc6806e000 nid=0x16f9 waiting on condition [0x00007fcc9ce0a000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c9320> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[keepAlive/7.10.1]" #21 prio=5 os_prio=0 cpu=0.32ms elapsed=830.45s tid=0x00007fcca902c000 nid=0x16fa waiting on condition [0x00007fcc9cd09000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c001c548> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11.0.12/AbstractQueuedSynchronizer.java:885)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@11.0.12/AbstractQueuedSynchronizer.java:1345)
at java.util.concurrent.CountDownLatch.await(java.base@11.0.12/CountDownLatch.java:232)
at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:89)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"DestroyJavaVM" #73 prio=5 os_prio=0 cpu=13626.15ms elapsed=830.45s tid=0x00007fcdd4019800 nid=0x1620 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][DanglingIndices#updateTask][T#1]" #74 daemon prio=5 os_prio=0 cpu=118.86ms elapsed=830.04s tid=0x00007fcc6811b000 nid=0x16fb runnable [0x00007fcc9fffd000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:410)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:406)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:254)
at org.apache.lucene.store.Directory.copyFrom(Directory.java:183)
at org.elasticsearch.gateway.MetadataStateFormat.copyStateToExtraLocations(MetadataStateFormat.java:140)
at org.elasticsearch.gateway.MetadataStateFormat.write(MetadataStateFormat.java:244)
at org.elasticsearch.gateway.MetadataStateFormat.writeAndCleanup(MetadataStateFormat.java:185)
at org.elasticsearch.index.IndexService.writeDanglingIndicesInfo(IndexService.java:353)
- locked <0x00000001ca336060> (a org.elasticsearch.index.IndexService)
at org.elasticsearch.indices.IndicesService$6.doRun(IndicesService.java:1581)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#7]" #75 daemon prio=5 os_prio=0 cpu=31292.01ms elapsed=829.97s tid=0x00007fcc24101800 nid=0x16fd waiting on condition [0x00007fcc9cf0b000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#8]" #76 daemon prio=5 os_prio=0 cpu=31885.47ms elapsed=829.95s tid=0x00007fcc20106800 nid=0x16fe waiting on condition [0x00007fcc9d1d6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#9]" #77 daemon prio=5 os_prio=0 cpu=29858.64ms elapsed=829.95s tid=0x00007fcca0001800 nid=0x16ff waiting on condition [0x00007fcc9d0d5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#10]" #78 daemon prio=5 os_prio=0 cpu=31635.50ms elapsed=829.95s tid=0x00007fcc24103000 nid=0x1700 waiting on condition [0x00007fcc9c93e000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#1]" #79 daemon prio=5 os_prio=0 cpu=10906.69ms elapsed=829.77s tid=0x00007fcbf80d7800 nid=0x1701 waiting on condition [0x00007fcc9c63d000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#11]" #81 daemon prio=5 os_prio=0 cpu=13505.60ms elapsed=827.98s tid=0x00007fcc48101800 nid=0x1704 runnable [0x00007fcd6e90a000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.open0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.open(java.base@11.0.12/UnixNativeDispatcher.java:71)
at sun.nio.fs.UnixChannelFactory.open(java.base@11.0.12/UnixChannelFactory.java:267)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:143)
at sun.nio.fs.UnixChannelFactory.newFileChannel(java.base@11.0.12/UnixChannelFactory.java:156)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(java.base@11.0.12/UnixFileSystemProvider.java:217)
at java.nio.file.spi.FileSystemProvider.newOutputStream(java.base@11.0.12/FileSystemProvider.java:478)
at java.nio.file.Files.newOutputStream(java.base@11.0.12/Files.java:220)
at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.monitorFSHealth(FsHealthService.java:171)
at org.elasticsearch.monitor.fs.FsHealthService$FsHealthMonitor.run(FsHealthService.java:146)
at org.elasticsearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:213)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#12]" #82 daemon prio=5 os_prio=0 cpu=30740.52ms elapsed=827.98s tid=0x00007fcc10101800 nid=0x1705 waiting on condition [0x00007fcd6eb0c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][refresh][T#1]" #83 daemon prio=5 os_prio=0 cpu=557.80ms elapsed=825.04s tid=0x00007fcccc004800 nid=0x170f waiting on condition [0x00007fcce01f7000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#1]" #84 daemon prio=5 os_prio=0 cpu=42.20ms elapsed=821.45s tid=0x00007fccd415d800 nid=0x1714 waiting on condition [0x00007fcce2dfb000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#13]" #85 daemon prio=5 os_prio=0 cpu=11555.64ms elapsed=818.77s tid=0x00007fcc08105000 nid=0x171a runnable [0x00007fcbc4bbf000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#2]" #86 daemon prio=5 os_prio=0 cpu=14.51ms elapsed=811.47s tid=0x00007fcc7c027800 nid=0x1729 waiting on condition [0x00007fcd6ea0b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Attach Listener" #87 daemon prio=9 os_prio=0 cpu=3.53ms elapsed=810.70s tid=0x00007fcd54001000 nid=0x1743 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"elasticsearch[es-node05-a][write][T#3]" #88 daemon prio=5 os_prio=0 cpu=24.86ms elapsed=801.47s tid=0x00007fcc7403f000 nid=0x1753 waiting on condition [0x00007fcbc4cc0000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#2]" #89 daemon prio=5 os_prio=0 cpu=1938.04ms elapsed=800.02s tid=0x00007fcc94063000 nid=0x1754 waiting on condition [0x00007fcc9c43b000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#4]" #90 daemon prio=5 os_prio=0 cpu=24.96ms elapsed=791.47s tid=0x00007fcc7c01e800 nid=0x1774 waiting on condition [0x00007fcbc4abe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#5]" #91 daemon prio=5 os_prio=0 cpu=14.55ms elapsed=781.46s tid=0x00007fcbe805a000 nid=0x1787 waiting on condition [0x00007fcbc49bd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#6]" #92 daemon prio=5 os_prio=0 cpu=8.29ms elapsed=771.48s tid=0x00007fcc70072800 nid=0x1793 waiting on condition [0x00007fcc9f5f8000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#7]" #93 daemon prio=5 os_prio=0 cpu=15.41ms elapsed=761.47s tid=0x00007fccd4022800 nid=0x17d1 waiting on condition [0x00007fcc9f6f9000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#8]" #94 daemon prio=5 os_prio=0 cpu=18.53ms elapsed=751.47s tid=0x00007fcc5c05e800 nid=0x17fa waiting on condition [0x00007fcb85eff000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#9]" #95 daemon prio=5 os_prio=0 cpu=6.71ms elapsed=741.46s tid=0x00007fcc5c05f000 nid=0x1816 waiting on condition [0x00007fcc9f4f7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#10]" #96 daemon prio=5 os_prio=0 cpu=20.78ms elapsed=731.47s tid=0x00007fcc7c074000 nid=0x181d waiting on condition [0x00007fcbc46bc000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#11]" #100 daemon prio=5 os_prio=0 cpu=14.90ms elapsed=721.47s tid=0x00007fcbe8060800 nid=0x1828 waiting on condition [0x00007fcb85dfe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#12]" #101 daemon prio=5 os_prio=0 cpu=13.44ms elapsed=711.46s tid=0x00007fcc7404e000 nid=0x1835 waiting on condition [0x00007fcaf0434000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#13]" #102 daemon prio=5 os_prio=0 cpu=16.84ms elapsed=701.47s tid=0x00007fcc04027000 nid=0x184e waiting on condition [0x00007fcb50d76000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#14]" #103 daemon prio=5 os_prio=0 cpu=16.89ms elapsed=691.45s tid=0x00007fcbf81f4000 nid=0x1861 waiting on condition [0x00007fcaf2f87000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#3]" #104 daemon prio=5 os_prio=0 cpu=1566.37ms elapsed=684.54s tid=0x00007fcc94066800 nid=0x1869 waiting on condition [0x00007fca33a0c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#15]" #105 daemon prio=5 os_prio=0 cpu=9.76ms elapsed=681.47s tid=0x00007fcc70076800 nid=0x186c waiting on condition [0x00007fca3390b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#16]" #106 daemon prio=5 os_prio=0 cpu=15.81ms elapsed=671.47s tid=0x00007fcc8c003800 nid=0x1878 waiting on condition [0x00007fca3380a000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#17]" #107 daemon prio=5 os_prio=0 cpu=12.87ms elapsed=661.47s tid=0x00007fcbf80d4000 nid=0x1886 waiting on condition [0x00007fca33709000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#18]" #108 daemon prio=5 os_prio=0 cpu=32.69ms elapsed=651.47s tid=0x00007fcc04028000 nid=0x188d waiting on condition [0x00007fca33608000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#19]" #109 daemon prio=5 os_prio=0 cpu=19.64ms elapsed=641.47s tid=0x00007fcbfc224800 nid=0x1895 waiting on condition [0x00007fca33507000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#20]" #110 daemon prio=5 os_prio=0 cpu=8.91ms elapsed=631.45s tid=0x00007fcbfc21f800 nid=0x1898 waiting on condition [0x00007fca33406000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#21]" #111 daemon prio=5 os_prio=0 cpu=15.89ms elapsed=621.46s tid=0x00007fcc04025000 nid=0x18a1 waiting on condition [0x00007fca33305000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#22]" #112 daemon prio=5 os_prio=0 cpu=7.51ms elapsed=611.47s tid=0x00007fcbe8062000 nid=0x18bc waiting on condition [0x00007fca33002000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#23]" #113 daemon prio=5 os_prio=0 cpu=10.06ms elapsed=601.47s tid=0x00007fcbfc221000 nid=0x18ce waiting on condition [0x00007fca33103000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][write][T#24]" #114 daemon prio=5 os_prio=0 cpu=6.42ms elapsed=591.47s tid=0x00007fcc7007e800 nid=0x18d1 waiting on condition [0x00007fca33204000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0036a20> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][flush][T#2]" #116 daemon prio=5 os_prio=0 cpu=2.15ms elapsed=525.28s tid=0x00007fcccc00f800 nid=0x190c waiting on condition [0x00007fca32e00000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c003e7a8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][refresh][T#2]" #118 daemon prio=5 os_prio=0 cpu=366.73ms elapsed=440.03s tid=0x00007fcccc012000 nid=0x1979 waiting on condition [0x00007fc6aa8fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00340f8> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#6]" #119 daemon prio=5 os_prio=0 cpu=7942.40ms elapsed=416.87s tid=0x00007fcc700ae000 nid=0x19ae waiting on condition [0x00007fcaf0535000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#7]" #120 daemon prio=5 os_prio=0 cpu=12324.62ms elapsed=416.87s tid=0x00007fcc70095800 nid=0x19af waiting on condition [0x00007fcaf0636000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#8]" #121 daemon prio=5 os_prio=0 cpu=6272.17ms elapsed=416.87s tid=0x00007fcc700a8800 nid=0x19b0 waiting on condition [0x00007fc6aaafe000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][warmer][T#9]" #122 daemon prio=5 os_prio=0 cpu=9187.67ms elapsed=416.87s tid=0x00007fcc700a9000 nid=0x19b1 waiting on condition [0x00007fc6aa9fd000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0030e18> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#1]" #123 daemon prio=5 os_prio=0 cpu=4441.17ms elapsed=415.03s tid=0x00007fcc4011d000 nid=0x19b3 waiting on condition [0x00007fcc9c53c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#2]" #124 daemon prio=5 os_prio=0 cpu=5104.18ms elapsed=415.03s tid=0x00007fcc60144800 nid=0x19b4 waiting on condition [0x00007fc6aa029000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#3]" #125 daemon prio=5 os_prio=0 cpu=278.16ms elapsed=415.02s tid=0x00007fcc60146000 nid=0x19b5 waiting on condition [0x00007fca301c7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#4]" #126 daemon prio=5 os_prio=0 cpu=844.55ms elapsed=414.98s tid=0x00007fcc64123000 nid=0x19b6 waiting on condition [0x00007fc6a9f28000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#5]" #127 daemon prio=5 os_prio=0 cpu=967.64ms elapsed=414.82s tid=0x00007fcc50101800 nid=0x19b8 waiting on condition [0x00007fc6a9d26000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#6]" #128 daemon prio=5 os_prio=0 cpu=906.23ms elapsed=414.82s tid=0x00007fcc10105800 nid=0x19b9 waiting on condition [0x00007fc6a9c25000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#7]" #129 daemon prio=5 os_prio=0 cpu=497.45ms elapsed=414.81s tid=0x00007fcc38106000 nid=0x19ba waiting on condition [0x00007fc6a9b24000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#8]" #130 daemon prio=5 os_prio=0 cpu=479.39ms elapsed=414.76s tid=0x00007fcc1c104800 nid=0x19bb waiting on condition [0x00007fc6a9a23000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#9]" #131 daemon prio=5 os_prio=0 cpu=298.71ms elapsed=414.75s tid=0x00007fcc4011e000 nid=0x19bc waiting on condition [0x00007fc6a9722000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#10]" #132 daemon prio=5 os_prio=0 cpu=4545.83ms elapsed=407.98s tid=0x00007fcc6412c000 nid=0x19bf waiting on condition [0x00007fcaf0737000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#11]" #133 daemon prio=5 os_prio=0 cpu=1459.09ms elapsed=406.76s tid=0x00007fcc60147800 nid=0x19c6 waiting on condition [0x00007fc6a9e27000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#12]" #134 daemon prio=5 os_prio=0 cpu=872.27ms elapsed=402.92s tid=0x00007fcc4c101000 nid=0x19cc waiting on condition [0x00007fc6a9621000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#13]" #135 daemon prio=5 os_prio=0 cpu=742.37ms elapsed=402.20s tid=0x00007fcc1010b000 nid=0x19cd waiting on condition [0x00007fc6a9520000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#14]" #136 daemon prio=5 os_prio=0 cpu=914.92ms elapsed=389.37s tid=0x00007fcc50105000 nid=0x19da waiting on condition [0x00007fc6a941f000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#15]" #137 daemon prio=5 os_prio=0 cpu=273.22ms elapsed=388.81s tid=0x00007fcc4011f800 nid=0x19db waiting on condition [0x00007fc6a931e000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#16]" #138 daemon prio=5 os_prio=0 cpu=871.11ms elapsed=382.92s tid=0x00007fcc1c106000 nid=0x19dc waiting on condition [0x00007fc6a921d000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#17]" #139 daemon prio=5 os_prio=0 cpu=3608.22ms elapsed=382.86s tid=0x00007fcc50107000 nid=0x19dd waiting on condition [0x00007fc6a911c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#18]" #140 daemon prio=5 os_prio=0 cpu=424.48ms elapsed=382.83s tid=0x00007fcc1010d000 nid=0x19de waiting on condition [0x00007fc6a901b000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#19]" #141 daemon prio=5 os_prio=0 cpu=4241.83ms elapsed=382.83s tid=0x00007fcc28105000 nid=0x19df waiting on condition [0x00007fc6a8f1a000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#20]" #142 daemon prio=5 os_prio=0 cpu=782.83ms elapsed=382.82s tid=0x00007fcc38108000 nid=0x19e0 waiting on condition [0x00007fc6a8e19000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#21]" #143 daemon prio=5 os_prio=0 cpu=731.16ms elapsed=382.76s tid=0x00007fcc1010f000 nid=0x19e1 waiting on condition [0x00007fc6a8d18000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#22]" #144 daemon prio=5 os_prio=0 cpu=4011.95ms elapsed=382.56s tid=0x00007fcca0006800 nid=0x19e2 waiting on condition [0x00007fc6a8c17000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#23]" #145 daemon prio=5 os_prio=0 cpu=259.10ms elapsed=382.43s tid=0x00007fcc44101800 nid=0x19e5 waiting on condition [0x00007fc6a8914000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#24]" #146 daemon prio=5 os_prio=0 cpu=347.08ms elapsed=382.31s tid=0x00007fcc641ad800 nid=0x19e6 waiting on condition [0x00007fc6a8813000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#25]" #147 daemon prio=5 os_prio=0 cpu=453.18ms elapsed=382.27s tid=0x00007fcc10111000 nid=0x19e7 waiting on condition [0x00007fc6a8712000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#26]" #148 daemon prio=5 os_prio=0 cpu=378.27ms elapsed=382.22s tid=0x00007fcc1c107800 nid=0x19e8 waiting on condition [0x00007fc6a8611000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#27]" #149 daemon prio=5 os_prio=0 cpu=316.44ms elapsed=381.67s tid=0x00007fcc4c103000 nid=0x19e9 waiting on condition [0x00007fc6a8510000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#28]" #150 daemon prio=5 os_prio=0 cpu=539.15ms elapsed=381.41s tid=0x00007fcc28107000 nid=0x19f0 waiting on condition [0x00007fc6a8b16000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#29]" #151 daemon prio=5 os_prio=0 cpu=551.64ms elapsed=370.56s tid=0x00007fcc28108800 nid=0x19f4 waiting on condition [0x00007fc6a8a15000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#30]" #152 daemon prio=5 os_prio=0 cpu=357.50ms elapsed=370.55s tid=0x00007fcc4c113800 nid=0x19f5 waiting on condition [0x00007fc6a840f000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#31]" #153 daemon prio=5 os_prio=0 cpu=248.40ms elapsed=370.55s tid=0x00007fcc1c110000 nid=0x19f6 waiting on condition [0x00007fc6a830e000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#32]" #154 daemon prio=5 os_prio=0 cpu=621.72ms elapsed=370.54s tid=0x00007fcc10113000 nid=0x19f7 waiting on condition [0x00007fc6a820d000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#33]" #155 daemon prio=5 os_prio=0 cpu=3895.54ms elapsed=370.53s tid=0x00007fcca010e800 nid=0x19f8 waiting on condition [0x00007fc6a810c000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#34]" #156 daemon prio=5 os_prio=0 cpu=2528.57ms elapsed=370.52s tid=0x00007fcc40121800 nid=0x19f9 waiting on condition [0x00007fc697cba000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#35]" #157 daemon prio=5 os_prio=0 cpu=1767.37ms elapsed=370.52s tid=0x00007fcca0110000 nid=0x19fa waiting on condition [0x00007fc697bb9000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#36]" #158 daemon prio=5 os_prio=0 cpu=4859.36ms elapsed=370.50s tid=0x00007fcc4c115800 nid=0x19fb waiting on condition [0x00007fc697ab8000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][search][T#37]" #159 daemon prio=5 os_prio=0 cpu=817.69ms elapsed=370.50s tid=0x00007fcc64177800 nid=0x19fc waiting on condition [0x00007fc6979b7000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0043610> (a java.util.concurrent.LinkedTransferQueue)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.12/LockSupport.java:194)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:743)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.take(java.base@11.0.12/LinkedTransferQueue.java:1366)
at org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:165)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][management][T#4]" #162 daemon prio=5 os_prio=0 cpu=218.53ms elapsed=142.10s tid=0x00007fcc14103000 nid=0x1b74 waiting on condition [0x00007fc6977b5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0003928> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][[groot_news_bucket_0_v3][0]: Lucene Merge Thread #3]" #164 daemon prio=5 os_prio=0 cpu=14625.95ms elapsed=15.17s tid=0x00007fc6a0001800 nid=0x1bee runnable [0x00007fca32f00000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000006c1b67de8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at org.apache.lucene.index.MergePolicy$OneMergeProgress.pauseNanos(MergePolicy.java:164)
at org.apache.lucene.index.MergeRateLimiter.maybePause(MergeRateLimiter.java:148)
at org.apache.lucene.index.MergeRateLimiter.pause(MergeRateLimiter.java:93)
at org.apache.lucene.store.RateLimitedIndexOutput.checkRate(RateLimitedIndexOutput.java:78)
at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:72)
at org.apache.lucene.store.ByteBuffersDataOutput.copyTo(ByteBuffersDataOutput.java:287)
at org.apache.lucene.codecs.lucene80.Lucene80DocValuesConsumer.writeBlock(Lucene80DocValuesConsumer.java:359)
at org.apache.lucene.codecs.lucene80.Lucene80DocValuesConsumer.writeValuesMultipleBlocks(Lucene80DocValuesConsumer.java:314)
at org.apache.lucene.codecs.lucene80.Lucene80DocValuesConsumer.writeValues(Lucene80DocValuesConsumer.java:276)
at org.apache.lucene.codecs.lucene80.Lucene80DocValuesConsumer.addSortedNumericField(Lucene80DocValuesConsumer.java:705)
at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedNumericField(DocValuesConsumer.java:375)
at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:147)
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.merge(PerFieldDocValuesFormat.java:155)
at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:195)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:150)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4760)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4364)
at org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:5923)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:100)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:682)
"VM Thread" os_prio=0 cpu=735.53ms elapsed=849.41s tid=0x00007fcdd5f1f800 nid=0x163b runnable
"GC Thread#0" os_prio=0 cpu=3768.77ms elapsed=852.56s tid=0x00007fcdd4033000 nid=0x1621 runnable
"GC Thread#1" os_prio=0 cpu=662.42ms elapsed=845.39s tid=0x00007fcd64001000 nid=0x168d runnable
"GC Thread#2" os_prio=0 cpu=664.01ms elapsed=845.39s tid=0x00007fcd64002000 nid=0x168e runnable
"GC Thread#3" os_prio=0 cpu=642.23ms elapsed=845.39s tid=0x00007fcd64003000 nid=0x168f runnable
"GC Thread#4" os_prio=0 cpu=659.26ms elapsed=845.39s tid=0x00007fcd64004000 nid=0x1690 runnable
"GC Thread#5" os_prio=0 cpu=676.73ms elapsed=845.39s tid=0x00007fcd64005000 nid=0x1691 runnable
"GC Thread#6" os_prio=0 cpu=646.11ms elapsed=845.39s tid=0x00007fcd64006800 nid=0x1692 runnable
"GC Thread#7" os_prio=0 cpu=641.45ms elapsed=845.39s tid=0x00007fcd64008000 nid=0x1693 runnable
"GC Thread#8" os_prio=0 cpu=662.46ms elapsed=845.39s tid=0x00007fcd64009800 nid=0x1694 runnable
"GC Thread#9" os_prio=0 cpu=645.05ms elapsed=845.39s tid=0x00007fcd6400b000 nid=0x1695 runnable
"GC Thread#10" os_prio=0 cpu=670.32ms elapsed=845.39s tid=0x00007fcd6400c800 nid=0x1696 runnable
"GC Thread#11" os_prio=0 cpu=639.81ms elapsed=845.39s tid=0x00007fcd6400e000 nid=0x1697 runnable
"GC Thread#12" os_prio=0 cpu=669.76ms elapsed=845.39s tid=0x00007fcd6400f800 nid=0x1698 runnable
"GC Thread#13" os_prio=0 cpu=647.09ms elapsed=845.39s tid=0x00007fcd64011000 nid=0x1699 runnable
"GC Thread#14" os_prio=0 cpu=660.25ms elapsed=845.39s tid=0x00007fcd64012800 nid=0x169a runnable
"GC Thread#15" os_prio=0 cpu=814.61ms elapsed=845.39s tid=0x00007fcd64014000 nid=0x169b runnable
"GC Thread#16" os_prio=0 cpu=637.37ms elapsed=845.39s tid=0x00007fcd64015800 nid=0x169c runnable
"GC Thread#17" os_prio=0 cpu=651.56ms elapsed=845.39s tid=0x00007fcd64017000 nid=0x169d runnable
"G1 Main Marker" os_prio=0 cpu=26.56ms elapsed=852.56s tid=0x00007fcdd4069000 nid=0x1622 runnable
"G1 Conc#0" os_prio=0 cpu=197.29ms elapsed=852.56s tid=0x00007fcdd406b000 nid=0x1623 runnable
"G1 Conc#1" os_prio=0 cpu=200.65ms elapsed=844.31s tid=0x00007fcd78001000 nid=0x169e runnable
"G1 Conc#2" os_prio=0 cpu=212.28ms elapsed=844.31s tid=0x00007fcd78002000 nid=0x169f runnable
"G1 Conc#3" os_prio=0 cpu=195.82ms elapsed=844.30s tid=0x00007fcd78003800 nid=0x16a0 runnable
"G1 Conc#4" os_prio=0 cpu=192.30ms elapsed=844.30s tid=0x00007fcd78005000 nid=0x16a1 runnable
"G1 Refine#0" os_prio=0 cpu=13.55ms elapsed=849.42s tid=0x00007fcdd5ee4800 nid=0x1639 runnable
"G1 Refine#1" os_prio=0 cpu=3.07ms elapsed=831.20s tid=0x00007fcd74001000 nid=0x16f4 runnable
"G1 Refine#2" os_prio=0 cpu=1.48ms elapsed=831.20s tid=0x00007fcc0c001000 nid=0x16f5 runnable
"G1 Young RemSet Sampling" os_prio=0 cpu=1535.62ms elapsed=849.42s tid=0x00007fcdd5ee6800 nid=0x163a runnable
"VM Periodic Task Thread" os_prio=0 cpu=387.25ms elapsed=849.40s tid=0x00007fcdd5f63000 nid=0x1644 waiting on condition
JNI global refs: 43, weak refs: 51
2021-10-14 12:41:34
Full thread dump OpenJDK 64-Bit Server VM (11.0.12+7-post-Debian-2 mixed mode, sharing):
Threads class SMR info:
_java_thread_list=0x00007fc9fc005240, length=134, elements={
0x00007fcdd5f22800, 0x00007fcdd5f24800, 0x00007fcdd5f2a000, 0x00007fcdd5f2c000,
0x00007fcdd5f2e000, 0x00007fcdd5f30000, 0x00007fcdd5f32000, 0x00007fcdd5f65800,
0x00007fcdd6603800, 0x00007fcdd7613000, 0x00007fcdd761b800, 0x00007fcca8415800,
0x00007fcdd670e000, 0x00007fcdd7a88000, 0x00007fcdd7a81800, 0x00007fcdd778f000,
0x00007fcca8dd1000, 0x00007fcca8dcd800, 0x00007fcca900e000, 0x00007fcc8c009800,
0x00007fcc8c00b800, 0x00007fcc8c00d800, 0x00007fcca9015800, 0x00007fcc7c004000,
0x00007fcc70006000, 0x00007fcc74002800, 0x00007fcc70012000, 0x00007fcc74013800,
0x00007fcc8c019000, 0x00007fcc74018800, 0x00007fcc70017000, 0x00007fcc8c01a800,
0x00007fcc7401a000, 0x00007fcc70018800, 0x00007fcc8c01c000, 0x00007fcc7401c000,
0x00007fcc8c01e000, 0x00007fcc7001a000, 0x00007fcc7401e000, 0x00007fcc7001c000,
0x00007fcc8c020000, 0x00007fcc74020000, 0x00007fcc7001f000, 0x00007fcc8c023000,
0x00007fcc74021800, 0x00007fcc8c024800, 0x00007fcc74023800, 0x00007fcccc002800,
0x00007fcc8c031800, 0x00007fcc60121800, 0x00007fcc6806e000, 0x00007fcca902c000,
0x00007fcdd4019800, 0x00007fcc6811b000, 0x00007fcc24101800, 0x00007fcc20106800,
0x00007fcca0001800, 0x00007fcc24103000, 0x00007fcbf80d7800, 0x00007fcc48101800,
0x00007fcc10101800, 0x00007fcccc004800, 0x00007fccd415d800, 0x00007fcc08105000,
0x00007fcc7c027800, 0x00007fcd54001000, 0x00007fcc7403f000, 0x00007fcc94063000,
0x00007fcc7c01e800, 0x00007fcbe805a000, 0x00007fcc70072800, 0x00007fccd4022800,
0x00007fcc5c05e800, 0x00007fcc5c05f000, 0x00007fcc7c074000, 0x00007fcbe8060800,
0x00007fcc7404e000, 0x00007fcc04027000, 0x00007fcbf81f4000, 0x00007fcc94066800,
0x00007fcc70076800, 0x00007fcc8c003800, 0x00007fcbf80d4000, 0x00007fcc04028000,
0x00007fcbfc224800, 0x00007fcbfc21f800, 0x00007fcc04025000, 0x00007fcbe8062000,
0x00007fcbfc221000, 0x00007fcc7007e800, 0x00007fcccc00f800, 0x00007fcccc012000,
0x00007fcc700ae000, 0x00007fcc70095800, 0x00007fcc700a8800, 0x00007fcc700a9000,
0x00007fcc4011d000, 0x00007fcc60144800, 0x00007fcc60146000, 0x00007fcc64123000,
0x00007fcc50101800, 0x00007fcc10105800, 0x00007fcc38106000, 0x00007fcc1c104800,
0x00007fcc4011e000, 0x00007fcc6412c000, 0x00007fcc60147800, 0x00007fcc4c101000,
0x00007fcc1010b000, 0x00007fcc50105000, 0x00007fcc4011f800, 0x00007fcc1c106000,
0x00007fcc50107000, 0x00007fcc1010d000, 0x00007fcc28105000, 0x00007fcc38108000,
0x00007fcc1010f000, 0x00007fcca0006800, 0x00007fcc44101800, 0x00007fcc641ad800,
0x00007fcc10111000, 0x00007fcc1c107800, 0x00007fcc4c103000, 0x00007fcc28107000,
0x00007fcc28108800, 0x00007fcc4c113800, 0x00007fcc1c110000, 0x00007fcc10113000,
0x00007fcca010e800, 0x00007fcc40121800, 0x00007fcca0110000, 0x00007fcc4c115800,
0x00007fcc64177800, 0x00007fcc14103000
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=12.38ms elapsed=868.53s tid=0x00007fcdd5f22800 nid=0x163c waiting on condition [0x00007fcd6f7fe000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@11.0.12/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@11.0.12/Reference.java:241)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@11.0.12/Reference.java:213)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=1.34ms elapsed=868.53s tid=0x00007fcdd5f24800 nid=0x163d in Object.wait() [0x00007fcd6f6fd000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0001f68> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.12/Finalizer.java:170)
"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.24ms elapsed=868.53s tid=0x00007fcdd5f2a000 nid=0x163e runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Service Thread" #5 daemon prio=9 os_prio=0 cpu=0.17ms elapsed=868.53s tid=0x00007fcdd5f2c000 nid=0x163f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=97660.45ms elapsed=868.53s tid=0x00007fcdd5f2e000 nid=0x1640 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"C1 CompilerThread0" #14 daemon prio=9 os_prio=0 cpu=5756.25ms elapsed=868.53s tid=0x00007fcdd5f30000 nid=0x1641 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"Sweeper thread" #18 daemon prio=9 os_prio=0 cpu=295.45ms elapsed=868.52s tid=0x00007fcdd5f32000 nid=0x1642 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Common-Cleaner" #19 daemon prio=8 os_prio=0 cpu=9.20ms elapsed=868.51s tid=0x00007fcdd5f65800 nid=0x1645 in Object.wait() [0x00007fcd6ec0d000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.12/Native Method)
- waiting on <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@11.0.12/ReferenceQueue.java:155)
- waiting to re-lock in wait() <0x00000001c0005360> (a java.lang.ref.ReferenceQueue$Lock)
at jdk.internal.ref.CleanerImpl.run(java.base@11.0.12/CleanerImpl.java:148)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
at jdk.internal.misc.InnocuousThread.run(java.base@11.0.12/InnocuousThread.java:134)
"process reaper" #24 daemon prio=10 os_prio=0 cpu=0.32ms elapsed=867.18s tid=0x00007fcdd6603800 nid=0x1658 runnable [0x00007fcd6ded2000]
java.lang.Thread.State: RUNNABLE
at java.lang.ProcessHandleImpl.waitForProcessExit0(java.base@11.0.12/Native Method)
at java.lang.ProcessHandleImpl$1.run(java.base@11.0.12/ProcessHandleImpl.java:138)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][[timer]]" #28 daemon prio=5 os_prio=0 cpu=102.61ms elapsed=862.18s tid=0x00007fcdd7613000 nid=0x16ac waiting on condition [0x00007fcd6e7d7000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.threadpool.ThreadPool$CachedTimeThread.run(ThreadPool.java:595)
"elasticsearch[es-node05-a][scheduler][T#1]" #29 daemon prio=5 os_prio=0 cpu=916.84ms elapsed=862.18s tid=0x00007fcdd761b800 nid=0x16ad waiting on condition [0x00007fcd6e6d6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c00049e0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ml-cpp-log-tail-thread" #30 daemon prio=5 os_prio=0 cpu=7.66ms elapsed=859.15s tid=0x00007fcca8415800 nid=0x16b8 runnable [0x00007fcd6e4d4000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(java.base@11.0.12/Native Method)
at java.io.FileInputStream.read(java.base@11.0.12/FileInputStream.java:257)
at org.elasticsearch.xpack.ml.process.logging.CppLogMessageHandler.tailStream(CppLogMessageHandler.java:105)
at org.elasticsearch.xpack.ml.process.NativeController.lambda$tailLogsInThread$0(NativeController.java:74)
at org.elasticsearch.xpack.ml.process.NativeController$$Lambda$2826/0x000000084095b040.run(Unknown Source)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"Connection evictor" #31 daemon prio=5 os_prio=0 cpu=7.77ms elapsed=858.36s tid=0x00007fcdd670e000 nid=0x16be waiting on condition [0x00007fcd6e3d3000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[scheduler][T#1]" #32 daemon prio=5 os_prio=0 cpu=47.80ms elapsed=858.25s tid=0x00007fcdd7a88000 nid=0x16bf waiting on condition [0x00007fcce32fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35cfc40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"ticker-schedule-trigger-engine" #33 daemon prio=5 os_prio=0 cpu=86.94ms elapsed=858.25s tid=0x00007fcdd7a81800 nid=0x16c0 waiting on condition [0x00007fcce2cfa000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.12/Native Method)
at org.elasticsearch.xpack.watcher.trigger.schedule.engine.TickerScheduleTriggerEngine$Ticker.run(TickerScheduleTriggerEngine.java:193)
"elasticsearch[scheduler][T#1]" #34 daemon prio=5 os_prio=0 cpu=10.72ms elapsed=858.23s tid=0x00007fcdd778f000 nid=0x16c1 waiting on condition [0x00007fcce17f9000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c35c61c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.12/AbstractQueuedSynchronizer.java:2123)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:1182)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.12/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#1]" #35 daemon prio=5 os_prio=0 cpu=298.64ms elapsed=856.80s tid=0x00007fcca8dd1000 nid=0x16c2 runnable [0x00007fcd6e5d5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c7238> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c71e0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][transport_worker][T#2]" #36 daemon prio=5 os_prio=0 cpu=489.76ms elapsed=856.78s tid=0x00007fcca8dcd800 nid=0x16c3 runnable [0x00007fcce04f8000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2678> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2588> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#1]" #37 daemon prio=5 os_prio=0 cpu=30453.61ms elapsed=851.85s tid=0x00007fcca900e000 nid=0x16d1 waiting on condition [0x00007fcc9f7fa000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#2]" #41 daemon prio=5 os_prio=0 cpu=31646.85ms elapsed=851.85s tid=0x00007fcc8c009800 nid=0x16d5 waiting on condition [0x00007fcc9f3f6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#3]" #42 daemon prio=5 os_prio=0 cpu=43090.56ms elapsed=851.85s tid=0x00007fcc8c00b800 nid=0x16d6 waiting on condition [0x00007fcc9f2f5000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][generic][T#4]" #43 daemon prio=5 os_prio=0 cpu=24087.08ms elapsed=851.85s tid=0x00007fcc8c00d800 nid=0x16d7 runnable [0x00007fcc9f1f4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.force0(java.base@11.0.12/Native Method)
at sun.nio.ch.FileDispatcherImpl.force(java.base@11.0.12/FileDispatcherImpl.java:82)
at sun.nio.ch.FileChannelImpl.force(java.base@11.0.12/FileChannelImpl.java:461)
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:471)
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:331)
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:286)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84)
at org.elasticsearch.indices.recovery.MultiFileWriter.innerWriteFileChunk(MultiFileWriter.java:139)
at org.elasticsearch.indices.recovery.MultiFileWriter.access$000(MultiFileWriter.java:46)
at org.elasticsearch.indices.recovery.MultiFileWriter$FileChunkWriter.writeChunk(MultiFileWriter.java:213)
at org.elasticsearch.indices.recovery.MultiFileWriter.writeFileChunk(MultiFileWriter.java:74)
at org.elasticsearch.indices.recovery.RecoveryTarget.writeFileChunk(RecoveryTarget.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:478)
at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$FileChunkTransportRequestHandler.messageReceived(PeerRecoveryTargetService.java:448)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)
"elasticsearch[es-node05-a][clusterApplierService#updateTask][T#1]" #44 daemon prio=5 os_prio=0 cpu=2034.28ms elapsed=851.85s tid=0x00007fcca9015800 nid=0x16d8 runnable [0x00007fcc9f0f2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.mkdir0(java.base@11.0.12/Native Method)
at sun.nio.fs.UnixNativeDispatcher.mkdir(java.base@11.0.12/UnixNativeDispatcher.java:229)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(java.base@11.0.12/UnixFileSystemProvider.java:385)
at java.nio.file.Files.createDirectory(java.base@11.0.12/Files.java:690)
at java.nio.file.Files.createAndCheckIsDirectory(java.base@11.0.12/Files.java:797)
at java.nio.file.Files.createDirectories(java.base@11.0.12/Files.java:783)
at org.elasticsearch.index.store.FsDirectoryFactory.newDirectory(FsDirectoryFactory.java:66)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:460)
- locked <0x00000007634c3078> (a org.elasticsearch.index.IndexService)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:766)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:177)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:593)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:570)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:248)
- locked <0x00000001c3b41748> (a org.elasticsearch.indices.cluster.IndicesClusterStateService)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:510)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:500)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418)
at org.elasticsearch.cluster.service.ClusterApplierService.access$000(ClusterApplierService.java:68)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#4]" #47 daemon prio=5 os_prio=0 cpu=32000.67ms elapsed=851.84s tid=0x00007fcc7c004000 nid=0x16d9 runnable [0x00007fcc9eff2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3e10> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3d20> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#5]" #46 daemon prio=5 os_prio=0 cpu=191.38ms elapsed=851.84s tid=0x00007fcc70006000 nid=0x16da runnable [0x00007fcc9eef1000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c1dc0> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c1d68> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#3]" #45 daemon prio=5 os_prio=0 cpu=203.55ms elapsed=851.84s tid=0x00007fcc74002800 nid=0x16db runnable [0x00007fcc9edf0000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c3568> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c3510> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#5]" #48 daemon prio=5 os_prio=0 cpu=32114.42ms elapsed=851.76s tid=0x00007fcc70012000 nid=0x16dc waiting on condition [0x00007fcc9eaef000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][generic][T#6]" #49 daemon prio=5 os_prio=0 cpu=31302.73ms elapsed=851.76s tid=0x00007fcc74013800 nid=0x16dd waiting on condition [0x00007fcc9e9ee000]
java.lang.Thread.State: TIMED_WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.12/Native Method)
- parking to wait for <0x00000001c0004240> (a org.elasticsearch.common.util.concurrent.EsExecutors$ExecutorScalingQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.12/LockSupport.java:234)
at java.util.concurrent.LinkedTransferQueue.awaitMatch(java.base@11.0.12/LinkedTransferQueue.java:740)
at java.util.concurrent.LinkedTransferQueue.xfer(java.base@11.0.12/LinkedTransferQueue.java:684)
at java.util.concurrent.LinkedTransferQueue.poll(java.base@11.0.12/LinkedTransferQueue.java:1374)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.12/ThreadPoolExecutor.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.12/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.12/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#7]" #50 daemon prio=5 os_prio=0 cpu=152.75ms elapsed=851.75s tid=0x00007fcc8c019000 nid=0x16de runnable [0x00007fcc9e8ed000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.12/SelectorImpl.java:124)
- locked <0x00000001c35c2dc8> (a sun.nio.ch.Util$2)
- locked <0x00000001c35c2d70> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@11.0.12/SelectorImpl.java:141)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(java.base@11.0.12/Thread.java:829)

"elasticsearch[es-node05-a][transport_worker][T#6]" #52 daemon prio=5 os_prio=0 cpu=91.95ms elapsed=851.75s tid=0x00007fcc74018800 nid=0x16df runnable [0x00007fcc9e7ec000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base@11.0.12/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.12/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.