@tomaso
Created August 10, 2020 15:57
2020.08.10 15:57:07 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2020.08.10 15:57:07 DEBUG app[][o.s.a.NodeLifecycle] main tryToMoveTo from INIT to STARTING => true
2020.08.10 15:57:07 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] main tryToMoveTo es from INIT to STARTING => true
2020.08.10 15:57:07 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2020.08.10 15:57:07 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2020.08.10 15:57:07 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] main tryToMoveTo es from STARTING to STARTED => true
2020.08.10 15:57:07 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2020.08.10 15:57:07 INFO app[][o.e.p.PluginsService] no modules loaded
2020.08.10 15:57:07 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [16], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [listener], size [4], queue size [unbounded]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [index], size [8], queue size [200]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [4], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [4], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [_client_/search] will adjust queue by [50] when determining automatic queue size
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [search], size [13], queue size [1k]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [4], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [16], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [get], size [8], queue size [1k]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [write], size [8], queue size [200]
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [4], keep alive [5m]
2020.08.10 15:57:07 DEBUG app[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [_client_/search_throttled] will adjust queue by [50] when determining automatic queue size
2020.08.10 15:57:07 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] Java version: 11
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31)
at io.netty.util.internal.PlatformDependent0$4.run(PlatformDependent0.java:224)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:218)
at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:212)
at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:80)
at io.netty.util.ConstantPool.<init>(ConstantPool.java:32)
at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27)
at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27)
at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:219)
at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:57)
at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:89)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:89)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:147)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:277)
at org.sonar.application.es.EsConnectorImpl$MinimalTransportClient.<init>(EsConnectorImpl.java:103)
at org.sonar.application.es.EsConnectorImpl.buildTransportClient(EsConnectorImpl.java:89)
at org.sonar.application.es.EsConnectorImpl.getTransportClient(EsConnectorImpl.java:74)
at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:61)
at org.sonar.application.process.EsManagedProcess.checkStatus(EsManagedProcess.java:88)
at org.sonar.application.process.EsManagedProcess.checkOperational(EsManagedProcess.java:73)
at org.sonar.application.process.EsManagedProcess.isOperational(EsManagedProcess.java:58)
at org.sonar.application.process.ManagedProcessHandler.refreshState(ManagedProcessHandler.java:220)
at org.sonar.application.process.ManagedProcessHandler$EventWatcher.run(ManagedProcessHandler.java:285)
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable
java.lang.IllegalAccessException: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @406be90
at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Unknown Source)
at java.base/java.lang.reflect.AccessibleObject.checkAccess(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at io.netty.util.internal.PlatformDependent0$6.run(PlatformDependent0.java:334)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:325)
at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:212)
at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:80)
at io.netty.util.ConstantPool.<init>(ConstantPool.java:32)
at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27)
at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27)
at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:219)
at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:57)
at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:89)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:89)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:147)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:277)
at org.sonar.application.es.EsConnectorImpl$MinimalTransportClient.<init>(EsConnectorImpl.java:103)
at org.sonar.application.es.EsConnectorImpl.buildTransportClient(EsConnectorImpl.java:89)
at org.sonar.application.es.EsConnectorImpl.getTransportClient(EsConnectorImpl.java:74)
at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:61)
at org.sonar.application.process.EsManagedProcess.checkStatus(EsManagedProcess.java:88)
at org.sonar.application.process.EsManagedProcess.checkOperational(EsManagedProcess.java:73)
at org.sonar.application.process.EsManagedProcess.isOperational(EsManagedProcess.java:58)
at org.sonar.application.process.ManagedProcessHandler.refreshState(ManagedProcessHandler.java:220)
at org.sonar.application.process.ManagedProcessHandler$EventWatcher.run(ManagedProcessHandler.java:285)
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] maxDirectMemory: 8361345024 bytes (maybe)
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): available
2020.08.10 15:57:07 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Module execution: 55ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] TypeListeners creation: 2ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Scopes creation: 4ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Converters creation: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Binding creation: 3ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Private environment creation: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Injector construction: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Binding initialization: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Binding indexing: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Collecting injection requests: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Binding validation: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Static validation: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Instance member validation: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Provider verification: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Static member injection: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Instance injection: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.i.i.Stopwatch] Preloading singletons: 0ms
2020.08.10 15:57:08 DEBUG app[][o.e.c.t.TransportClientNodesService] node_sampler_interval[5s]
2020.08.10 15:57:08 DEBUG app[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 16
2020.08.10 15:57:08 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2020.08.10 15:57:08 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2020.08.10 15:57:08 DEBUG app[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2020.08.10 15:57:08 DEBUG app[][o.e.c.t.TransportClientNodesService] adding address [{#transport#-1}{1vSBE-wwTS-qqmW57PC4AQ}{127.0.0.1}{127.0.0.1:9001}]
2020.08.10 15:57:08 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1523 (auto-detected)
2020.08.10 15:57:08 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2020.08.10 15:57:08 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2020.08.10 15:57:08 DEBUG app[][i.netty.util.NetUtil] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
2020.08.10 15:57:08 DEBUG app[][i.netty.util.NetUtil] Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
2020.08.10 15:57:08 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 3a:53:cc:ff:fe:74:2a:e8 (auto-detected)
2020.08.10 15:57:08 DEBUG app[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2020.08.10 15:57:08 DEBUG app[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2020.08.10 15:57:08 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2020.08.10 15:57:08 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 16
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 16
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2020.08.10 15:57:08 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2020.08.10 15:57:08 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2020.08.10 15:57:08 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2020.08.10 15:57:08 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2020.08.10 15:57:08 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{1vSBE-wwTS-qqmW57PC4AQ}{127.0.0.1}{127.0.0.1:9001}], ignoring...
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:9001] connect_exception
at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1309)
at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:100)
at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(Unknown Source)
at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57)
at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$new$1(Netty4TcpChannel.java:72)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:327)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:343)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9001
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
... 6 common frames omitted
Caused by: java.net.ConnectException: Connection refused
... 10 common frames omitted
2020.08.10 15:57:08 DEBUG app[][o.s.a.e.EsConnectorImpl] Connected to Elasticsearch node: [127.0.0.1:9001]
2020.08.10 15:57:08 DEBUG es[][o.e.b.SystemCallFilter] Linux seccomp filter installation successful, threads: [all]
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] java.class.path: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar:/opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar:/opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar:/opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar:/opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar:/opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar:/opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar:/opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar:/opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar:/opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar:/opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar:/opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar:/opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar:/opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar:/opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar:/opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar:/opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] sun.boot.class.path: null
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:08 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.c.n.IfConfig] configuration:
lo
inet 127.0.0.1 netmask:255.0.0.0 scope:host
inet6 ::1 prefixlen:128 scope:host
UP LOOPBACK mtu:65536 index:1
tap0
inet 10.0.2.100 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
inet6 fe80::3853:ccff:fe74:2ae8 prefixlen:64 scope:link
hardware 3A:53:CC:74:2A:E8
UP mtu:65520 index:2
2020.08.10 15:57:09 DEBUG es[][o.e.e.NodeEnvironment] using node location [[NodePath{path=/opt/sonarqube/data/es6/nodes/0, indicesPath=/opt/sonarqube/data/es6/nodes/0/indices, fileStore=/ (fuse-overlayfs), majorDeviceNumber=0, minorDeviceNumber=54}]], local_lock_id [0]
2020.08.10 15:57:09 DEBUG es[][o.e.e.NodeEnvironment] node data locations details:
-> /opt/sonarqube/data/es6/nodes/0, free_space [127.6gb], usable_space [108.1gb], total_space [382.5gb], mount [/ (fuse-overlayfs)], type [fuse.fuse-overlayfs]
2020.08.10 15:57:09 INFO es[][o.e.e.NodeEnvironment] heap size [494.9mb], compressed ordinary object pointers [true]
2020.08.10 15:57:09 INFO es[][o.e.n.Node] node name [sonarqube], node ID [QHs-cc0wSs-6RCIxzXJNHQ]
2020.08.10 15:57:09 INFO es[][o.e.n.Node] version[6.8.4], pid[1556], build[default/tar/bca0c8d/2019-10-16T06:19:49.319352Z], OS[Linux/5.7.11-100.fc31.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/11.0.6/11.0.6+10]
2020.08.10 15:57:09 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Xmx512m, -Xms512m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar]
2020.08.10 15:57:09 DEBUG es[][o.e.n.Node] using config [/opt/sonarqube/temp/conf/es], data [[/opt/sonarqube/data/es6]], logs [/opt/sonarqube/logs], plugins [/opt/sonarqube/elasticsearch/plugins]
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/repository-url/repository-url-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/repository-url/repository-url-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/parent-join/parent-join-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/parent-join/parent-join-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/percolator/percolator-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/percolator/percolator-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/lang-painless-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/lang-painless-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/commons-logging-1.1.3.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/reindex-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpclient-4.5.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/commons-codec-1.10.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/elasticsearch-rest-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpcore-nio-4.4.5.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/elasticsearch-ssl-config-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpcore-4.4.5.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpasyncclient-4.1.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/reindex-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/elasticsearch-rest-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/elasticsearch-ssl-config-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpcore-4.4.5.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpclient-4.5.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/commons-logging-1.1.3.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/commons-codec-1.10.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpcore-nio-4.4.5.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/reindex/httpasyncclient-4.1.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/lang-painless-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/lang-painless-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/elasticsearch-scripting-painless-spi-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/lang-painless/asm-debug-all-5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/analysis-common/analysis-common-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/analysis-common/analysis-common-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-common-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-handler-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-codec-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-resolver-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/transport-netty4-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-transport-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-buffer-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-common-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-codec-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-resolver-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-handler-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-buffer-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-transport-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/netty-codec-http-4.1.32.Final.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/transport-netty4/transport-netty4-client-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/mapper-extras/mapper-extras-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] java.home: /opt/java/openjdk
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-core-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-backward-codecs-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-yaml-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-extras-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-grouping-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queryparser-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/java-version-checker-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/hppc-0.7.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/plugin-classloader-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-highlighter-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-core-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jts-core-1.15.0.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-cli-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-core-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/snakeyaml-1.17.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-smile-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-memory-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jackson-dataformat-cbor-2.8.11.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/spatial4j-0.7.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/t-digest-3.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/joda-time-2.10.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-join-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jopt-simple-5.0.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-x-content-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/modules/mapper-extras/mapper-extras-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-suggest-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-core-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-analyzers-common-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/log4j-api-2.11.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-sandbox-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/HdrHistogram-2.1.9.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-launchers-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-queries-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/jna-4.5.1.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/elasticsearch-secure-sm-6.8.4.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-misc-7.7.2.jar
2020.08.10 15:57:09 DEBUG es[][o.e.b.JarHell] examining jar: /opt/sonarqube/elasticsearch/lib/lucene-spatial3d-7.7.2.jar
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [mapper-extras]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [reindex]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [repository-url]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2020.08.10 15:57:09 INFO es[][o.e.p.PluginsService] no plugins loaded
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [16], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [listener], size [4], queue size [unbounded]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [index], size [8], queue size [200]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [4], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [4], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search] will adjust queue by [50] when determining automatic queue size
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search], size [13], queue size [1k]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [4], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [16], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [get], size [8], queue size [1k]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [analyze], size [1], queue size [16]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [write], size [8], queue size [200]
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [4], keep alive [5m]
2020.08.10 15:57:09 DEBUG es[][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [sonarqube/search_throttled] will adjust queue by [50] when determining automatic queue size
2020.08.10 15:57:09 DEBUG es[][o.e.t.ThreadPool] created thread pool: name [search_throttled], size [1], queue size [100]
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent0] Java version: 11
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent] maxDirectMemory: 518979584 bytes (maybe)
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /opt/sonarqube/temp (java.io.tmpdir)
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): unavailable
java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:172) [netty-common-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.ConstantPool.<init>(ConstantPool.java:32) [netty-common-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) [netty-common-4.1.32.Final.jar:4.1.32.Final]
at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) [netty-common-4.1.32.Final.jar:4.1.32.Final]
at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:219) [transport-netty4-client-6.8.4.jar:6.8.4]
at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:57) [transport-netty4-client-6.8.4.jar:6.8.4]
at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:89) [elasticsearch-6.8.4.jar:6.8.4]
at java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) [?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source) [?:?]
at java.util.stream.AbstractPipeline.copyInto(Unknown Source) [?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) [?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) [?:?]
at java.util.stream.AbstractPipeline.evaluate(Unknown Source) [?:?]
at java.util.stream.ReferencePipeline.collect(Unknown Source) [?:?]
at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:89) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.node.Node.<init>(Node.java:356) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.node.Node.<init>(Node.java:266) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:212) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:212) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-6.8.4.jar:6.8.4]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) [elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) [elasticsearch-6.8.4.jar:6.8.4]
2020.08.10 15:57:09 DEBUG es[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
2020.08.10 15:57:10 DEBUG es[][o.e.s.ScriptService] using script cache with max_size [100], expire [0s]
2020.08.10 15:57:11 WARN es[][o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2020.08.10 15:57:11 DEBUG es[][o.e.m.j.JvmGcMonitorService] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
2020.08.10 15:57:11 DEBUG es[][o.e.m.o.OsService] using refresh_interval [1s]
2020.08.10 15:57:11 DEBUG es[][o.e.m.p.ProcessService] using refresh_interval [1s]
2020.08.10 15:57:11 DEBUG es[][o.e.m.j.JvmService] using refresh_interval [1s]
2020.08.10 15:57:11 DEBUG es[][o.e.m.f.FsService] using refresh_interval [1s]
2020.08.10 15:57:11 DEBUG es[][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2020.08.10 15:57:11 DEBUG es[][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] using [cluster_concurrent_rebalance] with [2]
2020.08.10 15:57:11 DEBUG es[][o.e.c.r.a.d.ThrottlingAllocationDecider] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
2020.08.10 15:57:11 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:11 DEBUG es[][o.e.i.IndexingMemoryController] using indexing buffer size [49.4mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
2020.08.10 15:57:11 DEBUG es[][o.e.g.GatewayMetaState] took 16ms to load state
2020.08.10 15:57:11 DEBUG es[][o.e.d.z.SettingsBasedHostsProvider] using initial hosts [127.0.0.1, [::1]]
2020.08.10 15:57:11 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and host providers [settings]
2020.08.10 15:57:11 DEBUG es[][o.e.d.z.UnicastZenPing] using concurrent_connects [10], resolve_timeout [5s]
2020.08.10 15:57:11 DEBUG es[][o.e.d.z.ElectMasterService] using minimum_master_nodes [1]
2020.08.10 15:57:11 DEBUG es[][o.e.d.z.ZenDiscovery] using ping_timeout [3s], join.timeout [1m], master_election.ignore_non_master [false]
2020.08.10 15:57:11 DEBUG es[][o.e.d.z.MasterFaultDetection] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2020.08.10 15:57:11 DEBUG es[][o.e.d.z.NodesFaultDetection] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2020.08.10 15:57:11 DEBUG es[][o.e.i.r.RecoverySettings] using max_bytes_per_sec[40mb]
2020.08.10 15:57:11 INFO es[][o.e.n.Node] initialized
2020.08.10 15:57:11 INFO es[][o.e.n.Node] starting ...
2020.08.10 15:57:11 DEBUG es[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 16
2020.08.10 15:57:11 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: true
2020.08.10 15:57:11 DEBUG es[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2020.08.10 15:57:11 DEBUG es[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: unavailable
2020.08.10 15:57:11 DEBUG es[][o.e.t.n.Netty4Transport] using profile[default], worker_count[16], port[9001], bind_host[[127.0.0.1]], publish_host[[127.0.0.1]], receive_predictor[64kb->64kb]
2020.08.10 15:57:11 DEBUG es[][o.e.t.TcpTransport] binding server bootstrap to: [127.0.0.1]
2020.08.10 15:57:11 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1556 (auto-detected)
2020.08.10 15:57:11 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv4Stack: false
2020.08.10 15:57:11 DEBUG es[][i.n.u.NetUtil] -Djava.net.preferIPv6Addresses: false
2020.08.10 15:57:11 DEBUG es[][i.n.u.NetUtil] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
2020.08.10 15:57:11 DEBUG es[][i.n.u.NetUtil] Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
2020.08.10 15:57:11 DEBUG es[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 3a:53:cc:ff:fe:74:2a:e8 (auto-detected)
2020.08.10 15:57:11 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2020.08.10 15:57:11 DEBUG es[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2020.08.10 15:57:11 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2020.08.10 15:57:11 DEBUG es[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 5
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 5
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2020.08.10 15:57:12 DEBUG es[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2020.08.10 15:57:12 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2020.08.10 15:57:12 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2020.08.10 15:57:12 DEBUG es[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2020.08.10 15:57:12 DEBUG es[][o.e.t.TcpTransport] Bound profile [default] to address {127.0.0.1:9001}
2020.08.10 15:57:12 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2020.08.10 15:57:12 WARN es[][o.e.b.BootstrapChecks] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
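A note on the WARN above: SonarQube's embedded Elasticsearch requires `vm.max_map_count` of at least 262144 on the Linux host (65530, as reported here, is the kernel default), and this bootstrap check will eventually block startup in production mode. A minimal pre-flight sketch — the `check_map_count` helper name is illustrative, and the check must run on the host, not inside a container:

```shell
# Pre-flight check for the Elasticsearch mmap bootstrap check.
# 262144 is the minimum Elasticsearch documents; 65530 (seen in the
# log above) is the Linux default.
check_map_count() {
    if [ "$1" -ge 262144 ]; then
        echo "vm.max_map_count=$1 is sufficient"
    else
        echo "vm.max_map_count=$1 is too low; fix with: sysctl -w vm.max_map_count=262144"
    fi
}

# Read the live value from procfs and evaluate it.
check_map_count "$(cat /proc/sys/vm/max_map_count)"
```

To make the fix permanent, the setting also needs an entry in `/etc/sysctl.conf` (or a drop-in under `/etc/sysctl.d/`) so it survives reboots.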
2020.08.10 15:57:12 DEBUG es[][o.e.n.Node] waiting to join the cluster. timeout [30s]
2020.08.10 15:57:13 DEBUG app[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2020.08.10 15:57:13 DEBUG app[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2020.08.10 15:57:13 DEBUG app[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@4cdbdcd7
2020.08.10 15:57:13 DEBUG app[][i.n.util.Recycler] -Dio.netty.recycler.maxCapacityPerThread: 4096
2020.08.10 15:57:13 DEBUG app[][i.n.util.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: 2
2020.08.10 15:57:13 DEBUG app[][i.n.util.Recycler] -Dio.netty.recycler.linkCapacity: 16
2020.08.10 15:57:13 DEBUG app[][i.n.util.Recycler] -Dio.netty.recycler.ratio: 8
2020.08.10 15:57:13 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxCapacityPerThread: disabled
2020.08.10 15:57:13 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: disabled
2020.08.10 15:57:13 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.linkCapacity: disabled
2020.08.10 15:57:13 DEBUG es[][i.n.u.Recycler] -Dio.netty.recycler.ratio: disabled
2020.08.10 15:57:13 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2020.08.10 15:57:13 DEBUG es[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2020.08.10 15:57:13 DEBUG es[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@489d0c89
2020.08.10 15:57:13 DEBUG app[][o.e.t.ConnectionManager] connected to node [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}]
2020.08.10 15:57:13 DEBUG es[][o.e.a.a.c.h.TransportClusterHealthAction] no known master node, scheduling a retry
2020.08.10 15:57:15 DEBUG es[][o.e.d.z.ZenDiscovery] filtered ping responses: (ignore_non_masters [false])
--> ping_response{node [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}], id[7], master [null],cluster_state_version [-1], cluster_name[sonarqube]}
2020.08.10 15:57:15 DEBUG es[][o.e.d.z.ZenDiscovery] elected as master, waiting for incoming joins ([0] needed)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [zen-disco-elected-as-master ([0] nodes joined)]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [1], source [zen-disco-elected-as-master ([0] nodes joined)]
2020.08.10 15:57:15 INFO es[][o.e.c.s.MasterService] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [1]
2020.08.10 15:57:15 DEBUG es[][o.e.d.z.ZenDiscovery] got first state from fresh master [QHs-cc0wSs-6RCIxzXJNHQ]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [1], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]
2020.08.10 15:57:15 INFO es[][o.e.c.s.ClusterApplierService] new_master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}, reason: apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 1
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 1
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 1
2020.08.10 15:57:15 INFO es[][o.e.n.Node] started
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])]: took [11ms] done applying updated cluster state (version: 1, uuid: fUYr2GjjSs2JGeH7nf4INQ)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [zen-disco-elected-as-master ([0] nodes joined)]: took [63ms] done publishing updated cluster state (version: 1, uuid: fUYr2GjjSs2JGeH7nf4INQ)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [update snapshot state after node removal]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [update snapshot state after node removal]: took [0s] no change in cluster state
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]], shards [5]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/pq-tKEOwTnuKUrGVJdyyXw]], shards [5]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/jdWkT5wLSPu39POUGCCcNQ]], shards [1]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[users/Qwudd26uT7uxMioOkklJMA]], shards [1]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[views/k7sEtQDGQwezRjUEkL1GUw]], shards [5]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[components/F2-zpQErTpiCe98RShd98w]], shards [5]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesQueryCache] using [node] query cache with size [49.4mb] max filter count [10000]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/TMYeofUySm2ICb13cva2zQ]], shards [2]/[0] - reason [metadata verification]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.q.IndexQueryCache] full cache clear, reason [close]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.b.BitsetFilterCache] clearing all bitsets because [close]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [local-gateway-elected-state]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/3]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/4]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/3]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [users][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/Qwudd26uT7uxMioOkklJMA/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/Qwudd26uT7uxMioOkklJMA/0]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/1]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/2]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/0]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/1]
2020.08.10 15:57:15 DEBUG es[][o.e.c.r.a.a.BalancedShardsAllocator] skipping rebalance due to in-flight shard/store fetches
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/4]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/2]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/1]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [2], source [local-gateway-elected-state]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [2]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [2], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/0]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 2
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 2
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/2]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/0]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/1]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][1] shard state info found: [primary [true], allocation [[id=4-IDUuoMRDKlSyLPF2dNPw]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][3] shard state info found: [primary [true], allocation [[id=lmO_aW1yTASkBVHDXBZ1jA]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][2] shard state info found: [primary [true], allocation [[id=7IlA2fqDQAiFnvIQdf9DmQ]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][2] shard state info found: [primary [true], allocation [[id=wO5sCNqsSSSpyFKpPFjMWw]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][1] shard state info found: [primary [true], allocation [[id=UERKa8rhQUii0j4LqsjQvg]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][2] shard state info found: [primary [true], allocation [[id=8TBtqpHTTBq6RHgxKJcb5A]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][0] shard state info found: [primary [true], allocation [[id=e8RWPxXiSZqAUBcmXsT0gQ]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [users][0] shard state info found: [primary [true], allocation [[id=q8YW0mGsTGSNxPw3Mg_v4g]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][1] shard state info found: [primary [true], allocation [[id=l4TJiFUCTQqDD-E5oILQ7g]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [views][4] shard state info found: [primary [true], allocation [[id=osJ2fx9US4qbFvcvra6kHw]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][3] shard state info found: [primary [true], allocation [[id=UWAJOFdLRaSh9-q6z-LKaA]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][4] shard state info found: [primary [true], allocation [[id=spxQVGp7QGaJ5jsXI8pq6Q]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [issues][0] shard state info found: [primary [true], allocation [[id=IIjO4YIpQP6Jz-zaEYjzQg]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [rules][0] shard state info found: [primary [true], allocation [[id=TJ1cAin1TOCthOq84wy0_A]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/4]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][1] shard state info found: [primary [true], allocation [[id=Q3yg8WGsShagVwnTZ8ZIJw]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/3]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/2]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/0]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/jdWkT5wLSPu39POUGCCcNQ/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/jdWkT5wLSPu39POUGCCcNQ/0]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][0] shard state info found: [primary [true], allocation [[id=HreUNEnKQki0SDQTjQsowA]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/0]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][3] shard state info found: [primary [true], allocation [[id=Yp9cuTU3Q_u69Ncf3LQ31A]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/1]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/4]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/3]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [projectmeasures][4] shard state info found: [primary [true], allocation [[id=1Yw55tgqRqmpvRLFDRReyg]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][0] shard state info found: [primary [true], allocation [[id=1xGlCS7uS86CO0x8xQZQwQ]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][4] shard state info found: [primary [true], allocation [[id=SXfnnXT0S1aEfFo9EhhlzA]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][2] shard state info found: [primary [true], allocation [[id=Yo_R1BwFRGOugTjCyhLmsA]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][1] shard state info found: [primary [true], allocation [[id=wcCBJNECRZGMs8tiiHccGw]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [components][3] shard state info found: [primary [true], allocation [[id=LCfh5_aNRaGV3kAFapHweQ]]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.TransportNodesListGatewayStartedShards] [metadatas][0] shard state info found: [primary [true], allocation [[id=zAYyrf0fSXOpelrr0yDmnA]]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 2
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [2] source [local-gateway-elected-state]])]: took [56ms] done applying updated cluster state (version: 2, uuid: ISMlZci5Rxyz67FCEWt3zQ)
2020.08.10 15:57:15 INFO es[][o.e.g.GatewayService] recovered [7] indices into cluster_state
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [local-gateway-elected-state]: took [108ms] done publishing updated cluster state (version: 2, uuid: ISMlZci5Rxyz67FCEWt3zQ)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [cluster_reroute(async_shard_fetch)]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][3]: found 1 allocation candidates of [views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[lmO_aW1yTASkBVHDXBZ1jA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][3]: allocating [[views][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][1]: found 1 allocation candidates of [views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[UERKa8rhQUii0j4LqsjQvg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][1]: allocating [[views][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][4]: found 1 allocation candidates of [views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[osJ2fx9US4qbFvcvra6kHw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][4]: allocating [[views][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][0]: found 1 allocation candidates of [views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[e8RWPxXiSZqAUBcmXsT0gQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][0]: allocating [[views][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][2]: found 1 allocation candidates of [views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[8TBtqpHTTBq6RHgxKJcb5A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][2]: throttling allocation [[views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5af9fba4]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/Qwudd26uT7uxMioOkklJMA]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[q8YW0mGsTGSNxPw3Mg_v4g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/Qwudd26uT7uxMioOkklJMA]][0]: throttling allocation [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@68c42fd1]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[IIjO4YIpQP6Jz-zaEYjzQg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@48e1076]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[UWAJOFdLRaSh9-q6z-LKaA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@798fb391]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[4-IDUuoMRDKlSyLPF2dNPw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@556e168]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[7IlA2fqDQAiFnvIQdf9DmQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5982af]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[spxQVGp7QGaJ5jsXI8pq6Q]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@daefcd6]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[l4TJiFUCTQqDD-E5oILQ7g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6621ce14]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[TJ1cAin1TOCthOq84wy0_A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3baaf557]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@403e9424]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3707cb7c]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6d785206]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@54ee2d4e]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5bc10afa]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4ad9f6a9]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3d390389]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@23a8dd33]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5fb8175a]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7baddfec]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[fetching_shard_data]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4968afc0]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [3], source [cluster_reroute(async_shard_fetch)]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [3]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [3] source [cluster_reroute(async_shard_fetch)]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [3], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [3] source [cluster_reroute(async_shard_fetch)]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 3
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 3
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[views/k7sEtQDGQwezRjUEkL1GUw]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[views/k7sEtQDGQwezRjUEkL1GUw]], shards [5]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[views/k7sEtQDGQwezRjUEkL1GUw]] added mapping [view], source [{"view":{"dynamic":"false","properties":{"projects":{"type":"keyword"},"uuid":{"type":"keyword"}}}}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][4] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][4] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/4, shard=[views][4]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [views][4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][1] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][1] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/1, shard=[views][1]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [views][1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][3] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][3] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/3, shard=[views][3]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [views][3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/0, shard=[views][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [views][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 3
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [3] source [cluster_reroute(async_shard_fetch)]])]: took [63ms] done applying updated cluster state (version: 3, uuid: r9pCtWwLQB64b3k6cULRAw)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [cluster_reroute(async_shard_fetch)]: took [85ms] done publishing updated cluster state (version: 3, uuid: r9pCtWwLQB64b3k6cULRAw)
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=O_EToECtTjacxJFLMIhDZQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=4yoxk5LGRcW1ymIE8IKadg, translog_generation=5, translog_uuid=37_cno63QmKUXckTa0Or0A}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=ypNGqyu2QQasf6OmjeuJQg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=dqgVtQt5S6uhDSMuv-VHvg, translog_generation=5, translog_uuid=_R4BoWAPTGSG0z7PEa39Rw}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=ny0JkgTHSvGVdHYhHFPFfQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=8_RuMw8bTw2e8lZql4LD9w, translog_generation=5, translog_uuid=B522LN5pS9-lkymsBKcUdQ}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=IyhVWaUGQ32Poej0Z1MfcA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=jBtQ8hQcQN64CcoOZ676ZA, translog_generation=5, translog_uuid=vMhzttMjTJmi3EUC0kuiZQ}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=ypNGqyu2QQasf6OmjeuJQg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=dqgVtQt5S6uhDSMuv-VHvg, translog_generation=5, translog_uuid=_R4BoWAPTGSG0z7PEa39Rw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=ypNGqyu2QQasf6OmjeuJQg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=dqgVtQt5S6uhDSMuv-VHvg, translog_generation=5, translog_uuid=_R4BoWAPTGSG0z7PEa39Rw}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=O_EToECtTjacxJFLMIhDZQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=4yoxk5LGRcW1ymIE8IKadg, translog_generation=5, translog_uuid=37_cno63QmKUXckTa0Or0A}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=O_EToECtTjacxJFLMIhDZQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=4yoxk5LGRcW1ymIE8IKadg, translog_generation=5, translog_uuid=37_cno63QmKUXckTa0Or0A}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=IyhVWaUGQ32Poej0Z1MfcA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=jBtQ8hQcQN64CcoOZ676ZA, translog_generation=5, translog_uuid=vMhzttMjTJmi3EUC0kuiZQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=IyhVWaUGQ32Poej0Z1MfcA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=jBtQ8hQcQN64CcoOZ676ZA, translog_generation=5, translog_uuid=vMhzttMjTJmi3EUC0kuiZQ}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=ny0JkgTHSvGVdHYhHFPFfQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=8_RuMw8bTw2e8lZql4LD9w, translog_generation=5, translog_uuid=B522LN5pS9-lkymsBKcUdQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=ny0JkgTHSvGVdHYhHFPFfQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=8_RuMw8bTw2e8lZql4LD9w, translog_generation=5, translog_uuid=B522LN5pS9-lkymsBKcUdQ}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [101ms]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [123ms]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [96ms]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [91ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] received shard started for [StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][0] starting shard [views][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=e8RWPxXiSZqAUBcmXsT0gQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][2]: found 1 allocation candidates of [views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[8TBtqpHTTBq6RHgxKJcb5A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[views/k7sEtQDGQwezRjUEkL1GUw]][2]: allocating [[views][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/Qwudd26uT7uxMioOkklJMA]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[q8YW0mGsTGSNxPw3Mg_v4g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/Qwudd26uT7uxMioOkklJMA]][0]: throttling allocation [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@374eb985]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7IlA2fqDQAiFnvIQdf9DmQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@31540290]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[spxQVGp7QGaJ5jsXI8pq6Q]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][4]: throttling allocation [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@65149a40]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IIjO4YIpQP6Jz-zaEYjzQg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][0]: throttling allocation [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@664c8681]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[4-IDUuoMRDKlSyLPF2dNPw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@24f10e7f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[UWAJOFdLRaSh9-q6z-LKaA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3b586f70]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[l4TJiFUCTQqDD-E5oILQ7g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4027811e]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[TJ1cAin1TOCthOq84wy0_A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@15815e33]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4ab8857f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6c4c2189]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@a3c727e]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@68f5e42f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@9fd52ee]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6dfebc46]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@356f9ba7]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@684ed2d0]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@417e858b]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7db0f58f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6c5a3aaa]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [4], source [shard-started StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [4]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [4] source [shard-started StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [4], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [4] source [shard-started StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 4
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 4
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [views][2] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [views][2] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/k7sEtQDGQwezRjUEkL1GUw/2, shard=[views][2]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [views][2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] received shard started for [StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] received shard started for [StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] received shard started for [StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=r6oBL3M3Q_mJoadJj2HlLA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=wlvGEZOIRPWzr5ZSMPkHjA, translog_generation=5, translog_uuid=IyixYtmWQY2pdLE5CsmNOA}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 4
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [4] source [shard-started StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [15ms] done applying updated cluster state (version: 4, uuid: fmH_ymYJS0idR_0U3hXehw)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][0]], allocationId [e8RWPxXiSZqAUBcmXsT0gQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [29ms] done publishing updated cluster state (version: 4, uuid: fmH_ymYJS0idR_0U3hXehw)
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][3] starting shard [views][3], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=lmO_aW1yTASkBVHDXBZ1jA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][1] starting shard [views][1], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=UERKa8rhQUii0j4LqsjQvg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][4] starting shard [views][4], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=osJ2fx9US4qbFvcvra6kHw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[fetching_shard_data]] (shard started task: [StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/Qwudd26uT7uxMioOkklJMA]][0]: found 1 allocation candidates of [users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[q8YW0mGsTGSNxPw3Mg_v4g]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[users/Qwudd26uT7uxMioOkklJMA]][0]: allocating [[users][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][0]: found 1 allocation candidates of [issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[IIjO4YIpQP6Jz-zaEYjzQg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][0]: allocating [[issues][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][4]: found 1 allocation candidates of [issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[spxQVGp7QGaJ5jsXI8pq6Q]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][4]: allocating [[issues][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7IlA2fqDQAiFnvIQdf9DmQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1491712f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[4-IDUuoMRDKlSyLPF2dNPw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7e577310]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[UWAJOFdLRaSh9-q6z-LKaA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: throttling allocation [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2f109566]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[TJ1cAin1TOCthOq84wy0_A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4090a3a8]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[l4TJiFUCTQqDD-E5oILQ7g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6789bb3a]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@73757570]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5874d808]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5e7e47dd]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@25d7a044]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4c629f4e]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@63384360]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2884cd87]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5aee454a]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=r6oBL3M3Q_mJoadJj2HlLA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=wlvGEZOIRPWzr5ZSMPkHjA, translog_generation=5, translog_uuid=IyixYtmWQY2pdLE5CsmNOA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=r6oBL3M3Q_mJoadJj2HlLA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=wlvGEZOIRPWzr5ZSMPkHjA, translog_generation=5, translog_uuid=IyixYtmWQY2pdLE5CsmNOA}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1b87364]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@337217e1]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@dce5e27]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [27ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [5], source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [5]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [5] source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [5], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [5] source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 5
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 5
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[issues/pq-tKEOwTnuKUrGVJdyyXw]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[issues/pq-tKEOwTnuKUrGVJdyyXw]], shards [5]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[issues/pq-tKEOwTnuKUrGVJdyyXw]] added mapping [auth] (source suppressed due to length, use TRACE level if needed)
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[users/Qwudd26uT7uxMioOkklJMA]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[users/Qwudd26uT7uxMioOkklJMA]], shards [1]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[users/Qwudd26uT7uxMioOkklJMA]] added mapping [user], source [{"user":{"dynamic":"false","properties":{"active":{"type":"boolean"},"email":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true},"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"login":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"name":{"type":"keyword","fields":{"user_search_grams_analyzer":{"type":"text","norms":false,"analyzer":"user_index_grams_analyzer","search_analyzer":"user_search_grams_analyzer"}}},"organizationUuids":{"type":"keyword"},"scmAccounts":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"uuid":{"type":"keyword"}}}}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][4] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][4] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/4, shard=[issues][4]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/0, shard=[issues][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [users][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [users][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/Qwudd26uT7uxMioOkklJMA/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/Qwudd26uT7uxMioOkklJMA/0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [users][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/Qwudd26uT7uxMioOkklJMA/0, shard=[users][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [users][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=OrLCSBIvQOyZyEW9_1abiQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=unBSce4MSp6vruVkoMU_7w, translog_generation=5, translog_uuid=FHS59ZrISC-tvyQCt3-PgQ}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] received shard started for [StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=a70MSB7TQnqptB8iW75zFg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=qOi_mCYOTJ2CndcyCuHpzA, translog_generation=5, translog_uuid=5jN5TMG4Tf6S4yUT0V6KZg}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=NoZJaIYDSjej906VEdQcIQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=nHecIl_lSXKIfb6tvBEEIw, translog_generation=5, translog_uuid=xBBlHyiRQAKoz-RFzS3ENQ}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=OrLCSBIvQOyZyEW9_1abiQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=unBSce4MSp6vruVkoMU_7w, translog_generation=5, translog_uuid=FHS59ZrISC-tvyQCt3-PgQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=OrLCSBIvQOyZyEW9_1abiQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=unBSce4MSp6vruVkoMU_7w, translog_generation=5, translog_uuid=FHS59ZrISC-tvyQCt3-PgQ}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 5
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: took [63ms] done publishing updated cluster state (version: 5, uuid: mjyKJow8QOm5duKTN-XFyg)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [5] source [shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][4]], allocationId [osJ2fx9US4qbFvcvra6kHw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][1]], allocationId [UERKa8rhQUii0j4LqsjQvg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][3]], allocationId [lmO_aW1yTASkBVHDXBZ1jA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: took [49ms] done applying updated cluster state (version: 5, uuid: mjyKJow8QOm5duKTN-XFyg)
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [views][2] starting shard [views][2], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=8TBtqpHTTBq6RHgxKJcb5A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.299Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: found 1 allocation candidates of [issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[UWAJOFdLRaSh9-q6z-LKaA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][3]: allocating [[issues][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[4-IDUuoMRDKlSyLPF2dNPw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: throttling allocation [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@a9bccfd]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7IlA2fqDQAiFnvIQdf9DmQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: throttling allocation [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@76fdcfae]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=a70MSB7TQnqptB8iW75zFg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=qOi_mCYOTJ2CndcyCuHpzA, translog_generation=5, translog_uuid=5jN5TMG4Tf6S4yUT0V6KZg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=a70MSB7TQnqptB8iW75zFg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=qOi_mCYOTJ2CndcyCuHpzA, translog_generation=5, translog_uuid=5jN5TMG4Tf6S4yUT0V6KZg}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[l4TJiFUCTQqDD-E5oILQ7g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@59b0ec6f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[TJ1cAin1TOCthOq84wy0_A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: throttling allocation [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@213ccf68]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7523c820]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2c230245]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@188ea28d]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=NoZJaIYDSjej906VEdQcIQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=nHecIl_lSXKIfb6tvBEEIw, translog_generation=5, translog_uuid=xBBlHyiRQAKoz-RFzS3ENQ}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=NoZJaIYDSjej906VEdQcIQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=nHecIl_lSXKIfb6tvBEEIw, translog_generation=5, translog_uuid=xBBlHyiRQAKoz-RFzS3ENQ}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@59e35dfe]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7f689ded]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [44ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [40ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@63c7abcc]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@347723ab]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [36ms]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@a2cffcf]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] received shard started for [StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@52d7adf6]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@36b0275f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3b32099]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [6], source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [6]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [6] source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [6], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [6] source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 6
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 6
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] received shard started for [StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] received shard started for [StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][3] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][3] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/3, shard=[issues][3]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] received shard started for [StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 6
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [6] source [shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [15ms] done applying updated cluster state (version: 6, uuid: 7zDhHJbeTRusjtVH9X-INg)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[views][2]], allocationId [8TBtqpHTTBq6RHgxKJcb5A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [31ms] done publishing updated cluster state (version: 6, uuid: 7zDhHJbeTRusjtVH9X-INg)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=_XUA4z2iRU6uedpnb6ICww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=LIbavqFSSPO2qtS9gq0XBA, translog_generation=5, translog_uuid=cDpKzZg1TEaKEWEOo9eWLA}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][4] starting shard [issues][4], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=spxQVGp7QGaJ5jsXI8pq6Q], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][0] starting shard [issues][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=IIjO4YIpQP6Jz-zaEYjzQg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [users][0] starting shard [users][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=q8YW0mGsTGSNxPw3Mg_v4g], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: found 1 allocation candidates of [issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[4-IDUuoMRDKlSyLPF2dNPw]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][1]: allocating [[issues][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: found 1 allocation candidates of [issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[7IlA2fqDQAiFnvIQdf9DmQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[issues/pq-tKEOwTnuKUrGVJdyyXw]][2]: allocating [[issues][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: found 1 allocation candidates of [rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[TJ1cAin1TOCthOq84wy0_A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][0]: allocating [[rules][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[l4TJiFUCTQqDD-E5oILQ7g]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: throttling allocation [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@46acf150]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1b43516f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@6bb90dfd]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4db33116]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@565c3c9d]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@181d8021]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@120f4afa]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3bd1ddbe]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7755b1f1]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@47e74e65]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2935d552]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@54bf4d7f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=_XUA4z2iRU6uedpnb6ICww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=LIbavqFSSPO2qtS9gq0XBA, translog_generation=5, translog_uuid=cDpKzZg1TEaKEWEOo9eWLA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=_XUA4z2iRU6uedpnb6ICww, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=LIbavqFSSPO2qtS9gq0XBA, translog_generation=5, translog_uuid=cDpKzZg1TEaKEWEOo9eWLA}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [7], source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [7]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [7] source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [7], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [7] source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 7
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 7
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[rules/TMYeofUySm2ICb13cva2zQ]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [26ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[rules/TMYeofUySm2ICb13cva2zQ]], shards [2]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[rules/TMYeofUySm2ICb13cva2zQ]] added mapping [rule], source [{"rule":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"activeRule_inheritance":{"type":"keyword"},"activeRule_ruleProfile":{"type":"keyword"},"activeRule_severity":{"type":"keyword"},"activeRule_uuid":{"type":"keyword"},"createdAt":{"type":"long"},"cwe":{"type":"keyword"},"htmlDesc":{"type":"keyword","index":false,"doc_values":false,"fields":{"english_html_analyzer":{"type":"text","norms":false,"analyzer":"english_html_analyzer"}}},"indexType":{"type":"keyword","doc_values":false},"internalKey":{"type":"keyword","index":false},"isExternal":{"type":"boolean"},"isTemplate":{"type":"boolean"},"join_rules":{"type":"join","eager_global_ordinals":true,"relations":{"rule":["activeRule","ruleExtension"]}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"lang":{"type":"keyword"},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"owaspTop10":{"type":"keyword"},"repo":{"type":"keyword","norms":true},"ruleExt_scope":{"type":"keyword"},"ruleExt_tags":{"type":"keyword","norms":true},"ruleKey":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"ruleUuid":{"type":"keyword"},"sansTop25":{"type":"keyword"},"severity":{"type":"keyword"},"sonarsourceSecurity":{"type":"keyword"},"status":{"type":"keyword"},"templateKey":{"type":"keyword"},"type":{"type":"keyword"},"updatedAt":{"type":"long"}}}}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][2] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][2] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/2, shard=[issues][2]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [issues][1] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [issues][1] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/pq-tKEOwTnuKUrGVJdyyXw/1, shard=[issues][1]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [issues][1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] received shard started for [StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=weudsSRpT8-sHKnCxxZ6jg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=GbajxAz0Qj-J-X9XGqcnkA, translog_generation=5, translog_uuid=etHtpWg0R36Ib1-9N0WXUg}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [rules][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [rules][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/0, shard=[rules][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=NXj0fLjnRn-91nzf3bTFOQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=bj4TtnyZSSywMc65jGgjxA, translog_generation=5, translog_uuid=tl19IfFCRG6vRcVhxXqG4A}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=FwFpckYHTvCYVoAHVJVumw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=wF_t0peIQLi73qQVYDfuNA, translog_generation=5, translog_uuid=sf1ryNMdT4e0yPGz3IeacA}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 7
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=weudsSRpT8-sHKnCxxZ6jg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=GbajxAz0Qj-J-X9XGqcnkA, translog_generation=5, translog_uuid=etHtpWg0R36Ib1-9N0WXUg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=weudsSRpT8-sHKnCxxZ6jg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=GbajxAz0Qj-J-X9XGqcnkA, translog_generation=5, translog_uuid=etHtpWg0R36Ib1-9N0WXUg}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [7] source [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message 
[after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: took [43ms] done applying updated cluster state (version: 7, uuid: j-G3PbsoRtmjute0Zzr5TQ)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][0]], allocationId [IIjO4YIpQP6Jz-zaEYjzQg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[users][0]], allocationId [q8YW0mGsTGSNxPw3Mg_v4g], primary term [5], message [after existing store 
recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][4]], allocationId [spxQVGp7QGaJ5jsXI8pq6Q], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: took [55ms] done publishing updated cluster state (version: 7, uuid: j-G3PbsoRtmjute0Zzr5TQ)
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][3] starting shard [issues][3], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=UWAJOFdLRaSh9-q6z-LKaA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: found 1 allocation candidates of [rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[l4TJiFUCTQqDD-E5oILQ7g]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[rules/TMYeofUySm2ICb13cva2zQ]][1]: allocating [[rules][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [40ms]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: throttling allocation [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@15173974]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@a212da0]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: throttling allocation [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@569a3323]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: throttling allocation [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4b94f25]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@2ab2a57e]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@528e0e32]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@74c1a97c]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@4fd236b3]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@46d7c997]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=NXj0fLjnRn-91nzf3bTFOQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=bj4TtnyZSSywMc65jGgjxA, translog_generation=5, translog_uuid=tl19IfFCRG6vRcVhxXqG4A}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=NXj0fLjnRn-91nzf3bTFOQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=bj4TtnyZSSywMc65jGgjxA, translog_generation=5, translog_uuid=tl19IfFCRG6vRcVhxXqG4A}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@f23f277]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@68116800]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [8], source [shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [8]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [8] source [shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [8], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [8] source [shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 8
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 8
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] received shard started for [StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=FwFpckYHTvCYVoAHVJVumw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=wF_t0peIQLi73qQVYDfuNA, translog_generation=5, translog_uuid=sf1ryNMdT4e0yPGz3IeacA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=FwFpckYHTvCYVoAHVJVumw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=wF_t0peIQLi73qQVYDfuNA, translog_generation=5, translog_uuid=sf1ryNMdT4e0yPGz3IeacA}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [rules][1] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [49ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] received shard started for [StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [rules][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [rules][1] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/TMYeofUySm2ICb13cva2zQ/1, shard=[rules][1]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [rules][1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [46ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] received shard started for [StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 8
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [8] source [shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [19ms] done applying updated cluster state (version: 8, uuid: W8Yl2UPQTuKN_ggrQmKpLQ)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][3]], allocationId [UWAJOFdLRaSh9-q6z-LKaA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [32ms] done publishing updated cluster state (version: 8, uuid: W8Yl2UPQTuKN_ggrQmKpLQ)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][2] starting shard [issues][2], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=7IlA2fqDQAiFnvIQdf9DmQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=UGgiAw9cSJKk83MfdFf2pA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=dNm-zEwIRYivcoduZpBlbg, translog_generation=5, translog_uuid=kX1ccEbyTmOL3VrmHBKUWw}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [issues][1] starting shard [issues][1], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=4-IDUuoMRDKlSyLPF2dNPw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][0] starting shard [rules][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=TJ1cAin1TOCthOq84wy0_A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: found 1 allocation candidates of [projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yp9cuTU3Q_u69Ncf3LQ31A]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][3]: allocating [[projectmeasures][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: found 1 allocation candidates of [projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wO5sCNqsSSSpyFKpPFjMWw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][2]: allocating [[projectmeasures][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: found 1 allocation candidates of [projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Q3yg8WGsShagVwnTZ8ZIJw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][1]: allocating [[projectmeasures][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: throttling allocation [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7821eb1f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1f62f7d1]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@86c7de3]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7a9b41e3]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7ebe081f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5328a7d9]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7f4d7eb3]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@79e1cde4]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [9], source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [9]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [9] source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [9], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [9] source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 9
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 9
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=UGgiAw9cSJKk83MfdFf2pA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=dNm-zEwIRYivcoduZpBlbg, translog_generation=5, translog_uuid=kX1ccEbyTmOL3VrmHBKUWw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=UGgiAw9cSJKk83MfdFf2pA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=dNm-zEwIRYivcoduZpBlbg, translog_generation=5, translog_uuid=kX1ccEbyTmOL3VrmHBKUWw}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]], shards [5]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [34ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"analysedAt":{"type":"date","format":"date_time||epoch_second"},"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_projectmeasures":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"projectmeasure"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"languages":{"type":"keyword","norms":true},"measures":{"type":"nested","properties":{"key":{"type":"keyword"},"value":{"type":"double"}}},"name":{"type":"keyword","fields":{"search_grams_analyzer":{"type":"text","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"nclocLanguageDistribution":{"type":"nested","properties":{"language":{"type":"keyword"},"ncloc":{"type":"integer"}}},"organizationUuid":{"type":"keyword"},"qualifier":{"type":"keyword"},"qualityGateStatus":{"type":"keyword","norms":true},"tags":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] received shard started for [StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][2] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][2] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/2, shard=[projectmeasures][2]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][1] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][1] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/1, shard=[projectmeasures][1]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][3] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][3] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/3, shard=[projectmeasures][3]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=30EYxFV5RaeExAS3_ukZyg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Dkl4wbIySZihxJ71tvemPA, translog_generation=5, translog_uuid=OUw9vlRRT9O4xep96VOzyw}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=pbTQOzc3Tdi2ABxatmJuLA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=gK4OwKaOSl6R-aE_L_A3aw, translog_generation=5, translog_uuid=q1xOXLrnQpu4t33zquDjlg}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=CcDdl3MWRKuc0pjopGdYzA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=T01By2CFR8il_f9Npcen_Q, translog_generation=5, translog_uuid=OZkWJgWAQea0zAxnTwYfFw}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=30EYxFV5RaeExAS3_ukZyg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Dkl4wbIySZihxJ71tvemPA, translog_generation=5, translog_uuid=OUw9vlRRT9O4xep96VOzyw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=30EYxFV5RaeExAS3_ukZyg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Dkl4wbIySZihxJ71tvemPA, translog_generation=5, translog_uuid=OUw9vlRRT9O4xep96VOzyw}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 9
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [34ms]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [58ms] done publishing updated cluster state (version: 9, uuid: i6ObwIPTSG6x4OYfbUW7KA)
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=pbTQOzc3Tdi2ABxatmJuLA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=gK4OwKaOSl6R-aE_L_A3aw, translog_generation=5, translog_uuid=q1xOXLrnQpu4t33zquDjlg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=pbTQOzc3Tdi2ABxatmJuLA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=gK4OwKaOSl6R-aE_L_A3aw, translog_generation=5, translog_uuid=q1xOXLrnQpu4t33zquDjlg}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [9] source [shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][1]], allocationId [4-IDUuoMRDKlSyLPF2dNPw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[issues][2]], allocationId [7IlA2fqDQAiFnvIQdf9DmQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][0]], allocationId [TJ1cAin1TOCthOq84wy0_A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [48ms] done applying updated cluster state (version: 9, uuid: i6ObwIPTSG6x4OYfbUW7KA)
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [rules][1] starting shard [rules][1], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=l4TJiFUCTQqDD-E5oILQ7g], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: found 1 allocation candidates of [projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[HreUNEnKQki0SDQTjQsowA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][0]: allocating [[projectmeasures][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: throttling allocation [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@62aee291]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: throttling allocation [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@14650cbd]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@28ad14cb]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@30feae5d]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5b80b16f]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: throttling allocation [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@494d93]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1a3f9d5d]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [37ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [10], source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [10]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [10] source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [10], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [10] source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 10
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 10
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=CcDdl3MWRKuc0pjopGdYzA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=T01By2CFR8il_f9Npcen_Q, translog_generation=5, translog_uuid=OZkWJgWAQea0zAxnTwYfFw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=CcDdl3MWRKuc0pjopGdYzA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=T01By2CFR8il_f9Npcen_Q, translog_generation=5, translog_uuid=OZkWJgWAQea0zAxnTwYfFw}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] received shard started for [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] received shard started for [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [38ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/0]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] received shard started for [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/0, shard=[projectmeasures][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 10
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=Ot2GhpMKRdCa7Z1RPTuClg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=MEEkzroHTjGCYcDPgvoW-Q, translog_generation=5, translog_uuid=1hCN4yjBT6qdtaTct_d7RA}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [23ms] done publishing updated cluster state (version: 10, uuid: AMvGnz0dTMiXNaJsoaf_Hg)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [10] source [shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[rules][1]], allocationId [l4TJiFUCTQqDD-E5oILQ7g], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [14ms] done applying updated cluster state (version: 10, uuid: AMvGnz0dTMiXNaJsoaf_Hg)
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][2] starting shard [projectmeasures][2], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=wO5sCNqsSSSpyFKpPFjMWw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][1] starting shard [projectmeasures][1], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=Q3yg8WGsShagVwnTZ8ZIJw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][3] starting shard [projectmeasures][3], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=Yp9cuTU3Q_u69Ncf3LQ31A], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: found 1 allocation candidates of [projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1Yw55tgqRqmpvRLFDRReyg]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[projectmeasures/2rk4DQYeTkCV7UCSyma32Q]][4]: allocating [[projectmeasures][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: found 1 allocation candidates of [components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[SXfnnXT0S1aEfFo9EhhlzA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][4]: allocating [[components][4], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: found 1 allocation candidates of [components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[wcCBJNECRZGMs8tiiHccGw]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][1]: allocating [[components][1], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@72f7767]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@3becbb4]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: throttling allocation [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@28f767a8]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@5a47d9be]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [11], source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [11]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [11] source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [11], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [11] source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 11
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 11
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[components/F2-zpQErTpiCe98RShd98w]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[components/F2-zpQErTpiCe98RShd98w]], shards [5]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=Ot2GhpMKRdCa7Z1RPTuClg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=MEEkzroHTjGCYcDPgvoW-Q, translog_generation=5, translog_uuid=1hCN4yjBT6qdtaTct_d7RA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=Ot2GhpMKRdCa7Z1RPTuClg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=MEEkzroHTjGCYcDPgvoW-Q, translog_generation=5, translog_uuid=1hCN4yjBT6qdtaTct_d7RA}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[components/F2-zpQErTpiCe98RShd98w]] added mapping [auth], source [{"auth":{"dynamic":"false","_routing":{"required":true},"_source":{"enabled":false},"properties":{"auth_allowAnyone":{"type":"boolean"},"auth_groupIds":{"type":"keyword","norms":true},"auth_userIds":{"type":"keyword","norms":true},"indexType":{"type":"keyword","doc_values":false},"join_components":{"type":"join","eager_global_ordinals":true,"relations":{"auth":"component"}},"key":{"type":"keyword","fields":{"sortable_analyzer":{"type":"text","norms":false,"analyzer":"sortable_analyzer","fielddata":true}}},"name":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"fields":{"search_grams_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_grams_analyzer","search_analyzer":"search_grams_analyzer"},"search_prefix_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_analyzer","search_analyzer":"search_prefix_analyzer"},"search_prefix_case_insensitive_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"index_prefix_case_insensitive_analyzer","search_analyzer":"search_prefix_case_insensitive_analyzer"},"sortable_analyzer":{"type":"text","store":true,"term_vector":"with_positions_offsets","norms":false,"analyzer":"sortable_analyzer","fielddata":true}},"fielddata":true},"organization_uuid":{"type":"keyword"},"project_uuid":{"type":"keyword"},"qualifier":{"type":"keyword","norms":true},"uuid":{"type":"keyword"}}}}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [projectmeasures][4] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [projectmeasures][4] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/2rk4DQYeTkCV7UCSyma32Q/4, shard=[projectmeasures][4]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [projectmeasures][4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [37ms]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] received shard started for [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][4] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][4] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/4], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][4] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/4, shard=[components][4]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [components][4]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][1] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=yxj16-DARgKgTH0OSL0zDA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=GhGDVwIRSg6ZpBO7EB4kPw, translog_generation=5, translog_uuid=upRdYZTlS76Po-AkRMtrXg}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][1] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/1], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][1] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/1, shard=[components][1]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [components][1]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=igdfHt0LQhOgmXb1cGgggQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=SvOjcfyyTWOukQ3itP_A0A, translog_generation=5, translog_uuid=HnYv5r5yRVm_AIPmhZQ75A}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=yxj16-DARgKgTH0OSL0zDA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=GhGDVwIRSg6ZpBO7EB4kPw, translog_generation=5, translog_uuid=upRdYZTlS76Po-AkRMtrXg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=yxj16-DARgKgTH0OSL0zDA, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=GhGDVwIRSg6ZpBO7EB4kPw, translog_generation=5, translog_uuid=upRdYZTlS76Po-AkRMtrXg}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=Wf2VOI-ZRcCrBLI7yTVPHw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=89xesUaWRUCMkmclpRFOVA, translog_generation=5, translog_uuid=BXRLt7w4Q_am2X8l2sUsIg}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 11
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [11] source [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]])]: took [39ms] done applying updated cluster state (version: 11, uuid: kza2o3mSQPethQVsN4uXFA)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][3]], allocationId [Yp9cuTU3Q_u69Ncf3LQ31A], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][2]], allocationId [wO5sCNqsSSSpyFKpPFjMWw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][1]], allocationId [Q3yg8WGsShagVwnTZ8ZIJw], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]]: took [51ms] done publishing updated cluster state (version: 11, uuid: kza2o3mSQPethQVsN4uXFA)
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][0] starting shard [projectmeasures][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=HreUNEnKQki0SDQTjQsowA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: found 1 allocation candidates of [components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[LCfh5_aNRaGV3kAFapHweQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][3]: allocating [[components][3], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [30ms]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: throttling allocation [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@21a6ee96]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: throttling allocation [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@7a5e0384]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: throttling allocation [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [[org.elasticsearch.gateway.PrimaryShardAllocator$DecidedNode@1114d058]] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [12], source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [12]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [12], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 12
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 12
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=igdfHt0LQhOgmXb1cGgggQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=SvOjcfyyTWOukQ3itP_A0A, translog_generation=5, translog_uuid=HnYv5r5yRVm_AIPmhZQ75A}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=igdfHt0LQhOgmXb1cGgggQ, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=SvOjcfyyTWOukQ3itP_A0A, translog_generation=5, translog_uuid=HnYv5r5yRVm_AIPmhZQ75A}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=Wf2VOI-ZRcCrBLI7yTVPHw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=89xesUaWRUCMkmclpRFOVA, translog_generation=5, translog_uuid=BXRLt7w4Q_am2X8l2sUsIg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=Wf2VOI-ZRcCrBLI7yTVPHw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=89xesUaWRUCMkmclpRFOVA, translog_generation=5, translog_uuid=BXRLt7w4Q_am2X8l2sUsIg}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [32ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] received shard started for [StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][3] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [30ms]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][3] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/3], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/3]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][3] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/3, shard=[components][3]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [components][3]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] received shard started for [StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] received shard started for [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=jAWDHlG8RhKf8ZFAdVtWEg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=l9vX3pjiTuOzSx0iHScwcA, translog_generation=5, translog_uuid=RALYe6rRRuue9C-Adjw3cg}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 12
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [12] source [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [20ms] done applying updated cluster state (version: 12, uuid: Q7sS22kTS2KaVSHdyqPhpg)
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][0]], allocationId [HreUNEnKQki0SDQTjQsowA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [25ms] done publishing updated cluster state (version: 12, uuid: Q7sS22kTS2KaVSHdyqPhpg)
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [projectmeasures][4] starting shard [projectmeasures][4], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=1Yw55tgqRqmpvRLFDRReyg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][4] starting shard [components][4], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=SXfnnXT0S1aEfFo9EhhlzA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][1] starting shard [components][1], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=wcCBJNECRZGMs8tiiHccGw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: found 1 allocation candidates of [components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[Yo_R1BwFRGOugTjCyhLmsA]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][2]: allocating [[components][2], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: found 1 allocation candidates of [components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[1xGlCS7uS86CO0x8xQZQwQ]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[components/F2-zpQErTpiCe98RShd98w]][0]: allocating [[components][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: found 1 allocation candidates of [metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] based on allocation ids: [[zAYyrf0fSXOpelrr0yDmnA]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.g.G.InternalPrimaryShardAllocator] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]][0]: allocating [[metadatas][0], node[null], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]]] to [{sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}] on primary allocation
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [13], source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [13]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [13], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 13
2020.08.10 15:57:15 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 13
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]] creating index
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndicesService] creating Index [[metadatas/jdWkT5wLSPu39POUGCCcNQ]], shards [1]/[0] - reason [create index]
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=jAWDHlG8RhKf8ZFAdVtWEg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=l9vX3pjiTuOzSx0iHScwcA, translog_generation=5, translog_uuid=RALYe6rRRuue9C-Adjw3cg}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=jAWDHlG8RhKf8ZFAdVtWEg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=l9vX3pjiTuOzSx0iHScwcA, translog_generation=5, translog_uuid=RALYe6rRRuue9C-Adjw3cg}]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] using dynamic[true]
2020.08.10 15:57:15 DEBUG es[][o.e.i.m.MapperService] [[metadatas/jdWkT5wLSPu39POUGCCcNQ]] added mapping [metadata], source [{"metadata":{"dynamic":"false","properties":{"value":{"type":"keyword","index":false,"store":true,"norms":true}}}}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [metadatas][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [24ms]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [metadatas][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/jdWkT5wLSPu39POUGCCcNQ/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/jdWkT5wLSPu39POUGCCcNQ/0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [metadatas][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/jdWkT5wLSPu39POUGCCcNQ/0, shard=[metadatas][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [metadatas][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][2] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][2] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/2], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][2] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/2, shard=[components][2]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [components][2]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=MlhoLCZfT3Gadw3cNLzOXg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Hgp1RQwqSh2ub_h_pcJ4Mg, translog_generation=5, translog_uuid=k4IdYS9jS16yiAG63rOX0A}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] received shard started for [StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.c.IndicesClusterStateService] [components][0] creating shard with primary term [5]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][0] loaded data path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/0], state path [/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] [components][0] creating using an existing path [ShardPath{path=/opt/sonarqube/data/es6/nodes/0/indices/F2-zpQErTpiCe98RShd98w/0, shard=[components][0]}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.IndexService] creating shard_id [components][0]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] store stats are refreshed with refresh_interval [10s]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] state: [CREATED]->[RECOVERING], reason [from store]
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.IndexShard] starting recovery from store ...
2020.08.10 15:57:15 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=woAa8rauR2yNAqlVV9BTyw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=rlW6v80nQj-rxEPQDkvCMQ, translog_generation=5, translog_uuid=r53lDm6HQZ-Dlqmp_kR5lw}]
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:15 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=MlhoLCZfT3Gadw3cNLzOXg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Hgp1RQwqSh2ub_h_pcJ4Mg, translog_generation=5, translog_uuid=k4IdYS9jS16yiAG63rOX0A}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=MlhoLCZfT3Gadw3cNLzOXg, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=Hgp1RQwqSh2ub_h_pcJ4Mg, translog_generation=5, translog_uuid=k4IdYS9jS16yiAG63rOX0A}]}]
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [37ms]
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=5, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.Store] starting index commit [{history_uuid=8n-eVoJBS56JDS5cDmqZXQ, local_checkpoint=17, max_seq_no=17, max_unsafe_auto_id_timestamp=-1, sync_id=A6zj5iEPS7GZ7gwQ-r_k1A, translog_generation=6, translog_uuid=BUeJlmvKQOSTW1BjIZt_tQ}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] received shard started for [StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 13
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [61ms] done publishing updated cluster state (version: 13, uuid: ASeFydR6TWWzOcrL9up2YQ)
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [13] source [shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][1]], allocationId [wcCBJNECRZGMs8tiiHccGw], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][4]], allocationId [SXfnnXT0S1aEfFo9EhhlzA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[projectmeasures][4]], allocationId [1Yw55tgqRqmpvRLFDRReyg], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [57ms] done applying updated cluster state (version: 13, uuid: ASeFydR6TWWzOcrL9up2YQ)
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][3] starting shard [components][3], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=LCfh5_aNRaGV3kAFapHweQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][2] starting shard [components][2], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=Yo_R1BwFRGOugTjCyhLmsA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=woAa8rauR2yNAqlVV9BTyw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=rlW6v80nQj-rxEPQDkvCMQ, translog_generation=5, translog_uuid=r53lDm6HQZ-Dlqmp_kR5lw}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=woAa8rauR2yNAqlVV9BTyw, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, sync_id=rlW6v80nQj-rxEPQDkvCMQ, translog_generation=5, translog_uuid=r53lDm6HQZ-Dlqmp_kR5lw}]}]
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [14], source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [14]
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [14], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 14
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 14
2020.08.10 15:57:16 DEBUG es[][o.e.i.t.Translog] recovered local translog from checkpoint Checkpoint{offset=55, numOps=0, generation=6, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=17, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [36ms]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] received shard started for [StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}]
2020.08.10 15:57:16 DEBUG es[][o.e.i.e.Engine] Safe commit [CommitPoint{segment[segments_4], userData[{history_uuid=8n-eVoJBS56JDS5cDmqZXQ, local_checkpoint=17, max_seq_no=17, max_unsafe_auto_id_timestamp=-1, sync_id=A6zj5iEPS7GZ7gwQ-r_k1A, translog_generation=6, translog_uuid=BUeJlmvKQOSTW1BjIZt_tQ}]}], last commit [CommitPoint{segment[segments_4], userData[{history_uuid=8n-eVoJBS56JDS5cDmqZXQ, local_checkpoint=17, max_seq_no=17, max_unsafe_auto_id_timestamp=-1, sync_id=A6zj5iEPS7GZ7gwQ-r_k1A, translog_generation=6, translog_uuid=BUeJlmvKQOSTW1BjIZt_tQ}]}]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] recovery completed from [shard_store], took [107ms]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] sending [internal:cluster/shard/started] to [QHs-cc0wSs-6RCIxzXJNHQ] for shard entry [StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] received shard started for [StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 14
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [14] source [shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [53ms] done applying updated cluster state (version: 14, uuid: KfvrCk7cSc2dF36lDvLo7w)
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][2]], allocationId [Yo_R1BwFRGOugTjCyhLmsA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][3]], allocationId [LCfh5_aNRaGV3kAFapHweQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [57ms] done publishing updated cluster state (version: 14, uuid: KfvrCk7cSc2dF36lDvLo7w)
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: execute
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [components][0] starting shard [components][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=1xGlCS7uS86CO0x8xQZQwQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:16 DEBUG es[][o.e.c.a.s.ShardStateAction] [metadatas][0] starting shard [metadatas][0], node[QHs-cc0wSs-6RCIxzXJNHQ], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=zAYyrf0fSXOpelrr0yDmnA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-08-10T15:57:15.301Z], delayed=false, allocation_status[deciders_throttled]] (shard started task: [StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}])
2020.08.10 15:57:16 INFO es[][o.e.c.r.a.AllocationService] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[components][0], [metadatas][0]] ...]).
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] cluster state updated, version [15], source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] publishing cluster state version [15]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [15] source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: execute
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] cluster state updated, version [15], source [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [15] source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] applying cluster state version 15
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] apply cluster state with version 15
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:16 DEBUG es[][o.e.i.s.IndexShard] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] set locally applied cluster state to version 15
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.ClusterApplierService] processing [apply cluster state (from master [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [15] source [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]])]: took [9ms] done applying updated cluster state (version: 15, uuid: Q2i1I8pkTOmBLBH_V-STbg)
2020.08.10 15:57:16 DEBUG es[][o.e.c.s.MasterService] processing [shard-started StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[metadatas][0]], allocationId [zAYyrf0fSXOpelrr0yDmnA], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [master {sonarqube}{QHs-cc0wSs-6RCIxzXJNHQ}{NjhnnOTfRc23EV0qSj0xdw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]}], shard-started StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}[StartedShardEntry{shardId [[components][0]], allocationId [1xGlCS7uS86CO0x8xQZQwQ], primary term [5], message [after existing store recovery; bootstrap_history_uuid=false]}]]: took [11ms] done publishing updated cluster state (version: 15, uuid: Q2i1I8pkTOmBLBH_V-STbg)
2020.08.10 15:57:16 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2020.08.10 15:57:16 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] EventWatcher[es] tryToMoveTo es from STARTED to STARTING => false
2020.08.10 15:57:16 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] EventWatcher[es] tryToMoveTo web from INIT to STARTING => true
2020.08.10 15:57:16 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonarqube]: /opt/java/openjdk/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/common/*:/opt/sonarqube/lib/jdbc/h2/h2-1.4.199.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process2087643266092397055properties
2020.08.10 15:57:16 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] EventWatcher[es] tryToMoveTo web from STARTING to STARTED => true
2020.08.10 15:57:16 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2020.08.10 15:57:16 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2020.08.10 15:57:17 INFO web[][o.e.p.PluginsService] no modules loaded
2020.08.10 15:57:17 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2020.08.10 15:57:17 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2020.08.10 15:57:17 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] Java version: 11
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31)
at io.netty.util.internal.PlatformDependent0$4.run(PlatformDependent0.java:224)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:218)
at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:212)
at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:80)
at io.netty.util.ConstantPool.<init>(ConstantPool.java:32)
at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27)
at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27)
at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:219)
at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:57)
at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:89)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:89)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:147)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:277)
at org.sonar.server.es.EsClientProvider$MinimalTransportClient.<init>(EsClientProvider.java:104)
at org.sonar.server.es.EsClientProvider.provide(EsClientProvider.java:71)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at org.picocontainer.injectors.MethodInjector.invokeMethod(MethodInjector.java:129)
at org.picocontainer.injectors.MethodInjector.access$000(MethodInjector.java:39)
at org.picocontainer.injectors.MethodInjector$2.run(MethodInjector.java:113)
at org.picocontainer.injectors.AbstractInjector$ThreadLocalCyclicDependencyGuard.observe(AbstractInjector.java:270)
at org.picocontainer.injectors.MethodInjector.decorateComponentInstance(MethodInjector.java:120)
at org.picocontainer.injectors.CompositeInjector.decorateComponentInstance(CompositeInjector.java:58)
at org.picocontainer.injectors.Reinjector.reinject(Reinjector.java:142)
at org.picocontainer.injectors.ProviderAdapter.getComponentInstance(ProviderAdapter.java:96)
at org.picocontainer.DefaultPicoContainer.getInstance(DefaultPicoContainer.java:699)
at org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:647)
at org.sonar.core.platform.ComponentContainer$ExtendedDefaultPicoContainer.getComponent(ComponentContainer.java:64)
at org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:632)
at org.picocontainer.parameters.BasicComponentParameter$1.resolveInstance(BasicComponentParameter.java:118)
at org.picocontainer.parameters.ComponentParameter$1.resolveInstance(ComponentParameter.java:136)
at org.picocontainer.injectors.SingleMemberInjector.getParameter(SingleMemberInjector.java:78)
at org.picocontainer.injectors.ConstructorInjector$CtorAndAdapters.getParameterArguments(ConstructorInjector.java:309)
at org.picocontainer.injectors.ConstructorInjector$1.run(ConstructorInjector.java:335)
at org.picocontainer.injectors.AbstractInjector$ThreadLocalCyclicDependencyGuard.observe(AbstractInjector.java:270)
at org.picocontainer.injectors.ConstructorInjector.getComponentInstance(ConstructorInjector.java:364)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.getComponentInstance(AbstractInjectionFactory.java:56)
at org.picocontainer.behaviors.AbstractBehavior.getComponentInstance(AbstractBehavior.java:64)
at org.picocontainer.behaviors.Stored.getComponentInstance(Stored.java:91)
at org.picocontainer.DefaultPicoContainer.instantiateComponentAsIsStartable(DefaultPicoContainer.java:1034)
at org.picocontainer.DefaultPicoContainer.addAdapterIfStartable(DefaultPicoContainer.java:1026)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1003)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:136)
at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:90)
at org.sonar.server.platform.platformlevel.PlatformLevel1.start(PlatformLevel1.java:166)
at org.sonar.server.platform.PlatformImpl.start(PlatformImpl.java:213)
at org.sonar.server.platform.PlatformImpl.startLevel1Container(PlatformImpl.java:172)
at org.sonar.server.platform.PlatformImpl.init(PlatformImpl.java:86)
at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:43)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4689)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5155)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1412)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1402)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable
java.lang.IllegalAccessException: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @62379589
at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Unknown Source)
at java.base/java.lang.reflect.AccessibleObject.checkAccess(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at io.netty.util.internal.PlatformDependent0$6.run(PlatformDependent0.java:334)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:325)
at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:212)
at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:80)
at io.netty.util.ConstantPool.<init>(ConstantPool.java:32)
at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27)
at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27)
at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:219)
at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:57)
at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:89)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:89)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:147)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:277)
at org.sonar.server.es.EsClientProvider$MinimalTransportClient.<init>(EsClientProvider.java:104)
at org.sonar.server.es.EsClientProvider.provide(EsClientProvider.java:71)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at org.picocontainer.injectors.MethodInjector.invokeMethod(MethodInjector.java:129)
at org.picocontainer.injectors.MethodInjector.access$000(MethodInjector.java:39)
at org.picocontainer.injectors.MethodInjector$2.run(MethodInjector.java:113)
at org.picocontainer.injectors.AbstractInjector$ThreadLocalCyclicDependencyGuard.observe(AbstractInjector.java:270)
at org.picocontainer.injectors.MethodInjector.decorateComponentInstance(MethodInjector.java:120)
at org.picocontainer.injectors.CompositeInjector.decorateComponentInstance(CompositeInjector.java:58)
at org.picocontainer.injectors.Reinjector.reinject(Reinjector.java:142)
at org.picocontainer.injectors.ProviderAdapter.getComponentInstance(ProviderAdapter.java:96)
at org.picocontainer.DefaultPicoContainer.getInstance(DefaultPicoContainer.java:699)
at org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:647)
at org.sonar.core.platform.ComponentContainer$ExtendedDefaultPicoContainer.getComponent(ComponentContainer.java:64)
at org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:632)
at org.picocontainer.parameters.BasicComponentParameter$1.resolveInstance(BasicComponentParameter.java:118)
at org.picocontainer.parameters.ComponentParameter$1.resolveInstance(ComponentParameter.java:136)
at org.picocontainer.injectors.SingleMemberInjector.getParameter(SingleMemberInjector.java:78)
at org.picocontainer.injectors.ConstructorInjector$CtorAndAdapters.getParameterArguments(ConstructorInjector.java:309)
at org.picocontainer.injectors.ConstructorInjector$1.run(ConstructorInjector.java:335)
at org.picocontainer.injectors.AbstractInjector$ThreadLocalCyclicDependencyGuard.observe(AbstractInjector.java:270)
at org.picocontainer.injectors.ConstructorInjector.getComponentInstance(ConstructorInjector.java:364)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.getComponentInstance(AbstractInjectionFactory.java:56)
at org.picocontainer.behaviors.AbstractBehavior.getComponentInstance(AbstractBehavior.java:64)
at org.picocontainer.behaviors.Stored.getComponentInstance(Stored.java:91)
at org.picocontainer.DefaultPicoContainer.instantiateComponentAsIsStartable(DefaultPicoContainer.java:1034)
at org.picocontainer.DefaultPicoContainer.addAdapterIfStartable(DefaultPicoContainer.java:1026)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1003)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:136)
at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:90)
at org.sonar.server.platform.platformlevel.PlatformLevel1.start(PlatformLevel1.java:166)
at org.sonar.server.platform.PlatformImpl.start(PlatformImpl.java:213)
at org.sonar.server.platform.PlatformImpl.startLevel1Container(PlatformImpl.java:172)
at org.sonar.server.platform.PlatformImpl.init(PlatformImpl.java:86)
at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:43)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4689)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5155)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1412)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1402)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): unavailable
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] maxDirectMemory: 536870912 bytes (maybe)
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /opt/sonarqube/temp (java.io.tmpdir)
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.CleanerJava9] java.nio.ByteBuffer.cleaner(): available
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2020.08.10 15:57:17 DEBUG web[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 16
2020.08.10 15:57:17 DEBUG web[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2020.08.10 15:57:17 DEBUG web[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2020.08.10 15:57:17 DEBUG web[][i.n.c.DefaultChannelId] -Dio.netty.processId: 1707 (auto-detected)
2020.08.10 15:57:17 DEBUG web[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2020.08.10 15:57:17 DEBUG web[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2020.08.10 15:57:17 DEBUG web[][i.netty.util.NetUtil] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
2020.08.10 15:57:17 DEBUG web[][i.netty.util.NetUtil] Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
2020.08.10 15:57:17 DEBUG web[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 3a:53:cc:ff:fe:74:2a:e8 (auto-detected)
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2020.08.10 15:57:17 DEBUG web[][i.n.u.i.InternalThreadLocalMap] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2020.08.10 15:57:17 DEBUG web[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2020.08.10 15:57:17 DEBUG web[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.targetRecords: 4
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 5
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 5
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2020.08.10 15:57:17 DEBUG web[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2020.08.10 15:57:17 DEBUG web[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2020.08.10 15:57:17 DEBUG web[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 0
2020.08.10 15:57:17 DEBUG web[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2020.08.10 15:57:18 DEBUG web[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkAccessible: true
2020.08.10 15:57:18 DEBUG web[][i.n.b.AbstractByteBuf] -Dio.netty.buffer.checkBounds: true
2020.08.10 15:57:18 DEBUG web[][i.n.u.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@6242b244
2020.08.10 15:57:18 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.maxCapacityPerThread: 4096
2020.08.10 15:57:18 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.maxSharedCapacityFactor: 2
2020.08.10 15:57:18 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.linkCapacity: 16
2020.08.10 15:57:18 DEBUG web[][i.n.util.Recycler] -Dio.netty.recycler.ratio: 8
2020.08.10 15:57:18 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2020.08.10 15:57:18 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 8.4.1.35646 / 7267e37dda923d9336125657aa6d0878af14af53
2020.08.10 15:57:18 INFO web[][o.s.s.p.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://127.0.0.1:9092/sonar
2020.08.10 15:57:18 INFO web[][o.s.s.p.d.EmbeddedDatabase] Embedded database started. Data stored in: /opt/sonarqube/data
2020.08.10 15:57:18 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:h2:tcp://127.0.0.1:9092/sonar
2020.08.10 15:57:18 WARN web[][o.s.db.dialect.H2] H2 database should be used for evaluation purpose only.
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2020.08.10 15:57:19 INFO web[][o.s.s.u.SystemPasscodeImpl] System authentication by passcode is disabled
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin C# Code Quality and Security / 8.9.0.19135 / 804f945fb3e4a3534eb903ffec2b6bff24124741
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin Git / 1.12.0.2034 / 8002ffb45020fe70f56ebb22075fc5462f64ba7f
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin JaCoCo / 1.1.0.898 / f65b288e6c2888393bd7fb72ad7ac1425f88eebf
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin Java Code Quality and Security / 6.5.1.22586 / 83734b1bf28e9f7c0cbcb723ee261a3d4acd7ce3
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin PHP Code Quality and Security / 3.5.0.5655 / 01929a7f1f25848f25b6aa60a857a2033bd6dbbc
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin Python Code Quality and Security / 2.13.0.7236 / 474d91318dddaab6c1f8a4108f131bca05ac9238
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarCSS / 1.2.0.1325 / 8dc9fe17b6230c20715d3b4cb34e0b6d02151afd
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarFlex / 2.5.1.1831 / a0c44437f6abb0feec76edd073f91fec64db2a6c
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarGo / 1.6.0.719 / edcc6a9e42fcdd30bb6f84a779c6cd7009ec72fd
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarHTML / 3.2.0.2082 / 997a51b39c4d0a5399c73a8fb729030a69eb392b
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarJS / 6.2.1.12157 / 3444def97744d3b811822b3a4bca74798de3ded1
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarKotlin / 1.5.0.315 / 4ff3a145a58f3f84f1b39846a205a129d742e993
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarRuby / 1.5.0.315 / 4ff3a145a58f3f84f1b39846a205a129d742e993
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarScala / 1.5.0.315 / 4ff3a145a58f3f84f1b39846a205a129d742e993
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarTS / 2.1.0.4359 / 268ba9581b700c4fb2bc194d4069d283da915213
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SonarXML / 2.0.1.2020 / c5b84004face582d56f110e24c29bf9c6a679e69
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin Svn / 1.10.0.1917 / 91ccef5aac1f4dd90a7edc2ee3e677fcf4be72bf
2020.08.10 15:57:19 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin VB.NET Code Quality and Security / 8.9.0.19135 / 804f945fb3e4a3534eb903ffec2b6bff24124741
2020.08.10 15:57:19 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withDSA, Serial:10, Subject:CN=JCE Code Signing CA, OU=Java Software Code Signing, O=Sun Microsystems Inc, L=Palo Alto, ST=CA, C=US, Issuer:CN=JCE Code Signing CA, OU=Java Software Code Signing, O=Sun Microsystems Inc, L=Palo Alto, ST=CA, C=US, Key type:DSA, Length:1024, Cert Id:1776909028, Valid from:4/25/01, 7:00 AM, Valid until:4/25/20, 7:00 AM
2020.08.10 15:57:19 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withDSA, Serial:47f, Subject:CN=Legion of the Bouncy Castle Inc., OU=Java Software Code Signing, O=Sun Microsystems Inc, Issuer:CN=JCE Code Signing CA, OU=Java Software Code Signing, O=Sun Microsystems Inc, L=Palo Alto, ST=CA, C=US, Key type:DSA, Length:1024, Cert Id:-2023852845, Valid from:3/11/17, 1:15 AM, Valid until:4/25/20, 7:00 AM
2020.08.10 15:57:19 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:3c9eb1fc89f733d3, Subject:CN=JCE Code Signing CA, OU=Java Software Code Signing, O=Oracle Corporation, Issuer:CN=JCE Code Signing CA, OU=Java Software Code Signing, O=Oracle Corporation, Key type:RSA, Length:2048, Cert Id:-1250580323, Valid from:7/6/16, 11:48 PM, Valid until:12/31/30, 12:00 AM
2020.08.10 15:57:19 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:4efb7bc62e2b049e, Subject:CN=Legion of the Bouncy Castle Inc., OU=Java Software Code Signing, O=Oracle Corporation, Issuer:CN=JCE Code Signing CA, OU=Java Software Code Signing, O=Oracle Corporation, Key type:DSA, Length:2048, Cert Id:-654023182, Valid from:3/11/17, 1:07 AM, Valid until:3/11/22, 1:07 AM
2020.08.10 15:57:19 DEBUG web[][o.s.c.i.DefaultI18n] Loaded 3017 properties from l10n bundles
2020.08.10 15:57:20 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.platform.web.WebServiceFilter@4f0e807a [pattern=UrlPattern{inclusions=[/api/system/migrate_db.*, ...], exclusions=[/api/components/update_key, ...]}]
2020.08.10 15:57:20 DEBUG web[][o.s.s.a.TomcatAccessLog] Tomcat is started
2020.08.10 15:57:20 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2020.08.10 15:57:20 INFO web[][o.s.s.p.UpdateCenterClient] Update center: https://update.sonarsource.org/update-center.properties (no proxy)
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] Available languages:
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Python => "py"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * CSS => "css"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Kotlin => "kotlin"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Go => "go"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * JavaScript => "js"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * TypeScript => "ts"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Ruby => "ruby"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Scala => "scala"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * C# => "cs"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Java => "java"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * HTML => "web"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * JSP => "jsp"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * XML => "xml"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * Flex => "flex"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * PHP => "php"
2020.08.10 15:57:20 DEBUG web[][o.s.a.r.Languages] * VB.NET => "vbnet"
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:b92f60cc889fa17a4609b85b706c8aaf, Subject:OU=VeriSign Trust Network, OU="(c) 1998 VeriSign, Inc. - For authorized use only", OU=Class 2 Public Primary Certification Authority - G2, O="VeriSign, Inc.", C=US, Issuer:OU=VeriSign Trust Network, OU="(c) 1998 VeriSign, Inc. - For authorized use only", OU=Class 2 Public Primary Certification Authority - G2, O="VeriSign, Inc.", C=US, Key type:RSA, Length:1024, Cert Id:-1971861053, Valid from:5/18/98, 12:00 AM, Valid until:8/1/28, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:ba15afa1ddfa0b54944afcd24a06cec, Subject:CN=DigiCert Assured ID Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Assured ID Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:EC, Length:384, Cert Id:-645537245, Valid from:8/1/13, 12:00 PM, Valid until:1/15/38, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:401ac46421b31321030ebbe4121ac51d, Subject:CN=VeriSign Universal Root Certification Authority, OU="(c) 2008 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Issuer:CN=VeriSign Universal Root Certification Authority, OU="(c) 2008 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:-1976681486, Valid from:4/2/08, 12:00 AM, Valid until:12/1/37, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withRSA, Serial:59b1b579e8e2132e23907bda777755c, Subject:CN=DigiCert Trusted Root G4, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Trusted Root G4, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:RSA, Length:4096, Cert Id:1057369358, Valid from:8/1/13, 12:00 PM, Valid until:1/15/38, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:a7ea6df4b449eda6a24859ee6b815d3167fbbb1, Subject:CN=LuxTrust Global Root 2, O=LuxTrust S.A., C=LU, Issuer:CN=LuxTrust Global Root 2, O=LuxTrust S.A., C=LU, Key type:RSA, Length:4096, Cert Id:-1239330694, Valid from:3/5/15, 1:21 PM, Valid until:3/5/35, 1:21 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:a0142800000014523cf467c00000002, Subject:CN=IdenTrust Public Sector Root CA 1, O=IdenTrust, C=US, Issuer:CN=IdenTrust Public Sector Root CA 1, O=IdenTrust, C=US, Key type:RSA, Length:4096, Cert Id:2123370772, Valid from:1/16/14, 5:53 PM, Valid until:1/16/34, 5:53 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:44be0c8b500024b411d3362de0b35f1b, Subject:CN=UTN-USERFirst-Object, OU=http://www.usertrust.com, O=The USERTRUST Network, L=Salt Lake City, ST=UT, C=US, Issuer:CN=UTN-USERFirst-Object, OU=http://www.usertrust.com, O=The USERTRUST Network, L=Salt Lake City, ST=UT, C=US, Key type:RSA, Length:2048, Cert Id:-297053428, Valid from:7/9/99, 6:31 PM, Valid until:7/9/19, 6:40 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1, Subject:CN=GeoTrust Universal CA, O=GeoTrust Inc., C=US, Issuer:CN=GeoTrust Universal CA, O=GeoTrust Inc., C=US, Key type:RSA, Length:4096, Cert Id:313566089, Valid from:3/4/04, 5:00 AM, Valid until:3/4/29, 5:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:55556bcf25ea43535c3a40fd5ab4572, Subject:CN=DigiCert Global Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Global Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:EC, Length:384, Cert Id:-795968543, Valid from:8/1/13, 12:00 PM, Valid until:1/15/38, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:a68b79290000000050d091f9, Subject:CN=Entrust Root Certification Authority - EC1, OU="(c) 2012 Entrust, Inc. - for authorized use only", OU=See www.entrust.net/legal-terms, O="Entrust, Inc.", C=US, Issuer:CN=Entrust Root Certification Authority - EC1, OU="(c) 2012 Entrust, Inc. - for authorized use only", OU=See www.entrust.net/legal-terms, O="Entrust, Inc.", C=US, Key type:EC, Length:384, Cert Id:924514073, Valid from:12/18/12, 3:25 PM, Valid until:12/18/37, 3:55 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:0, Subject:OU=Security Communication RootCA1, O=SECOM Trust.net, C=JP, Issuer:OU=Security Communication RootCA1, O=SECOM Trust.net, C=JP, Key type:RSA, Length:2048, Cert Id:1802358121, Valid from:9/30/03, 4:20 AM, Valid until:9/30/23, 4:20 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:400000000010f8626e60d, Subject:CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R2, Issuer:CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R2, Key type:RSA, Length:2048, Cert Id:7087067, Valid from:12/15/06, 8:00 AM, Valid until:12/15/21, 8:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:44afb080d6a327ba893039862ef8406b, Subject:CN=DST Root CA X3, O=Digital Signature Trust Co., Issuer:CN=DST Root CA X3, O=Digital Signature Trust Co., Key type:RSA, Length:2048, Cert Id:1007302312, Valid from:9/30/00, 9:12 PM, Valid until:9/30/21, 2:01 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:1f47afaa62007050544c019e9b63992a, Subject:CN=COMODO ECC Certification Authority, O=COMODO CA Limited, L=Salford, ST=Greater Manchester, C=GB, Issuer:CN=COMODO ECC Certification Authority, O=COMODO CA Limited, L=Salford, ST=Greater Manchester, C=GB, Key type:EC, Length:384, Cert Id:1146488932, Valid from:3/6/08, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withRSA, Serial:45e6bb038333c3856548e6ff4551, Subject:CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R6, Issuer:CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R6, Key type:RSA, Length:4096, Cert Id:-506627753, Valid from:12/10/14, 12:00 AM, Valid until:12/10/34, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:3863def8, Subject:CN=Entrust.net Certification Authority (2048), OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.), O=Entrust.net, Issuer:CN=Entrust.net Certification Authority (2048), OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.), O=Entrust.net, Key type:RSA, Length:2048, Cert Id:-328536082, Valid from:12/24/99, 5:50 PM, Valid until:7/24/29, 2:15 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1, Subject:CN=AddTrust External CA Root, OU=AddTrust External TTP Network, O=AddTrust AB, C=SE, Issuer:CN=AddTrust External CA Root, OU=AddTrust External TTP Network, O=AddTrust AB, C=SE, Key type:RSA, Length:2048, Cert Id:-326352672, Valid from:5/30/00, 10:48 AM, Valid until:5/30/20, 10:48 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withECDSA, Serial:2a38a41c960a04de42b228a50be8349802, Subject:CN=GlobalSign, O=GlobalSign, OU=GlobalSign ECC Root CA - R4, Issuer:CN=GlobalSign, O=GlobalSign, OU=GlobalSign ECC Root CA - R4, Key type:EC, Length:256, Cert Id:-1923273545, Valid from:11/13/12, 12:00 AM, Valid until:1/19/38, 3:14 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withRSA, Serial:1fd6d30fca3ca51a81bbc640e35032d, Subject:CN=USERTrust RSA Certification Authority, O=The USERTRUST Network, L=Jersey City, ST=New Jersey, C=US, Issuer:CN=USERTrust RSA Certification Authority, O=The USERTRUST Network, L=Jersey City, ST=New Jersey, C=US, Key type:RSA, Length:4096, Cert Id:-347365895, Valid from:2/1/10, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:ce7e0e517d846fe8fe560fc1bf03039, Subject:CN=DigiCert Assured ID Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Assured ID Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:RSA, Length:2048, Cert Id:-860404528, Valid from:11/10/06, 12:00 AM, Valid until:11/10/31, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:33af1e6a711a9a0bb2864b11d09fae5, Subject:CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:RSA, Length:2048, Cert Id:1136084297, Valid from:8/1/13, 12:00 PM, Valid until:1/15/38, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:570a119742c4e3cc, Subject:CN=Actalis Authentication Root CA, O=Actalis S.p.A./03358520967, L=Milan, C=IT, Issuer:CN=Actalis Authentication Root CA, O=Actalis S.p.A./03358520967, L=Milan, C=IT, Key type:RSA, Length:4096, Cert Id:1729119956, Valid from:9/22/11, 11:22 AM, Valid until:9/22/30, 11:22 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:b931c3ad63967ea6723bfc3af9af44b, Subject:CN=DigiCert Assured ID Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Assured ID Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:RSA, Length:2048, Cert Id:-385398383, Valid from:8/1/13, 12:00 PM, Valid until:1/15/38, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:66c9fd7c1bb104c2943e5717b7b2cc81ac10e, Subject:CN=Amazon Root CA 4, O=Amazon, C=US, Issuer:CN=Amazon Root CA 4, O=Amazon, C=US, Key type:EC, Length:384, Cert Id:-1654272513, Valid from:5/26/15, 12:00 AM, Valid until:5/26/40, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:bb401c43f55e4fb0, Subject:CN=SwissSign Gold CA - G2, O=SwissSign AG, C=CH, Issuer:CN=SwissSign Gold CA - G2, O=SwissSign AG, C=CH, Key type:RSA, Length:4096, Cert Id:1516221943, Valid from:10/25/06, 8:30 AM, Valid until:10/25/36, 8:30 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:4a538c28, Subject:CN=Entrust Root Certification Authority - G2, OU="(c) 2009 Entrust, Inc. - for authorized use only", OU=See www.entrust.net/legal-terms, O="Entrust, Inc.", C=US, Issuer:CN=Entrust Root Certification Authority - G2, OU="(c) 2009 Entrust, Inc. - for authorized use only", OU=See www.entrust.net/legal-terms, O="Entrust, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:1936920337, Valid from:7/7/09, 5:25 PM, Valid until:12/7/30, 5:55 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:445734245b81899b35f2ceb82b3b5ba726f07528, Subject:CN=QuoVadis Root CA 2 G3, O=QuoVadis Limited, C=BM, Issuer:CN=QuoVadis Root CA 2 G3, O=QuoVadis Limited, C=BM, Key type:RSA, Length:4096, Cert Id:696763521, Valid from:1/12/12, 6:59 PM, Valid until:1/12/42, 6:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:cf08e5c0816a5ad427ff0eb271859d0, Subject:CN=SecureTrust CA, O=SecureTrust Corporation, C=US, Issuer:CN=SecureTrust CA, O=SecureTrust Corporation, C=US, Key type:RSA, Length:2048, Cert Id:2034155325, Valid from:11/7/06, 7:31 PM, Valid until:12/31/29, 7:40 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:a3da427ea4b1aeda, Subject:CN=Chambers of Commerce Root - 2008, O=AC Camerfirma S.A., SERIALNUMBER=A82743287, L=Madrid (see current address at www.camerfirma.com/address), C=EU, Issuer:CN=Chambers of Commerce Root - 2008, O=AC Camerfirma S.A., SERIALNUMBER=A82743287, L=Madrid (see current address at www.camerfirma.com/address), C=EU, Key type:RSA, Length:4096, Cert Id:-28263924, Valid from:8/1/08, 12:29 PM, Valid until:7/31/38, 12:29 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:18acb56afd69b6153a636cafdafac4a1, Subject:CN=GeoTrust Primary Certification Authority, O=GeoTrust Inc., C=US, Issuer:CN=GeoTrust Primary Certification Authority, O=GeoTrust Inc., C=US, Key type:RSA, Length:2048, Cert Id:-965345157, Valid from:11/27/06, 12:00 AM, Valid until:7/16/36, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:a0142800000014523c844b500000002, Subject:CN=IdenTrust Commercial Root CA 1, O=IdenTrust, C=US, Issuer:CN=IdenTrust Commercial Root CA 1, O=IdenTrust, C=US, Key type:RSA, Length:4096, Cert Id:1232565210, Valid from:1/16/14, 6:12 PM, Valid until:1/16/34, 6:12 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:600197b746a7eab4b49ad64b2ff790fb, Subject:CN=thawte Primary Root CA - G3, OU="(c) 2008 thawte, Inc. - For authorized use only", OU=Certification Services Division, O="thawte, Inc.", C=US, Issuer:CN=thawte Primary Root CA - G3, OU="(c) 2008 thawte, Inc. - For authorized use only", OU=Certification Services Division, O="thawte, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:1032730720, Valid from:4/2/08, 12:00 AM, Valid until:12/1/37, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:2, Subject:CN=Buypass Class 3 Root CA, O=Buypass AS-983163327, C=NO, Issuer:CN=Buypass Class 3 Root CA, O=Buypass AS-983163327, C=NO, Key type:RSA, Length:4096, Cert Id:1264269967, Valid from:10/26/10, 8:28 AM, Valid until:10/26/40, 8:28 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:67c8e1e8e3be1cbdfc913b8ea6238749, Subject:CN=Thawte Timestamping CA, OU=Thawte Certification, O=Thawte, L=Durbanville, ST=Western Cape, C=ZA, Issuer:CN=Thawte Timestamping CA, OU=Thawte Certification, O=Thawte, L=Durbanville, ST=Western Cape, C=ZA, Key type:RSA, Length:1024, Cert Id:-1032800436, Valid from:1/1/97, 12:00 AM, Valid until:1/1/21, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:2f80fe238c0e220f486712289187acb3, Subject:CN=VeriSign Class 3 Public Primary Certification Authority - G4, OU="(c) 2007 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Issuer:CN=VeriSign Class 3 Public Primary Certification Authority - G4, OU="(c) 2007 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Key type:EC, Length:384, Cert Id:-131493977, Valid from:11/5/07, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:20000b9, Subject:CN=Baltimore CyberTrust Root, OU=CyberTrust, O=Baltimore, C=IE, Issuer:CN=Baltimore CyberTrust Root, OU=CyberTrust, O=Baltimore, C=IE, Key type:RSA, Length:2048, Cert Id:1425294543, Valid from:5/12/00, 6:46 PM, Valid until:5/12/25, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:bb8, Subject:CN=LuxTrust Global Root, O=LuxTrust s.a., C=LU, Issuer:CN=LuxTrust Global Root, O=LuxTrust s.a., C=LU, Key type:RSA, Length:2048, Cert Id:1714819687, Valid from:3/17/11, 9:51 AM, Valid until:3/17/21, 9:51 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:7dd9fe07cfa81eb7107967fba78934c6, Subject:OU=VeriSign Trust Network, OU="(c) 1998 VeriSign, Inc. - For authorized use only", OU=Class 3 Public Primary Certification Authority - G2, O="VeriSign, Inc.", C=US, Issuer:OU=VeriSign Trust Network, OU="(c) 1998 VeriSign, Inc. - For authorized use only", OU=Class 3 Public Primary Certification Authority - G2, O="VeriSign, Inc.", C=US, Key type:RSA, Length:1024, Cert Id:-11071679, Valid from:5/18/98, 12:00 AM, Valid until:8/1/28, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:0, Subject:CN=Chambers of Commerce Root, OU=http://www.chambersign.org, O=AC Camerfirma SA CIF A82743287, C=EU, Issuer:CN=Chambers of Commerce Root, OU=http://www.chambersign.org, O=AC Camerfirma SA CIF A82743287, C=EU, Key type:RSA, Length:2048, Cert Id:1827306452, Valid from:9/30/03, 4:13 PM, Valid until:9/30/37, 4:13 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1d, Subject:CN=Sonera Class2 CA, O=Sonera, C=FI, Issuer:CN=Sonera Class2 CA, O=Sonera, C=FI, Key type:RSA, Length:2048, Cert Id:-572101437, Valid from:4/6/01, 7:29 AM, Valid until:4/6/21, 7:29 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:7c4f04391cd4992d, Subject:CN=AffirmTrust Networking, O=AffirmTrust, C=US, Issuer:CN=AffirmTrust Networking, O=AffirmTrust, C=US, Key type:RSA, Length:2048, Cert Id:651670175, Valid from:1/29/10, 2:08 PM, Valid until:12/31/30, 2:08 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:1, Subject:CN=T-TeleSec GlobalRoot Class 3, OU=T-Systems Trust Center, O=T-Systems Enterprise Services GmbH, C=DE, Issuer:CN=T-TeleSec GlobalRoot Class 3, OU=T-Systems Trust Center, O=T-Systems Enterprise Services GmbH, C=DE, Key type:RSA, Length:2048, Cert Id:1894096264, Valid from:10/1/08, 10:29 AM, Valid until:10/1/33, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:50946cec18ead59c4dd597ef758fa0ad, Subject:CN=XRamp Global Certification Authority, O=XRamp Security Services Inc, OU=www.xrampsecurity.com, C=US, Issuer:CN=XRamp Global Certification Authority, O=XRamp Security Services Inc, OU=www.xrampsecurity.com, C=US, Key type:RSA, Length:2048, Cert Id:-952474086, Valid from:11/1/04, 5:14 PM, Valid until:1/1/35, 5:37 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:15ac6e9419b2794b41f627a9c3180f1f, Subject:CN=GeoTrust Primary Certification Authority - G3, OU=(c) 2008 GeoTrust Inc. - For authorized use only, O=GeoTrust Inc., C=US, Issuer:CN=GeoTrust Primary Certification Authority - G3, OU=(c) 2008 GeoTrust Inc. - For authorized use only, O=GeoTrust Inc., C=US, Key type:RSA, Length:2048, Cert Id:-1330153758, Valid from:4/2/08, 12:00 AM, Valid until:12/1/37, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:c9cdd3e9d57d23ce, Subject:CN=Global Chambersign Root - 2008, O=AC Camerfirma S.A., SERIALNUMBER=A82743287, L=Madrid (see current address at www.camerfirma.com/address), C=EU, Issuer:CN=Global Chambersign Root - 2008, O=AC Camerfirma S.A., SERIALNUMBER=A82743287, L=Madrid (see current address at www.camerfirma.com/address), C=EU, Key type:RSA, Length:4096, Cert Id:1271252776, Valid from:8/1/08, 12:31 PM, Valid until:7/31/38, 12:31 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:35fc265cd9844fc93d263d579baed756, Subject:CN=thawte Primary Root CA - G2, OU="(c) 2007 thawte, Inc. - For authorized use only", O="thawte, Inc.", C=US, Issuer:CN=thawte Primary Root CA - G2, OU="(c) 2007 thawte, Inc. - For authorized use only", O="thawte, Inc.", C=US, Key type:EC, Length:384, Cert Id:2068083090, Valid from:11/5/07, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withECDSA, Serial:66c9fd5749736663f3b0b9ad9e89e7603f24a, Subject:CN=Amazon Root CA 3, O=Amazon, C=US, Issuer:CN=Amazon Root CA 3, O=Amazon, C=US, Key type:EC, Length:256, Cert Id:-1562287523, Valid from:5/26/15, 12:00 AM, Valid until:5/26/40, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:64ffc0a00, Subject:CN=na1-freeipa01.intgdc.com, O=INTGDC.COM, Issuer:CN=Certificate Authority, O=INTGDC.COM, Key type:RSA, Length:2048, Cert Id:-94783380, Valid from:7/22/20, 1:12 PM, Valid until:7/23/22, 1:12 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:5c8b99c55a94c5d27156decd8980cc26, Subject:CN=USERTrust ECC Certification Authority, O=The USERTRUST Network, L=Jersey City, ST=New Jersey, C=US, Issuer:CN=USERTrust ECC Certification Authority, O=The USERTRUST Network, L=Jersey City, ST=New Jersey, C=US, Key type:EC, Length:384, Cert Id:-965679910, Valid from:2/1/10, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:4f1bd42f54bb2f4b, Subject:CN=SwissSign Silver CA - G2, O=SwissSign AG, C=CH, Issuer:CN=SwissSign Silver CA - G2, O=SwissSign AG, C=CH, Key type:RSA, Length:4096, Cert Id:126726124, Valid from:10/25/06, 8:32 AM, Valid until:10/25/36, 8:32 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withRSA, Serial:6d8c1446b1a60aee, Subject:CN=AffirmTrust Premium, O=AffirmTrust, C=US, Issuer:CN=AffirmTrust Premium, O=AffirmTrust, C=US, Key type:RSA, Length:4096, Cert Id:-2130283955, Valid from:1/29/10, 2:10 PM, Valid until:12/31/40, 2:10 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:40000000001154b5ac394, Subject:CN=GlobalSign Root CA, OU=Root CA, O=GlobalSign nv-sa, C=BE, Issuer:CN=GlobalSign Root CA, OU=Root CA, O=GlobalSign nv-sa, C=BE, Key type:RSA, Length:2048, Cert Id:536948034, Valid from:9/1/98, 12:00 PM, Valid until:1/28/28, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:983f3, Subject:CN=D-TRUST Root Class 3 CA 2 2009, O=D-Trust GmbH, C=DE, Issuer:CN=D-TRUST Root Class 3 CA 2 2009, O=D-Trust GmbH, C=DE, Key type:RSA, Length:2048, Cert Id:1430153102, Valid from:11/5/09, 8:35 AM, Valid until:11/5/29, 8:35 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:7777062726a9b17c, Subject:CN=AffirmTrust Commercial, O=AffirmTrust, C=US, Issuer:CN=AffirmTrust Commercial, O=AffirmTrust, C=US, Key type:RSA, Length:2048, Cert Id:630485644, Valid from:1/29/10, 2:06 PM, Valid until:12/31/30, 2:06 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:8210cfb0d240e3594463e0bb63828b00, Subject:CN=ISRG Root X1, O=Internet Security Research Group, C=US, Issuer:CN=ISRG Root X1, O=Internet Security Research Group, C=US, Key type:RSA, Length:4096, Cert Id:1521974916, Valid from:6/4/15, 11:04 AM, Valid until:6/4/35, 11:04 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:36122296c5e338a520a1d25f4cd70954, Subject:EMAILADDRESS=premium-server@thawte.com, CN=Thawte Premium Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, ST=Western Cape, C=ZA, Issuer:EMAILADDRESS=premium-server@thawte.com, CN=Thawte Premium Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, ST=Western Cape, C=ZA, Key type:RSA, Length:1024, Cert Id:857909202, Valid from:8/1/96, 12:00 AM, Valid until:1/1/21, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1, Subject:CN=AAA Certificate Services, O=Comodo CA Limited, L=Salford, ST=Greater Manchester, C=GB, Issuer:CN=AAA Certificate Services, O=Comodo CA Limited, L=Salford, ST=Greater Manchester, C=GB, Key type:RSA, Length:2048, Cert Id:-1197580894, Valid from:1/1/04, 12:00 AM, Valid until:12/31/28, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:3cb2f4480a00e2feeb243b5e603ec36b, Subject:CN=GeoTrust Primary Certification Authority - G2, OU=(c) 2007 GeoTrust Inc. - For authorized use only, O=GeoTrust Inc., C=US, Issuer:CN=GeoTrust Primary Certification Authority - G2, OU=(c) 2007 GeoTrust Inc. - For authorized use only, O=GeoTrust Inc., C=US, Key type:EC, Length:384, Cert Id:-1114303822, Valid from:11/5/07, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:4000000000121585308a2, Subject:CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R3, Issuer:CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R3, Key type:RSA, Length:2048, Cert Id:733875591, Valid from:3/18/09, 10:00 AM, Valid until:3/18/29, 10:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:344ed55720d5edec49f42fce37db2b6d, Subject:CN=thawte Primary Root CA, OU="(c) 2006 thawte, Inc. - For authorized use only", OU=Certification Services Division, O="thawte, Inc.", C=US, Issuer:CN=thawte Primary Root CA, OU="(c) 2006 thawte, Inc. - For authorized use only", OU=Certification Services Division, O="thawte, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:643000026, Valid from:11/17/06, 12:00 AM, Valid until:7/16/36, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:5c6, Subject:CN=QuoVadis Root CA 3, O=QuoVadis Limited, C=BM, Issuer:CN=QuoVadis Root CA 3, O=QuoVadis Limited, C=BM, Key type:RSA, Length:4096, Cert Id:1470392860, Valid from:11/24/06, 7:11 PM, Valid until:11/24/31, 7:06 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withRSA, Serial:66c9fd29635869f0a0fe58678f85b26bb8a37, Subject:CN=Amazon Root CA 2, O=Amazon, C=US, Issuer:CN=Amazon Root CA 2, O=Amazon, C=US, Key type:RSA, Length:4096, Cert Id:-1766132388, Valid from:5/26/15, 12:00 AM, Valid until:5/26/40, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:0, Subject:OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US, Issuer:OU=Starfield Class 2 Certification Authority, O="Starfield Technologies, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:1825617644, Valid from:6/29/04, 5:39 PM, Valid until:6/29/34, 5:39 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:0, Subject:CN=Starfield Root Certificate Authority - G2, O="Starfield Technologies, Inc.", L=Scottsdale, ST=Arizona, C=US, Issuer:CN=Starfield Root Certificate Authority - G2, O="Starfield Technologies, Inc.", L=Scottsdale, ST=Arizona, C=US, Key type:RSA, Length:2048, Cert Id:-1026641587, Valid from:9/1/09, 12:00 AM, Valid until:12/31/37, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:3c9131cb1ff6d01b0e9ab8d044bf12be, Subject:OU=Class 3 Public Primary Certification Authority, O="VeriSign, Inc.", C=US, Issuer:OU=Class 3 Public Primary Certification Authority, O="VeriSign, Inc.", C=US, Key type:RSA, Length:1024, Cert Id:118031811, Valid from:1/29/96, 12:00 AM, Valid until:8/2/28, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:7497258ac73f7a54, Subject:CN=AffirmTrust Premium ECC, O=AffirmTrust, C=US, Issuer:CN=AffirmTrust Premium ECC, O=AffirmTrust, C=US, Key type:EC, Length:384, Cert Id:-2080363786, Valid from:1/29/10, 2:20 PM, Valid until:12/31/40, 2:20 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:23456, Subject:CN=GeoTrust Global CA, O=GeoTrust Inc., C=US, Issuer:CN=GeoTrust Global CA, O=GeoTrust Inc., C=US, Key type:RSA, Length:2048, Cert Id:-2028617374, Valid from:5/21/02, 4:00 AM, Valid until:5/21/22, 4:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:0, Subject:OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US, Issuer:OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:-271444299, Valid from:6/29/04, 5:06 PM, Valid until:6/29/34, 5:06 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:0, Subject:CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US, Issuer:CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US, Key type:RSA, Length:2048, Cert Id:439600313, Valid from:9/1/09, 12:00 AM, Valid until:12/31/37, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:18dad19e267de8bb4a2158cdcc6b3b4a, Subject:CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Issuer:CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:303010488, Valid from:11/8/06, 12:00 AM, Valid until:7/16/36, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:66c9fcf99bf8c0a39e2f0788a43e696365bca, Subject:CN=Amazon Root CA 1, O=Amazon, C=US, Issuer:CN=Amazon Root CA 1, O=Amazon, C=US, Key type:RSA, Length:2048, Cert Id:-1472444962, Valid from:5/26/15, 12:00 AM, Valid until:1/17/38, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:509, Subject:CN=QuoVadis Root CA 2, O=QuoVadis Limited, C=BM, Issuer:CN=QuoVadis Root CA 2, O=QuoVadis Limited, C=BM, Key type:RSA, Length:4096, Cert Id:338250116, Valid from:11/24/06, 6:27 PM, Valid until:11/24/31, 6:23 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:10020, Subject:CN=Certum CA, O=Unizeto Sp. z o.o., C=PL, Issuer:CN=Certum CA, O=Unizeto Sp. z o.o., C=PL, Key type:RSA, Length:2048, Cert Id:-744451266, Valid from:6/11/02, 10:46 AM, Valid until:6/11/27, 10:46 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:4eb200670c035d4f, Subject:CN=SwissSign Platinum CA - G2, O=SwissSign AG, C=CH, Issuer:CN=SwissSign Platinum CA - G2, O=SwissSign AG, C=CH, Key type:RSA, Length:4096, Cert Id:771312514, Valid from:10/25/06, 8:36 AM, Valid until:10/25/36, 8:36 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:15c8bd65475cafb897005ee406d2bc9d, Subject:OU=ePKI Root Certification Authority, O="Chunghwa Telecom Co., Ltd.", C=TW, Issuer:OU=ePKI Root Certification Authority, O="Chunghwa Telecom Co., Ltd.", C=TW, Key type:RSA, Length:4096, Cert Id:-662636137, Valid from:12/20/04, 2:31 AM, Valid until:12/20/34, 2:31 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:2ef59b0228a7db7affd5a3a9eebd03a0cf126a1d, Subject:CN=QuoVadis Root CA 3 G3, O=QuoVadis Limited, C=BM, Issuer:CN=QuoVadis Root CA 3 G3, O=QuoVadis Limited, C=BM, Key type:RSA, Length:4096, Cert Id:-705622991, Valid from:1/12/12, 8:26 PM, Valid until:1/12/42, 8:26 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:3ab6508b, Subject:CN=QuoVadis Root Certification Authority, OU=Root Certification Authority, O=QuoVadis Limited, C=BM, Issuer:CN=QuoVadis Root Certification Authority, OU=Root Certification Authority, O=QuoVadis Limited, C=BM, Key type:RSA, Length:2048, Cert Id:-1882405602, Valid from:3/19/01, 6:33 PM, Valid until:3/17/21, 6:33 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1, Subject:CN=AddTrust Class 1 CA Root, OU=AddTrust TTP Network, O=AddTrust AB, C=SE, Issuer:CN=AddTrust Class 1 CA Root, OU=AddTrust TTP Network, O=AddTrust AB, C=SE, Key type:RSA, Length:2048, Cert Id:764620144, Valid from:5/30/00, 10:38 AM, Valid until:5/30/20, 10:38 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:2ac5c266a0b409b8f0b79f2ae462577, Subject:CN=DigiCert High Assurance EV Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert High Assurance EV Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:RSA, Length:2048, Cert Id:-1410680354, Valid from:11/10/06, 12:00 AM, Valid until:11/10/31, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:78585f2ead2c194be3370735341328b596d46593, Subject:CN=QuoVadis Root CA 1 G3, O=QuoVadis Limited, C=BM, Issuer:CN=QuoVadis Root CA 1 G3, O=QuoVadis Limited, C=BM, Key type:RSA, Length:4096, Cert Id:-762436034, Valid from:1/12/12, 5:27 PM, Valid until:1/12/42, 5:27 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:1121bc276c5547af584eefd4ced629b2a285, Subject:CN=KEYNECTIS ROOT CA, OU=ROOT, O=KEYNECTIS, C=FR, Issuer:CN=KEYNECTIS ROOT CA, OU=ROOT, O=KEYNECTIS, C=FR, Key type:RSA, Length:2048, Cert Id:1479486418, Valid from:5/26/09, 12:00 AM, Valid until:5/26/20, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withRSA, Serial:4caaf9cadb636fe01ff74ed85b03869d, Subject:CN=COMODO RSA Certification Authority, O=COMODO CA Limited, L=Salford, ST=Greater Manchester, C=GB, Issuer:CN=COMODO RSA Certification Authority, O=COMODO CA Limited, L=Salford, ST=Greater Manchester, C=GB, Key type:RSA, Length:4096, Cert Id:1769425049, Valid from:1/19/10, 12:00 AM, Valid until:1/18/38, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:10ffe97a5, Subject:CN=Certificate Authority, O=INTGDC.COM, Issuer:CN=Certificate Authority, O=INTGDC.COM, Key type:RSA, Length:2048, Cert Id:-472218879, Valid from:3/6/17, 7:27 AM, Valid until:3/6/37, 7:27 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:2, Subject:CN=Buypass Class 2 Root CA, O=Buypass AS-983163327, C=NO, Issuer:CN=Buypass Class 2 Root CA, O=Buypass AS-983163327, C=NO, Key type:RSA, Length:4096, Cert Id:969960563, Valid from:10/26/10, 8:38 AM, Valid until:10/26/40, 8:38 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:0, Subject:OU=Security Communication RootCA2, O="SECOM Trust Systems CO.,LTD.", C=JP, Issuer:OU=Security Communication RootCA2, O="SECOM Trust Systems CO.,LTD.", C=JP, Key type:RSA, Length:2048, Cert Id:1521072570, Valid from:5/29/09, 5:00 AM, Valid until:5/29/29, 5:00 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:444c0, Subject:CN=Certum Trusted Network CA, OU=Certum Certification Authority, O=Unizeto Technologies S.A., C=PL, Issuer:CN=Certum Trusted Network CA, OU=Certum Certification Authority, O=Unizeto Technologies S.A., C=PL, Key type:RSA, Length:2048, Cert Id:2014002193, Valid from:10/22/08, 12:07 PM, Valid until:12/31/29, 12:07 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:456b5054, Subject:CN=Entrust Root Certification Authority, OU="(c) 2006 Entrust, Inc.", OU=www.entrust.net/CPS is incorporated by reference, O="Entrust, Inc.", C=US, Issuer:CN=Entrust Root Certification Authority, OU="(c) 2006 Entrust, Inc.", OU=www.entrust.net/CPS is incorporated by reference, O="Entrust, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:-1261404096, Valid from:11/27/06, 8:23 PM, Valid until:11/27/26, 8:53 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:9b7e0649a33e62b9d5ee90487129ef57, Subject:CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Issuer:CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US, Key type:RSA, Length:2048, Cert Id:2057300190, Valid from:10/1/99, 12:00 AM, Valid until:7/16/36, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:95be16a0f72e46f17b398272fa8bcd96, Subject:CN=TeliaSonera Root CA v1, O=TeliaSonera, Issuer:CN=TeliaSonera Root CA v1, O=TeliaSonera, Key type:RSA, Length:4096, Cert Id:1495358374, Valid from:10/18/07, 12:00 PM, Valid until:10/18/32, 12:00 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:983f4, Subject:CN=D-TRUST Root Class 3 CA 2 EV 2009, O=D-Trust GmbH, C=DE, Issuer:CN=D-TRUST Root Class 3 CA 2 EV 2009, O=D-Trust GmbH, C=DE, Key type:RSA, Length:2048, Cert Id:971313728, Valid from:11/5/09, 8:50 AM, Valid until:11/5/29, 8:50 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA384withECDSA, Serial:605949e0262ebb55f90a778a71f94ad86c, Subject:CN=GlobalSign, O=GlobalSign, OU=GlobalSign ECC Root CA - R5, Issuer:CN=GlobalSign, O=GlobalSign, OU=GlobalSign ECC Root CA - R5, Key type:EC, Length:384, Cert Id:1997048439, Valid from:11/13/12, 12:00 AM, Valid until:1/19/38, 3:14 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:0, Subject:CN=Starfield Services Root Certificate Authority - G2, O="Starfield Technologies, Inc.", L=Scottsdale, ST=Arizona, C=US, Issuer:CN=Starfield Services Root Certificate Authority - G2, O="Starfield Technologies, Inc.", L=Scottsdale, ST=Arizona, C=US, Key type:RSA, Length:2048, Cert Id:1964785574, Valid from:9/1/09, 12:00 AM, Valid until:12/31/37, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA256withRSA, Serial:1, Subject:CN=T-TeleSec GlobalRoot Class 2, OU=T-Systems Trust Center, O=T-Systems Enterprise Services GmbH, C=DE, Issuer:CN=T-TeleSec GlobalRoot Class 2, OU=T-Systems Trust Center, O=T-Systems Enterprise Services GmbH, C=DE, Key type:RSA, Length:2048, Cert Id:-1238464039, Valid from:10/1/08, 10:40 AM, Valid until:10/1/33, 11:59 PM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:1, Subject:CN=AddTrust Qualified CA Root, OU=AddTrust TTP Network, O=AddTrust AB, C=SE, Issuer:CN=AddTrust Qualified CA Root, OU=AddTrust TTP Network, O=AddTrust AB, C=SE, Key type:RSA, Length:2048, Cert Id:607365522, Valid from:5/30/00, 10:44 AM, Valid until:5/30/20, 10:44 AM
2020.08.10 15:57:21 DEBUG web[][jdk.event.security] X509Certificate: Alg:SHA1withRSA, Serial:83be056904246b1a1756ac95991c74a, Subject:CN=DigiCert Global Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US, Issuer:CN=DigiCert Global Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US, Key type:RSA, Length:2048, Cert Id:1341898239, Valid from:11/10/06, 12:00 AM, Valid until:11/10/31, 12:00 AM
2020.08.10 15:57:21 DEBUG web[][o.s.s.e.RecoveryIndexer] Elasticsearch recovery - sonar.search.recovery.minAgeInMs=300000
2020.08.10 15:57:21 DEBUG web[][o.s.s.e.RecoveryIndexer] Elasticsearch recovery - sonar.search.recovery.loopLimit=10000
2020.08.10 15:57:21 INFO web[][o.s.s.s.LogServerId] Server ID: BF41A1F2-AXPY8TFRaBzN7rN1RBMX
2020.08.10 15:57:21 WARN web[][o.s.s.a.LogOAuthWarning] For security reasons, OAuth authentication should use HTTPS. You should set the property 'Administration > Configuration > Server base URL' to a HTTPS URL.
2020.08.10 15:57:21 INFO web[][org.sonar.INFO] Security realm: LDAP
2020.08.10 15:57:21 INFO web[][o.s.a.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn="cn=users,cn=accounts,dc=intgdc,dc=com", request=(&(objectClass=inetOrgPerson)(uid={0})), realNameAttribute=cn, emailAttribute=mail}
2020.08.10 15:57:21 INFO web[][o.s.a.l.LdapSettingsManager] Groups will not be synchronized, because property 'ldap.group.baseDn' is empty.
2020.08.10 15:57:21 DEBUG web[][o.s.a.l.LdapContextFactory] Initializing LDAP context {java.naming.referral=follow, java.naming.security.principal="uid=viewer,cn=sysaccounts,cn=etc,dc=intgdc,dc=com", com.sun.jndi.ldap.connect.pool=true, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, java.naming.provider.url=ldaps://na1-freeipa01.intgdc.com:636, java.naming.security.authentication=simple}
2020.08.10 15:57:22 DEBUG web[][jdk.event.security] TLSHandshake: na1-freeipa01.intgdc.com:636, TLSv1.3, TLS_AES_128_GCM_SHA256, -94783380
2020.08.10 15:57:22 INFO web[][o.s.a.l.LdapContextFactory] Test LDAP connection: FAIL
2020.08.10 15:57:22 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
org.sonar.api.utils.SonarException: Security realm fails to start: Unable to open LDAP connection
at org.sonar.server.user.SecurityRealmFactory.start(SecurityRealmFactory.java:93)
at org.sonar.core.platform.StartableCloseableSafeLifecyleStrategy.start(StartableCloseableSafeLifecyleStrategy.java:40)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:136)
at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:90)
at org.sonar.server.platform.platformlevel.PlatformLevel4.start(PlatformLevel4.java:555)
at org.sonar.server.platform.PlatformImpl.start(PlatformImpl.java:213)
at org.sonar.server.platform.PlatformImpl.startLevel34Containers(PlatformImpl.java:187)
at org.sonar.server.platform.PlatformImpl.access$500(PlatformImpl.java:46)
at org.sonar.server.platform.PlatformImpl$1.lambda$doRun$0(PlatformImpl.java:120)
at org.sonar.server.platform.PlatformImpl$AutoStarterRunnable.runIfNotAborted(PlatformImpl.java:370)
at org.sonar.server.platform.PlatformImpl$1.doRun(PlatformImpl.java:120)
at org.sonar.server.platform.PlatformImpl$AutoStarterRunnable.run(PlatformImpl.java:354)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.sonar.auth.ldap.LdapException: Unable to open LDAP connection
at org.sonar.auth.ldap.LdapContextFactory.testConnection(LdapContextFactory.java:214)
at org.sonar.auth.ldap.LdapRealm.init(LdapRealm.java:63)
at org.sonar.server.user.SecurityRealmFactory.start(SecurityRealmFactory.java:87)
... 19 common frames omitted
Caused by: javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
at java.naming/com.sun.jndi.ldap.LdapCtx.mapErrorCode(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtx.processReturnCode(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtx.processReturnCode(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
at java.naming/javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
at java.naming/javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
at java.naming/javax.naming.InitialContext.init(Unknown Source)
at java.naming/javax.naming.ldap.InitialLdapContext.<init>(Unknown Source)
at org.sonar.auth.ldap.LdapContextFactory.createInitialDirContext(LdapContextFactory.java:137)
at org.sonar.auth.ldap.LdapContextFactory.createBindContext(LdapContextFactory.java:95)
at org.sonar.auth.ldap.LdapContextFactory.testConnection(LdapContextFactory.java:210)
... 21 common frames omitted
2020.08.10 15:57:22 DEBUG web[][o.s.s.p.Platform] Background initialization of SonarQube done
2020.08.10 15:57:22 INFO web[][o.s.p.ProcessEntryPoint] Hard stopping process
2020.08.10 15:57:22 INFO web[][o.s.s.p.d.EmbeddedDatabase] Embedded database stopped
2020.08.10 15:57:22 DEBUG web[][o.s.s.a.TomcatAccessLog] Tomcat is stopped
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [web]: 0
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[web] tryToMoveTo web from STARTED to HARD_STOPPING => true
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[web] tryToMoveTo web from HARD_STOPPING to FINALIZE_STOPPING => true
2020.08.10 15:57:23 INFO app[][o.s.a.SchedulerImpl] Process[web] is stopped
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[web] tryToMoveTo web from FINALIZE_STOPPING to STOPPED => true
2020.08.10 15:57:23 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from STARTING to HARD_STOPPING => true
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo ce from INIT to HARD_STOPPING => false
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo web from STOPPED to HARD_STOPPING => false
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo es from STARTED to HARD_STOPPING => true
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo es from HARD_STOPPING to FINALIZE_STOPPING => true
2020.08.10 15:57:23 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 143
2020.08.10 15:57:23 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2020.08.10 15:57:23 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from HARD_STOPPING to FINALIZE_STOPPING => true
2020.08.10 15:57:23 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from FINALIZE_STOPPING to STOPPED => true
2020.08.10 15:57:23 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] HardStopper-0 tryToMoveTo es from FINALIZE_STOPPING to STOPPED => true
2020.08.10 15:57:23 DEBUG app[][o.s.a.NodeLifecycle] HardStopper-0 tryToMoveTo from STOPPED to FINALIZE_STOPPING => false
2020.08.10 15:57:23 DEBUG app[][o.s.a.p.ManagedProcessLifecycle] StopWatcher[es] tryToMoveTo es from STOPPED to HARD_STOPPING => false
2020.08.10 15:57:23 DEBUG app[][o.s.a.NodeLifecycle] Shutdown Hook tryToMoveTo from STOPPED to STOPPING => false