@hossainemruz
Created August 15, 2018 08:59

Here is the log from the Elasticsearch node:

+ /fsloader/run_sgadmin.sh
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..data: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/sg_roles.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/sg_config.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/sg_action_groups.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/READALL_PASSWORD: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/ADMIN_PASSWORD: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/sg_roles_mapping.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/sg_config.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/sg_action_groups.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/READALL_PASSWORD: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/ADMIN_PASSWORD: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/sg_roles_mapping.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/sg_roles.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946/sg_internal_users.yml: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig/..2018_08_15_07_45_39.034900946: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig: Read-only file system
chown: /elasticsearch/plugins/search-guard-6/sgconfig: Read-only file system
chown: /elasticsearch/config/certs/..data: Read-only file system
chown: /elasticsearch/config/certs/root.jks: Read-only file system
chown: /elasticsearch/config/certs/sgadmin.jks: Read-only file system
chown: /elasticsearch/config/certs/node.jks: Read-only file system
chown: /elasticsearch/config/certs/..2018_08_15_07_45_39.617899785/root.jks: Read-only file system
chown: /elasticsearch/config/certs/..2018_08_15_07_45_39.617899785/sgadmin.jks: Read-only file system
chown: /elasticsearch/config/certs/..2018_08_15_07_45_39.617899785/node.jks: Read-only file system
chown: /elasticsearch/config/certs/..2018_08_15_07_45_39.617899785: Read-only file system
chown: /elasticsearch/config/certs/..2018_08_15_07_45_39.617899785: Read-only file system
chown: /elasticsearch/config/certs: Read-only file system
chown: /elasticsearch/config/certs: Read-only file system
[2018-08-15T07:48:01,622][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] initializing ...
[2018-08-15T07:48:02,057][INFO ][o.e.e.NodeEnvironment    ] [es-monitoring-cluster-0] using [1] data paths, mounts [[/data (/dev/sda1)]], net usable_space [13.5gb], net total_space [16.1gb], types [ext4]
[2018-08-15T07:48:02,057][INFO ][o.e.e.NodeEnvironment    ] [es-monitoring-cluster-0] heap size [123.7mb], compressed ordinary object pointers [true]
[2018-08-15T07:48:02,059][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] node name [es-monitoring-cluster-0], node ID [BZ0H7D8hQWWgOa7Zl7OGMg]
[2018-08-15T07:48:02,059][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] version[6.3.0], pid[13], build[default/tar/424e937/2018-06-11T23:38:03.357887Z], OS[Linux/4.15.0/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2018-08-15T07:48:02,060][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms134217728, -Xmx134217728, -Des.path.home=/elasticsearch, -Des.path.conf=/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2018-08-15T07:48:05,391][WARN ][c.f.s.SearchGuardPlugin  ] Search Guard plugin installed but disabled. This can expose your configuration (including passwords) to the public.
[2018-08-15T07:48:05,401][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [aggs-matrix-stats]
[2018-08-15T07:48:05,407][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [analysis-common]
[2018-08-15T07:48:05,407][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [ingest-common]
[2018-08-15T07:48:05,408][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [lang-expression]
[2018-08-15T07:48:05,414][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [lang-mustache]
[2018-08-15T07:48:05,416][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [lang-painless]
[2018-08-15T07:48:05,418][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [mapper-extras]
[2018-08-15T07:48:05,418][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [parent-join]
[2018-08-15T07:48:05,419][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [percolator]
[2018-08-15T07:48:05,419][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [rank-eval]
[2018-08-15T07:48:05,431][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [reindex]
[2018-08-15T07:48:05,431][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [repository-url]
[2018-08-15T07:48:05,431][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [transport-netty4]
[2018-08-15T07:48:05,431][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [tribe]
[2018-08-15T07:48:05,445][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-core]
[2018-08-15T07:48:05,453][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-deprecation]
[2018-08-15T07:48:05,453][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-graph]
[2018-08-15T07:48:05,463][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-logstash]
[2018-08-15T07:48:05,463][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-monitoring]
[2018-08-15T07:48:05,468][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-rollup]
[2018-08-15T07:48:05,468][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-security]
[2018-08-15T07:48:05,468][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-sql]
[2018-08-15T07:48:05,468][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-upgrade]
[2018-08-15T07:48:05,468][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded module [x-pack-watcher]
[2018-08-15T07:48:05,469][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded plugin [ingest-attachment]
[2018-08-15T07:48:05,469][INFO ][o.e.p.PluginsService     ] [es-monitoring-cluster-0] loaded plugin [search-guard-6]
[2018-08-15T07:48:10,266][WARN ][o.e.d.c.s.Settings       ] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[2018-08-15T07:48:12,512][INFO ][o.e.d.DiscoveryModule    ] [es-monitoring-cluster-0] using discovery type [zen]
[2018-08-15T07:48:13,230][INFO ][c.f.s.SearchGuardPlugin  ] 0 Search Guard modules loaded so far: []
[2018-08-15T07:48:13,232][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] initialized
[2018-08-15T07:48:13,232][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] starting ...
[2018-08-15T07:48:13,413][INFO ][o.e.t.TransportService   ] [es-monitoring-cluster-0] publish_address {172.17.0.5:9300}, bound_addresses {0.0.0.0:9300}
[2018-08-15T07:48:13,425][INFO ][o.e.b.BootstrapChecks    ] [es-monitoring-cluster-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-08-15T07:48:16,512][INFO ][o.e.c.s.MasterService    ] [es-monitoring-cluster-0] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {es-monitoring-cluster-0}{BZ0H7D8hQWWgOa7Zl7OGMg}{dz0XEwXnQtGRZoCzJh76Ug}{172.17.0.5}{172.17.0.5:9300}{xpack.installed=true}
[2018-08-15T07:48:16,518][INFO ][o.e.c.s.ClusterApplierService] [es-monitoring-cluster-0] new_master {es-monitoring-cluster-0}{BZ0H7D8hQWWgOa7Zl7OGMg}{dz0XEwXnQtGRZoCzJh76Ug}{172.17.0.5}{172.17.0.5:9300}{xpack.installed=true}, reason: apply cluster state (from master [master {es-monitoring-cluster-0}{BZ0H7D8hQWWgOa7Zl7OGMg}{dz0XEwXnQtGRZoCzJh76Ug}{172.17.0.5}{172.17.0.5:9300}{xpack.installed=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2018-08-15T07:48:16,571][INFO ][o.e.h.n.Netty4HttpServerTransport] [es-monitoring-cluster-0] publish_address {172.17.0.5:9200}, bound_addresses {0.0.0.0:9200}
[2018-08-15T07:48:16,571][INFO ][o.e.n.Node               ] [es-monitoring-cluster-0] started
{
  "name" : "es-monitoring-cluster-0",
  "cluster_name" : "es-monitoring-cluster",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
[2018-08-15T07:48:17,075][INFO ][o.e.g.GatewayService     ] [es-monitoring-cluster-0] recovered [0] indices into cluster_state
[2018-08-15T07:48:17,571][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.watch-history-7] for index patterns [.watcher-history-7*]
Search Guard Admin v6
Will connect to localhost:9300 ... done
[2018-08-15T07:48:17,913][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.watches] for index patterns [.watches*]
[2018-08-15T07:48:18,334][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2018-08-15T07:48:18,724][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2018-08-15T07:48:19,076][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2018-08-15T07:48:19,608][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2018-08-15T07:48:20,072][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2018-08-15T07:48:20,544][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es-monitoring-cluster-0] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2018-08-15T07:48:20,947][WARN ][o.e.t.n.Netty4Transport  ] [es-monitoring-cluster-0] exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:37476}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1315) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 19 more
[2018-08-15T07:48:20,960][WARN ][o.e.t.n.Netty4Transport  ] [es-monitoring-cluster-0] exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:37476}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:392) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:359) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.logging.LoggingHandler.channelInactive(LoggingHandler.java:167) [netty-handler-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1354) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:917) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:822) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1315) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 20 more
[2018-08-15T07:48:21,136][INFO ][o.e.l.LicenseService     ] [es-monitoring-cluster-0] license [d37afedc-d3ae-4d2a-94e6-f7ef98f116dd] mode [basic] - valid
Unable to check whether cluster is sane: None of the configured nodes are available: [{#transport#-1}{TZd6Y1EqQZ2aaQaKpq6S3A}{localhost}{127.0.0.1:9300}]
ERR: Cannot connect to Elasticsearch. Please refer to elasticsearch logfile for more information
Trace:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{TZd6Y1EqQZ2aaQaKpq6S3A}{localhost}{127.0.0.1:9300}]]
	at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
	at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
	at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
	at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:378)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394)
	at com.floragunn.searchguard.tools.SearchGuardAdmin.main0(SearchGuardAdmin.java:451)
	at com.floragunn.searchguard.tools.SearchGuardAdmin.main(SearchGuardAdmin.java:124)
+ set -o errexit
+ set -o pipefail
+ searchguard=/elasticsearch/plugins/search-guard-6
+ sync
+ case "$MODE" in
+ ordinal=0
+ '[' 0 == 0 ']'
+ /fsloader/run_sgadmin.sh
{
  "name" : "es-monitoring-cluster-0",
  "cluster_name" : "es-monitoring-cluster",
  "cluster_uuid" : "3c_8cmQpSRK334Qw3Ate5w",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Search Guard Admin v6
Will connect to localhost:9300 ... done
[2018-08-15T07:48:46,762][WARN ][o.e.t.n.Netty4Transport  ] [es-monitoring-cluster-0] exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:37632}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1315) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 19 more
[2018-08-15T07:48:46,766][WARN ][o.e.t.n.Netty4Transport  ] [es-monitoring-cluster-0] exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:37632}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:392) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:359) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.handler.logging.LoggingHandler.channelInactive(LoggingHandler.java:167) [netty-handler-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1354) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:917) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:822) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,3,0)
	at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1315) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
	... 20 more
Unable to check whether cluster is sane: None of the configured nodes are available: [{#transport#-1}{xkMUWjEnQh6z4yuez8ulaQ}{localhost}{127.0.0.1:9300}]
ERR: Cannot connect to Elasticsearch. Please refer to elasticsearch logfile for more information
Trace:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{xkMUWjEnQh6z4yuez8ulaQ}{localhost}{127.0.0.1:9300}]]
	at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
	at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
	at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
	at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:378)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394)
	at com.floragunn.searchguard.tools.SearchGuardAdmin.main0(SearchGuardAdmin.java:451)
	at com.floragunn.searchguard.tools.SearchGuardAdmin.main(SearchGuardAdmin.java:124)

and this error repeats again and again.
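
My reading of the failure (an interpretation, not something stated in the log): in "invalid internal transport message format, got (16,3,3,0)" the bytes are hex, and 0x16 0x03 0x03 is the start of a TLS 1.2 handshake record. So sgadmin is opening a TLS connection to transport port 9300 while the node, which warns "Search Guard plugin installed but disabled", is speaking plain TCP there and cannot parse the ClientHello. If that is right, the node needs Search Guard enabled with transport TLS before sgadmin can connect. A minimal sketch of the relevant elasticsearch.yml settings, assuming the keystores mounted under /elasticsearch/config/certs in the log; the passwords and admin DN below are placeholders, since the real values are not in this gist:

# elasticsearch.yml (sketch; passwords and admin_dn are placeholders)
searchguard.disabled: false
searchguard.ssl.transport.keystore_filepath: certs/node.jks
searchguard.ssl.transport.keystore_password: changeit
searchguard.ssl.transport.truststore_filepath: certs/root.jks
searchguard.ssl.transport.truststore_password: changeit
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.authcz.admin_dn:
  - "CN=sgadmin"          # must match the subject DN in sgadmin.jks

The other half to check is run_sgadmin.sh itself, whose contents are not part of this gist: sgadmin would normally be pointed at sgadmin.jks and root.jks (its -ks/-ts options) so both ends of the TLS handshake agree.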
