wangxiangyu

  • bilibili
  • shanghai
@wangxiangyu
wangxiangyu / gist:ceae49818d29d6ae67fcd11f3aa550f7
Created May 10, 2019 06:09
GET /_cluster/stats?human&pretty
{
  "_nodes": {
    "total": 74,
    "successful": 74,
    "failed": 0
  },
  "cluster_name": "billions-online-jssz03",
  "timestamp": 1557468503796,
  "status": "green",
  "indices": {
[file truncated]
index shard prirep state docs store ip node
billions-sjptb-lancer-gateway-logstream-@2019.05.08-jssz03-1 1 p STARTED 17071457 1gb 10.69.67.11 jssz-billions-es-16-datanode_stale
billions-sjptb-lancer-gateway-logstream-@2019.05.08-jssz03-1 1 r STARTED 17071457 1gb 10.69.175.19 jssz-billions-es-48-datanode_stale
billions-sjptb-lancer-gateway-logstream-@2019.05.08-jssz03-1 0 r STARTED 17053877 1gb 10.69.175.20 jssz-billions-es-49-datanode_stale
billions-sjptb-lancer-gateway-logstream-@2019.05.08-jssz03-1 0 p STARTED 17053877 1gb 10.69.175.19 jssz-billions-es-48-datanode_stale01
billions-openplatform-gobase-@2019.05.08-jssz03-1 1 r STARTED 317903 52.3mb 10.69.175.19 jssz-billions-es-48-datanode_stale01
billions-openplatform-gobase-@2019.05.08-jssz03-1 1
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
close billions-bplus-access-dynamic_bstore_comm_biz-@2019.04.28-jssz03-1 3hsq2QVVSdGfSjyHOmbcDA
green open billions-main.app-svr.resource-service-@2019.05.09-jssz03-1 HrNR7TFiShCjHlHe3wGaVw 2 0 29037170 0 49gb 49gb
close billions-ops.billions.fake_flow.000161-@2019.04.26-jssz03-1 PfovucqXTraoOycP9wfyuA
green open billions-ops.apm.cdn-qos-report-@2019.01.10 3Wo_N9kCQpWlymFgrMIsqw 1 1 10000 0 12.9mb 6.4mb
green open billions-open-reconciliation-@2019.04.19-jssz03-1 Y0HRckhaTNWaZlj72K2AaA 2 1 119668 0 43.6mb 21.8mb
green ope
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1557388215 15:50:15 billions-online-jssz03 green 74 69 6518 4314 0 0 0 0 - 100.0%
[2019-04-21T22:02:54,642][WARN ][o.e.a.b.TransportShardBulkAction] [jssz-billions-es-02-datanode_hot] [[billions-main.app-svr.app-feed-@2019.04.21-jssz01-0][0]] failed to perform indices:data/write/bulk[s] on replica [billions-main.app-svr.app-feed-@2019.04.21-jssz01-0]
[0], node[rrGzzIqxTGS8-7UcmBCWVQ], [R], s[STARTED], a[id=8cojHrnlQ6WQj54m16pLOg]
org.elasticsearch.transport.NodeDisconnectedException: [jssz-billions-es-01-datanode_hot][10.69.23.23:9300][indices:data/write/bulk[s][r]] disconnected
[2019-04-21T22:02:54,642][WARN ][o.e.a.b.TransportShardBulkAction] [jssz-billions-es-02-datanode_hot] [[billions-main.app-svr.app-feed-@2019.04.21-jssz01-0][0]] failed to perform indices:data/write/bulk[s] on replica [billions-main.app-svr.app-feed-@2019.04.21-jssz01-0]
[0], node[rrGzzIqxTGS8-7UcmBCWVQ], [R], s[STARTED], a[id=8cojHrnlQ6WQj54m16pLOg]
org.elasticsearch.transport.NodeDisconnectedException: [jssz-billions-es-01-datanode_hot][10.69.23.23:9300][indices:data/write/bulk[s][r]] disconnected
[2019-04-21T22:0
[2019-04-21T12:55:59,254][WARN ][i.n.c.AbstractChannelHandlerContext] An exception 'java.lang.OutOfMemoryError: unable to create new native thread' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method) ~[?:1.8.0_162]
at java.lang.Thread.start(Thread.java:717) ~[?:1.8.0_162]
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957) ~[?:1.8.0_162]
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378) ~[?:1.8.0_162]
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.doExecute(EsThreadPoolExecutor.java:94) ~[elasticsearch-5.4.3.jar:5.4.3]
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:89) ~[elasticsearch-5.4.3.jar:5.4.3]
at org.elasticsearch.transport.TcpTran
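The `java.lang.OutOfMemoryError: unable to create new native thread` above usually signals an OS-level thread limit, not JVM heap exhaustion — the JVM asked the kernel for a new thread and was refused. A quick diagnostic sketch, assuming a Linux host (the `/proc` paths are standard Linux; the PID placeholder is illustrative, not from the gist):

```shell
# Per-user process/thread limit for the current shell user;
# the JVM hits this ceiling when it tries to spawn one thread too many.
ulimit -u

# System-wide ceilings that can also block thread creation.
cat /proc/sys/kernel/threads-max 2>/dev/null
cat /proc/sys/kernel/pid_max 2>/dev/null

# Threads currently held by the Elasticsearch JVM
# (<elasticsearch_pid> is a placeholder, do not copy literally):
# ps -o nlwp= -p <elasticsearch_pid>
```

If `ulimit -u` for the Elasticsearch user is low (a common default is 4096), raising it in `/etc/security/limits.conf` is the usual fix for this class of OOM.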
@wangxiangyu
wangxiangyu / gist:548fedec87560a5cf5fc6cf80c75d285
Created April 9, 2019 10:26
cluster state update task [zen-disco-receive(from master [master ...])] above the warn threshold of 30s
[2019-04-09T17:58:54,867][TRACE][o.e.c.s.ClusterService ] [jssz-billions-es-05-datanode_stale] will process [zen-disco-receive(from master [master {jssz-billions-es-01-masternode}{IfiUfj6nRKSRpQqSL2tmkQ}{D22n7s9YRwieNT67hyPbUg}{10.69.23.23}{10.69.23.23:9310} committed version [1461362]])]
[2019-04-09T17:58:54,867][DEBUG][o.e.c.s.ClusterService ] [jssz-billions-es-05-datanode_stale] processing [zen-disco-receive(from master [master {jssz-billions-es-01-masternode}{IfiUfj6nRKSRpQqSL2tmkQ}{D22n7s9YRwieNT67hyPbUg}{10.69.23.23}{10.69.23.23:9310} committed version [1461362]])]: execute
[2019-04-09T17:58:55,054][TRACE][o.e.c.s.ClusterService ] [jssz-billions-es-05-datanode_stale] cluster state updated, source [zen-disco-receive(from master [master {jssz-billions-es-01-masternode}{IfiUfj6nRKSRpQqSL2tmkQ}{D22n7s9YRwieNT67hyPbUg}{10.69.23.23}{10.69.23.23:9310} committed version [1461362]])]
cluster uuid: lf3z-gwATBiM-tTTtdTtAg
version: 1461362
state uuid: BtuRtWbNS-O2H6YsBpa78g
from_diff: false
meta data version: 1460066
[billions-link.im.app
# common settings
log_path: /srv/kafka-logtailer/log/kafka-logtailer.log
log_level: info

# log settings
logsettings:
  - logname: bigdata_v5_access
    zookeeper_list: 10.0.11.105:2181,10.0.11.138:2181,10.0.11.103:2181
    topic_name: bigdata_v5_access
    log_max_length_byte: 1000000
    file_path: /var/log/nginx/
@wangxiangyu
wangxiangyu / gist:2fb73be1266fd74cd9e5
Created December 3, 2014 07:11
Two consumers belonging to the same group get the same message
# kafka-python's legacy SimpleConsumer API
from kafka import KafkaClient, SimpleConsumer

broker_list = '172.16.29.216:9093,172.16.29.216:9092'
kafka = KafkaClient(broker_list)
# group 'my-group', topic 'test'
consumer = SimpleConsumer(kafka, 'my-group', 'test')
for message in consumer:
    print(message)
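The duplication happens because `SimpleConsumer` fetches partitions directly by offset and does no group coordination, so two instances sharing a group name each read every message. Proper consumer groups avoid this by assigning each partition to exactly one member. A toy sketch of range-style partition assignment in plain Python (the function and member names are illustrative, not from kafka-python):

```python
def range_assign(members, partitions):
    """Assign each partition to exactly one group member (range strategy).

    Members are sorted, then partitions are split into contiguous ranges,
    with the first (len(partitions) % len(members)) members taking one extra.
    """
    members = sorted(members)
    partitions = sorted(partitions)
    per, extra = divmod(len(partitions), len(members))
    assignment, i = {}, 0
    for idx, m in enumerate(members):
        n = per + (1 if idx < extra else 0)
        assignment[m] = partitions[i:i + n]
        i += n
    return assignment

# Two members in the same group never share a partition:
print(range_assign(["consumer-a", "consumer-b"], [0, 1, 2]))
# → {'consumer-a': [0, 1], 'consumer-b': [2]}
```

Because each partition lands on exactly one member, each message is delivered to only one consumer in the group — the behavior the gist title expects but `SimpleConsumer` never provided.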