@ceocoder
Created November 8, 2011 01:14
ElasticSearch Unicast Bug
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 30s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: "172.21.120.91:9200"
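
Note on the config above: the unicast host is given as 172.21.120.91:9200, the HTTP port, but zen unicast discovery pings the transport port (9300 by default; the logs below show transport bound to 9300 and HTTP to 9200). In the second run below the node pings 9200, gets no response, waits out the full 30s ping timeout ("waited for 30s and no initial state was set by the discovery"), and then elects itself master. Also visible in the logs: the first run shows defaults (ping.timeout [3s], initial hosts [], multicast still enabled), so the settings apparently only took effect on the second run, and both runs report minimum_master_nodes [-1] despite the value set above. A likely fix, assuming the default transport port, is to point unicast discovery at 9300:

discovery.zen.ping.unicast.hosts: "172.21.120.91:9300"
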
[2011-11-07 16:33:37,658][INFO ][node ] [Widget] {0.18.2}[10295]: initializing ...
[2011-11-07 16:33:37,662][INFO ][plugins ] [Widget] loaded [], sites []
[2011-11-07 16:33:38,257][DEBUG][threadpool ] [Widget] creating thread_pool [cached], type [cached], keep_alive [30s]
[2011-11-07 16:33:38,259][DEBUG][threadpool ] [Widget] creating thread_pool [index], type [cached], keep_alive [5m]
[2011-11-07 16:33:38,259][DEBUG][threadpool ] [Widget] creating thread_pool [search], type [cached], keep_alive [5m]
[2011-11-07 16:33:38,260][DEBUG][threadpool ] [Widget] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2011-11-07 16:33:38,260][DEBUG][threadpool ] [Widget] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2011-11-07 16:33:38,262][DEBUG][threadpool ] [Widget] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2011-11-07 16:33:38,262][DEBUG][threadpool ] [Widget] creating thread_pool [snapshot], type [scaling], min [1], size [80], keep_alive [5m]
[2011-11-07 16:33:38,270][DEBUG][transport.netty ] [Widget] using worker_count[32], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2011-11-07 16:33:38,279][DEBUG][discovery.zen.ping.multicast] [Widget] using group [224.2.2.4], with port [54328], ttl [3], and address [null]
[2011-11-07 16:33:38,282][DEBUG][discovery.zen.ping.unicast] [Widget] using initial hosts [], with concurrent_connects [10]
[2011-11-07 16:33:38,283][DEBUG][discovery.zen ] [Widget] using ping.timeout [3s]
[2011-11-07 16:33:38,286][DEBUG][discovery.zen.elect ] [Widget] using minimum_master_nodes [-1]
[2011-11-07 16:33:38,286][DEBUG][discovery.zen.fd ] [Widget] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2011-11-07 16:33:38,289][DEBUG][discovery.zen.fd ] [Widget] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2011-11-07 16:33:38,302][DEBUG][monitor.jvm ] [Widget] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2011-11-07 16:33:38,810][DEBUG][monitor.os ] [Widget] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@6405ce40] with refresh_interval [1s]
[2011-11-07 16:33:38,815][DEBUG][monitor.process ] [Widget] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@3529c445] with refresh_interval [1s]
[2011-11-07 16:33:38,817][DEBUG][monitor.jvm ] [Widget] Using refresh_interval [1s]
[2011-11-07 16:33:38,818][DEBUG][monitor.network ] [Widget] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@1b275a34] with refresh_interval [5s]
[2011-11-07 16:33:38,823][DEBUG][monitor.network ] [Widget] net_info
host [skunk-back-01]
eth2 display_name [eth2]
address [/fe80:0:0:0:21b:21ff:fe96:8368%4] [/172.21.120.91]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
[2011-11-07 16:33:38,831][DEBUG][env ] [Widget] using node location [[/home/nosql/deploy/elastic/data/skunktest/nodes/0]], local_node_id [0]
[2011-11-07 16:33:38,955][DEBUG][cache.memory ] [Widget] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2011-11-07 16:33:38,962][DEBUG][cluster.routing.allocation.decider] [Widget] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2011-11-07 16:33:38,963][DEBUG][cluster.routing.allocation.decider] [Widget] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2011-11-07 16:33:38,963][DEBUG][cluster.routing.allocation.decider] [Widget] using [cluster_concurrent_rebalance] with [2]
[2011-11-07 16:33:38,965][DEBUG][gateway.local ] [Widget] using initial_shards [quorum], list_timeout [30s]
[2011-11-07 16:33:38,977][DEBUG][indices.recovery ] [Widget] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2011-11-07 16:33:39,067][DEBUG][http.netty ] [Widget] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2011-11-07 16:33:39,071][DEBUG][indices.memory ] [Widget] using index_buffer_size [2.9gb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2011-11-07 16:33:39,077][DEBUG][indices.cache.filter ] [Widget] using [node] filter cache with size [20%], actual_size [5.9gb]
[2011-11-07 16:33:39,116][INFO ][node ] [Widget] {0.18.2}[10295]: initialized
[2011-11-07 16:33:39,116][INFO ][node ] [Widget] {0.18.2}[10295]: starting ...
[2011-11-07 16:33:39,131][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2011-11-07 16:33:39,164][DEBUG][transport.netty ] [Widget] Bound to address [/0:0:0:0:0:0:0:0:9300]
[2011-11-07 16:33:39,166][INFO ][transport ] [Widget] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.21.120.91:9300]}
[2011-11-07 16:33:42,200][DEBUG][discovery.zen ] [Widget] ping responses: {none}
[2011-11-07 16:33:42,202][DEBUG][cluster.service ] [Widget] processing [zen-disco-join (elected_as_master)]: execute
[2011-11-07 16:33:42,203][DEBUG][cluster.service ] [Widget] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[2011-11-07 16:33:42,204][INFO ][cluster.service ] [Widget] new_master [Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-07 16:33:42,240][DEBUG][transport.netty ] [Widget] Connected to node [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]]
[2011-11-07 16:33:42,242][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: execute
[2011-11-07 16:33:42,242][DEBUG][cluster.service ] [Widget] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[2011-11-07 16:33:42,242][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-11-07 16:33:42,242][INFO ][discovery ] [Widget] skunktest/9YaE9rz3S4CxyxzK2iHk5A
[2011-11-07 16:33:42,309][DEBUG][gateway.local ] [Widget] [find_latest_state]: loading metadata from [/home/nosql/deploy/elastic/data/skunktest/nodes/0/_state/metadata-4]
[2011-11-07 16:33:42,311][DEBUG][gateway.local ] [Widget] [find_latest_state]: loading started shards from [/home/nosql/deploy/elastic/data/skunktest/nodes/0/_state/shards-4]
[2011-11-07 16:33:42,316][DEBUG][gateway.local ] [Widget] elected state from [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]]
[2011-11-07 16:33:42,316][DEBUG][cluster.service ] [Widget] processing [local-gateway-elected-state]: execute
[2011-11-07 16:33:42,320][INFO ][http ] [Widget] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.21.120.91:9200]}
[2011-11-07 16:33:42,320][INFO ][node ] [Widget] {0.18.2}[10295]: started
[2011-11-07 16:33:42,325][DEBUG][gateway.local ] [Widget] [lomabucket-0][0]: allocating [[lomabucket-0][0], node[null], [P], s[UNASSIGNED]] to [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]] on primary allocation
[2011-11-07 16:33:42,325][DEBUG][gateway.local ] [Widget] [lomabucket-0][1]: allocating [[lomabucket-0][1], node[null], [P], s[UNASSIGNED]] to [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]] on primary allocation
[2011-11-07 16:33:42,325][DEBUG][gateway.local ] [Widget] [lomabucket-0][2]: allocating [[lomabucket-0][2], node[null], [P], s[UNASSIGNED]] to [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]] on primary allocation
[2011-11-07 16:33:42,326][DEBUG][gateway.local ] [Widget] [lomabucket-0][3]: allocating [[lomabucket-0][3], node[null], [P], s[UNASSIGNED]] to [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]] on primary allocation
[2011-11-07 16:33:42,326][DEBUG][gateway.local ] [Widget] [lomabucket-0][4]: throttling allocation [[lomabucket-0][4], node[null], [P], s[UNASSIGNED]] to [[[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]]] on primary allocation
[2011-11-07 16:33:42,328][DEBUG][cluster.service ] [Widget] cluster state updated, version [5], source [local-gateway-elected-state]
[2011-11-07 16:33:42,328][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: execute
[2011-11-07 16:33:42,328][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-11-07 16:33:42,328][DEBUG][indices.cluster ] [Widget] [lomabucket-0] creating index
[2011-11-07 16:33:42,329][DEBUG][indices ] [Widget] creating Index [lomabucket-0], shards [5]/[0]
[2011-11-07 16:33:42,453][DEBUG][index.mapper ] [Widget] [lomabucket-0] using dynamic[true], default mapping: location[null] and source[{
"_default_" : {
}
}]
[2011-11-07 16:33:42,453][DEBUG][index.cache.field.data.resident] [Widget] [lomabucket-0] using [resident] field cache with max_size [-1], expire [null]
[2011-11-07 16:33:42,455][DEBUG][index.cache ] [Widget] [lomabucket-0] Using stats.refresh_interval [1s]
[2011-11-07 16:33:42,463][DEBUG][indices.cluster ] [Widget] [lomabucket-0] adding mapping [2], source [{"2":{"_source":{"compress":true,"excludes":["body*"]},"properties":{"trunc_body":{"type":"string","index":"no","store":"yes","include_in_all":false}}}}]
[2011-11-07 16:33:42,503][DEBUG][indices.cluster ] [Widget] [lomabucket-0] adding mapping [1], source [{"1":{"_source":{"compress":true,"excludes":["body*"]},"properties":{"trunc_body":{"type":"string","index":"no","store":"yes","include_in_all":false}}}}]
[2011-11-07 16:33:42,505][DEBUG][indices.cluster ] [Widget] [lomabucket-0][0] creating shard
[2011-11-07 16:33:42,505][DEBUG][index.service ] [Widget] [lomabucket-0] creating shard_id [0]
[2011-11-07 16:33:42,580][DEBUG][index.deletionpolicy ] [Widget] [lomabucket-0][0] Using [keep_only_last] deletion policy
[2011-11-07 16:33:42,581][DEBUG][index.merge.policy ] [Widget] [lomabucket-0][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2011-11-07 16:33:42,582][DEBUG][index.merge.scheduler ] [Widget] [lomabucket-0][0] using [concurrent] merge scheduler with max_thread_count[3]
[2011-11-07 16:33:42,583][DEBUG][index.shard.service ] [Widget] [lomabucket-0][0] state: [CREATED]
[2011-11-07 16:33:42,586][DEBUG][index.translog ] [Widget] [lomabucket-0][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2011-11-07 16:33:42,588][DEBUG][index.shard.service ] [Widget] [lomabucket-0][0] state: [CREATED]->[RECOVERING], reason [from gateway]
[2011-11-07 16:33:42,589][DEBUG][indices.cluster ] [Widget] [lomabucket-0][1] creating shard
[2011-11-07 16:33:42,589][DEBUG][index.service ] [Widget] [lomabucket-0] creating shard_id [1]
[2011-11-07 16:33:42,589][DEBUG][index.gateway ] [Widget] [lomabucket-0][0] starting recovery from local ...
[2011-11-07 16:33:42,602][DEBUG][index.engine.robin ] [Widget] [lomabucket-0][0] Starting engine
[2011-11-07 16:33:42,614][DEBUG][index.deletionpolicy ] [Widget] [lomabucket-0][1] Using [keep_only_last] deletion policy
[2011-11-07 16:33:42,615][DEBUG][index.merge.policy ] [Widget] [lomabucket-0][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2011-11-07 16:33:42,615][DEBUG][index.merge.scheduler ] [Widget] [lomabucket-0][1] using [concurrent] merge scheduler with max_thread_count[3]
[2011-11-07 16:33:42,615][DEBUG][index.shard.service ] [Widget] [lomabucket-0][1] state: [CREATED]
[2011-11-07 16:33:42,618][DEBUG][index.translog ] [Widget] [lomabucket-0][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2011-11-07 16:33:42,618][DEBUG][indices.memory ] [Widget] recalculating shard indexing buffer (reason=created_shard[lomabucket-0][1]), total is [2.9gb] with [1] active shards, each shard set to [512mb]
[2011-11-07 16:33:42,649][DEBUG][index.shard.service ] [Widget] [lomabucket-0][1] state: [CREATED]->[RECOVERING], reason [from gateway]
[2011-11-07 16:33:42,649][DEBUG][indices.cluster ] [Widget] [lomabucket-0][2] creating shard
[2011-11-07 16:33:42,649][DEBUG][index.gateway ] [Widget] [lomabucket-0][1] starting recovery from local ...
[2011-11-07 16:33:42,649][DEBUG][index.service ] [Widget] [lomabucket-0] creating shard_id [2]
[2011-11-07 16:33:42,650][DEBUG][index.engine.robin ] [Widget] [lomabucket-0][1] Starting engine
[2011-11-07 16:33:42,651][DEBUG][index.shard.service ] [Widget] [lomabucket-0][0] state: [RECOVERING]->[STARTED], reason [post recovery]
[2011-11-07 16:33:42,651][DEBUG][index.shard.service ] [Widget] [lomabucket-0][0] scheduling refresher every 1s
[2011-11-07 16:33:42,652][DEBUG][index.shard.service ] [Widget] [lomabucket-0][0] scheduling optimizer / merger every 1s
[2011-11-07 16:33:42,652][DEBUG][index.gateway ] [Widget] [lomabucket-0][0] recovery completed from local, took [63ms]
index : files [3] with total_size [86b], took[13ms]
: recovered_files [0] with total_size [0b]
: reusing_files [3] with total_size [86b]
translog : number_of_operations [0], took [63ms]
[2011-11-07 16:33:42,652][DEBUG][cluster.action.shard ] [Widget] sending shard started for [lomabucket-0][0], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,653][DEBUG][cluster.action.shard ] [Widget] received shard started for [lomabucket-0][0], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,653][DEBUG][index.shard.service ] [Widget] [lomabucket-0][1] state: [RECOVERING]->[STARTED], reason [post recovery]
[2011-11-07 16:33:42,653][DEBUG][index.shard.service ] [Widget] [lomabucket-0][1] scheduling refresher every 1s
[2011-11-07 16:33:42,653][DEBUG][index.shard.service ] [Widget] [lomabucket-0][1] scheduling optimizer / merger every 1s
[2011-11-07 16:33:42,654][DEBUG][index.gateway ] [Widget] [lomabucket-0][1] recovery completed from local, took [5ms]
index : files [3] with total_size [86b], took[1ms]
: recovered_files [0] with total_size [0b]
: reusing_files [3] with total_size [86b]
translog : number_of_operations [0], took [5ms]
[2011-11-07 16:33:42,654][DEBUG][cluster.action.shard ] [Widget] sending shard started for [lomabucket-0][1], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,654][DEBUG][cluster.action.shard ] [Widget] received shard started for [lomabucket-0][1], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,686][DEBUG][index.deletionpolicy ] [Widget] [lomabucket-0][2] Using [keep_only_last] deletion policy
[2011-11-07 16:33:42,687][DEBUG][index.merge.policy ] [Widget] [lomabucket-0][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2011-11-07 16:33:42,687][DEBUG][index.merge.scheduler ] [Widget] [lomabucket-0][2] using [concurrent] merge scheduler with max_thread_count[3]
[2011-11-07 16:33:42,687][DEBUG][index.shard.service ] [Widget] [lomabucket-0][2] state: [CREATED]
[2011-11-07 16:33:42,689][DEBUG][index.translog ] [Widget] [lomabucket-0][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2011-11-07 16:33:42,690][DEBUG][indices.memory ] [Widget] recalculating shard indexing buffer (reason=created_shard[lomabucket-0][2]), total is [2.9gb] with [2] active shards, each shard set to [512mb]
[2011-11-07 16:33:42,690][DEBUG][index.shard.service ] [Widget] [lomabucket-0][2] state: [CREATED]->[RECOVERING], reason [from gateway]
[2011-11-07 16:33:42,690][DEBUG][indices.cluster ] [Widget] [lomabucket-0][3] creating shard
[2011-11-07 16:33:42,690][DEBUG][index.gateway ] [Widget] [lomabucket-0][2] starting recovery from local ...
[2011-11-07 16:33:42,690][DEBUG][index.service ] [Widget] [lomabucket-0] creating shard_id [3]
[2011-11-07 16:33:42,691][DEBUG][index.engine.robin ] [Widget] [lomabucket-0][2] Starting engine
[2011-11-07 16:33:42,693][DEBUG][index.shard.service ] [Widget] [lomabucket-0][2] state: [RECOVERING]->[STARTED], reason [post recovery]
[2011-11-07 16:33:42,693][DEBUG][index.shard.service ] [Widget] [lomabucket-0][2] scheduling refresher every 1s
[2011-11-07 16:33:42,693][DEBUG][index.shard.service ] [Widget] [lomabucket-0][2] scheduling optimizer / merger every 1s
[2011-11-07 16:33:42,693][DEBUG][index.gateway ] [Widget] [lomabucket-0][2] recovery completed from local, took [3ms]
index : files [3] with total_size [86b], took[1ms]
: recovered_files [0] with total_size [0b]
: reusing_files [3] with total_size [86b]
translog : number_of_operations [0], took [3ms]
[2011-11-07 16:33:42,693][DEBUG][cluster.action.shard ] [Widget] sending shard started for [lomabucket-0][2], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,693][DEBUG][cluster.action.shard ] [Widget] received shard started for [lomabucket-0][2], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,714][DEBUG][index.deletionpolicy ] [Widget] [lomabucket-0][3] Using [keep_only_last] deletion policy
[2011-11-07 16:33:42,714][DEBUG][index.merge.policy ] [Widget] [lomabucket-0][3] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2011-11-07 16:33:42,714][DEBUG][index.merge.scheduler ] [Widget] [lomabucket-0][3] using [concurrent] merge scheduler with max_thread_count[3]
[2011-11-07 16:33:42,715][DEBUG][index.shard.service ] [Widget] [lomabucket-0][3] state: [CREATED]
[2011-11-07 16:33:42,717][DEBUG][index.translog ] [Widget] [lomabucket-0][3] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2011-11-07 16:33:42,717][DEBUG][indices.memory ] [Widget] recalculating shard indexing buffer (reason=created_shard[lomabucket-0][3]), total is [2.9gb] with [3] active shards, each shard set to [512mb]
[2011-11-07 16:33:42,717][DEBUG][index.shard.service ] [Widget] [lomabucket-0][3] state: [CREATED]->[RECOVERING], reason [from gateway]
[2011-11-07 16:33:42,717][DEBUG][index.gateway ] [Widget] [lomabucket-0][3] starting recovery from local ...
[2011-11-07 16:33:42,718][DEBUG][index.engine.robin ] [Widget] [lomabucket-0][3] Starting engine
[2011-11-07 16:33:42,720][DEBUG][index.shard.service ] [Widget] [lomabucket-0][3] state: [RECOVERING]->[STARTED], reason [post recovery]
[2011-11-07 16:33:42,720][INFO ][gateway ] [Widget] recovered [1] indices into cluster_state
[2011-11-07 16:33:42,720][DEBUG][index.shard.service ] [Widget] [lomabucket-0][3] scheduling refresher every 1s
[2011-11-07 16:33:42,720][DEBUG][cluster.service ] [Widget] processing [local-gateway-elected-state]: done applying updated cluster_state
[2011-11-07 16:33:42,720][DEBUG][index.shard.service ] [Widget] [lomabucket-0][3] scheduling optimizer / merger every 1s
[2011-11-07 16:33:42,721][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][0], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2011-11-07 16:33:42,721][DEBUG][index.gateway ] [Widget] [lomabucket-0][3] recovery completed from local, took [4ms]
index : files [3] with total_size [86b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [3] with total_size [86b]
translog : number_of_operations [0], took [3ms]
[2011-11-07 16:33:42,721][DEBUG][cluster.action.shard ] [Widget] sending shard started for [lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,721][DEBUG][cluster.action.shard ] [Widget] applying started shards [[lomabucket-0][0], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], [lomabucket-0][1], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], [lomabucket-0][2], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2011-11-07 16:33:42,721][DEBUG][cluster.action.shard ] [Widget] received shard started for [lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,721][DEBUG][gateway.local ] [Widget] [lomabucket-0][4]: allocating [[lomabucket-0][4], node[null], [P], s[UNASSIGNED]] to [[Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]]] on primary allocation
[2011-11-07 16:33:42,722][DEBUG][cluster.service ] [Widget] cluster state updated, version [6], source [shard-started ([lomabucket-0][0], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2011-11-07 16:33:42,722][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: execute
[2011-11-07 16:33:42,722][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-11-07 16:33:42,722][DEBUG][cluster.action.shard ] [Widget] sending shard started for [lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [master [Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2011-11-07 16:33:42,723][DEBUG][cluster.action.shard ] [Widget] received shard started for [lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [master [Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2011-11-07 16:33:42,723][DEBUG][indices.cluster ] [Widget] [lomabucket-0][4] creating shard
[2011-11-07 16:33:42,723][DEBUG][index.service ] [Widget] [lomabucket-0] creating shard_id [4]
[2011-11-07 16:33:42,746][DEBUG][index.deletionpolicy ] [Widget] [lomabucket-0][4] Using [keep_only_last] deletion policy
[2011-11-07 16:33:42,747][DEBUG][index.merge.policy ] [Widget] [lomabucket-0][4] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2011-11-07 16:33:42,747][DEBUG][index.merge.scheduler ] [Widget] [lomabucket-0][4] using [concurrent] merge scheduler with max_thread_count[3]
[2011-11-07 16:33:42,747][DEBUG][index.shard.service ] [Widget] [lomabucket-0][4] state: [CREATED]
[2011-11-07 16:33:42,749][DEBUG][index.translog ] [Widget] [lomabucket-0][4] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2011-11-07 16:33:42,750][DEBUG][indices.memory ] [Widget] recalculating shard indexing buffer (reason=created_shard[lomabucket-0][4]), total is [2.9gb] with [4] active shards, each shard set to [512mb]
[2011-11-07 16:33:42,750][DEBUG][index.shard.service ] [Widget] [lomabucket-0][4] state: [CREATED]->[RECOVERING], reason [from gateway]
[2011-11-07 16:33:42,750][DEBUG][index.gateway ] [Widget] [lomabucket-0][4] starting recovery from local ...
[2011-11-07 16:33:42,751][DEBUG][index.engine.robin ] [Widget] [lomabucket-0][4] Starting engine
[2011-11-07 16:33:42,751][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][0], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2011-11-07 16:33:42,751][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][1], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2011-11-07 16:33:42,751][DEBUG][cluster.action.shard ] [Widget] applying started shards [[lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], [lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2011-11-07 16:33:42,752][DEBUG][cluster.service ] [Widget] cluster state updated, version [7], source [shard-started ([lomabucket-0][1], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2011-11-07 16:33:42,752][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: execute
[2011-11-07 16:33:42,752][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][1], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][2], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2011-11-07 16:33:42,753][DEBUG][index.shard.service ] [Widget] [lomabucket-0][4] state: [RECOVERING]->[STARTED], reason [post recovery]
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][2], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2011-11-07 16:33:42,753][DEBUG][index.shard.service ] [Widget] [lomabucket-0][4] scheduling refresher every 1s
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2011-11-07 16:33:42,753][DEBUG][index.shard.service ] [Widget] [lomabucket-0][4] scheduling optimizer / merger every 1s
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [master [Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2011-11-07 16:33:42,753][DEBUG][index.gateway ] [Widget] [lomabucket-0][4] recovery completed from local, took [3ms]
index : files [3] with total_size [86b], took[1ms]
: recovered_files [0] with total_size [0b]
: reusing_files [3] with total_size [86b]
translog : number_of_operations [0], took [3ms]
[2011-11-07 16:33:42,753][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][3], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [master [Widget][9YaE9rz3S4CxyxzK2iHk5A][inet[/172.21.120.91:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2011-11-07 16:33:42,753][DEBUG][cluster.action.shard ] [Widget] sending shard started for [lomabucket-0][4], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,754][DEBUG][cluster.action.shard ] [Widget] received shard started for [lomabucket-0][4], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING], reason [after recovery from gateway]
[2011-11-07 16:33:42,754][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][4], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2011-11-07 16:33:42,754][DEBUG][cluster.action.shard ] [Widget] applying started shards [[lomabucket-0][4], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2011-11-07 16:33:42,754][DEBUG][cluster.service ] [Widget] cluster state updated, version [8], source [shard-started ([lomabucket-0][4], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2011-11-07 16:33:42,754][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: execute
[2011-11-07 16:33:42,754][DEBUG][river.cluster ] [Widget] processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-11-07 16:33:42,755][DEBUG][cluster.service ] [Widget] processing [shard-started ([lomabucket-0][4], node[9YaE9rz3S4CxyxzK2iHk5A], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2011-11-07 16:33:52,243][DEBUG][cluster.service ] [Widget] processing [routing-table-updater]: execute
[2011-11-07 16:33:52,243][DEBUG][cluster.service ] [Widget] processing [routing-table-updater]: no change in cluster_state
[2011-11-07 16:40:34,885][INFO ][node ] [Widget] {0.18.2}[10295]: stopping ...
[2011-11-07 16:40:34,893][DEBUG][index.shard.service ] [Widget] [lomabucket-0][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2011-11-07 16:40:34,893][DEBUG][index.shard.service ] [Widget] [lomabucket-0][1] state: [STARTED]->[CLOSED], reason [shutdown]
[2011-11-07 16:40:34,893][DEBUG][index.shard.service ] [Widget] [lomabucket-0][3] state: [STARTED]->[CLOSED], reason [shutdown]
[2011-11-07 16:40:34,894][DEBUG][index.shard.service ] [Widget] [lomabucket-0][2] state: [STARTED]->[CLOSED], reason [shutdown]
[2011-11-07 16:40:34,894][DEBUG][index.shard.service ] [Widget] [lomabucket-0][4] state: [STARTED]->[CLOSED], reason [shutdown]
[2011-11-07 16:40:34,907][INFO ][node ] [Widget] {0.18.2}[10295]: stopped
[2011-11-07 16:40:34,908][INFO ][node ] [Widget] {0.18.2}[10295]: closing ...
[2011-11-07 16:40:34,916][INFO ][node ] [Widget] {0.18.2}[10295]: closed
[2011-11-07 16:45:51,107][INFO ][node ] [Banner, Betty Ross] {0.18.2}[12231]: initializing ...
[2011-11-07 16:45:51,112][INFO ][plugins ] [Banner, Betty Ross] loaded [], sites []
[2011-11-07 16:45:51,708][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [cached], type [cached], keep_alive [30s]
[2011-11-07 16:45:51,710][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [index], type [cached], keep_alive [5m]
[2011-11-07 16:45:51,710][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [search], type [cached], keep_alive [5m]
[2011-11-07 16:45:51,710][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2011-11-07 16:45:51,710][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2011-11-07 16:45:51,712][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2011-11-07 16:45:51,712][DEBUG][threadpool ] [Banner, Betty Ross] creating thread_pool [snapshot], type [scaling], min [1], size [80], keep_alive [5m]
[2011-11-07 16:45:51,720][DEBUG][transport.netty ] [Banner, Betty Ross] using worker_count[32], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2011-11-07 16:45:51,730][DEBUG][discovery.zen.ping.unicast] [Banner, Betty Ross] using initial hosts [172.21.120.91:9200], with concurrent_connects [10]
[2011-11-07 16:45:51,732][DEBUG][discovery.zen ] [Banner, Betty Ross] using ping.timeout [30s]
[2011-11-07 16:45:51,735][DEBUG][discovery.zen.elect ] [Banner, Betty Ross] using minimum_master_nodes [-1]
[2011-11-07 16:45:51,736][DEBUG][discovery.zen.fd ] [Banner, Betty Ross] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2011-11-07 16:45:51,738][DEBUG][discovery.zen.fd ] [Banner, Betty Ross] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2011-11-07 16:45:51,751][DEBUG][monitor.jvm ] [Banner, Betty Ross] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2011-11-07 16:45:52,259][DEBUG][monitor.os ] [Banner, Betty Ross] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@2fbef1ac] with refresh_interval [1s]
[2011-11-07 16:45:52,264][DEBUG][monitor.process ] [Banner, Betty Ross] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@2d388e5e] with refresh_interval [1s]
[2011-11-07 16:45:52,266][DEBUG][monitor.jvm ] [Banner, Betty Ross] Using refresh_interval [1s]
[2011-11-07 16:45:52,267][DEBUG][monitor.network ] [Banner, Betty Ross] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@67e779a1] with refresh_interval [5s]
[2011-11-07 16:45:52,272][DEBUG][monitor.network ] [Banner, Betty Ross] net_info
host [skunk-back-01]
eth2 display_name [eth2]
address [/fe80:0:0:0:21b:21ff:fe96:8368%4] [/172.21.120.91]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
[2011-11-07 16:45:52,283][DEBUG][env ] [Banner, Betty Ross] using node location [[/home/nosql/deploy/elastic/data/skunktest/nodes/0]], local_node_id [0]
[2011-11-07 16:45:52,414][DEBUG][cache.memory ] [Banner, Betty Ross] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2011-11-07 16:45:52,421][DEBUG][cluster.routing.allocation.decider] [Banner, Betty Ross] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2011-11-07 16:45:52,422][DEBUG][cluster.routing.allocation.decider] [Banner, Betty Ross] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2011-11-07 16:45:52,422][DEBUG][cluster.routing.allocation.decider] [Banner, Betty Ross] using [cluster_concurrent_rebalance] with [2]
[2011-11-07 16:45:52,424][DEBUG][gateway.local ] [Banner, Betty Ross] using initial_shards [quorum], list_timeout [30s]
[2011-11-07 16:45:52,436][DEBUG][indices.recovery ] [Banner, Betty Ross] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2011-11-07 16:45:52,525][DEBUG][http.netty ] [Banner, Betty Ross] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2011-11-07 16:45:52,529][DEBUG][indices.memory ] [Banner, Betty Ross] using index_buffer_size [2.9gb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2011-11-07 16:45:52,535][DEBUG][indices.cache.filter ] [Banner, Betty Ross] using [node] filter cache with size [20%], actual_size [5.9gb]
[2011-11-07 16:45:52,574][INFO ][node ] [Banner, Betty Ross] {0.18.2}[12231]: initialized
[2011-11-07 16:45:52,574][INFO ][node ] [Banner, Betty Ross] {0.18.2}[12231]: starting ...
[2011-11-07 16:45:52,588][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2011-11-07 16:45:52,622][DEBUG][transport.netty ] [Banner, Betty Ross] Bound to address [/0:0:0:0:0:0:0:0:9300]
[2011-11-07 16:45:52,623][INFO ][transport ] [Banner, Betty Ross] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.21.120.91:9300]}
[2011-11-07 16:46:22,631][WARN ][discovery ] [Banner, Betty Ross] waited for 30s and no initial state was set by the discovery
[2011-11-07 16:46:22,631][INFO ][discovery ] [Banner, Betty Ross] skunktest/lE7HFacEQae0R5JhUtYTnw
[2011-11-07 16:46:22,636][DEBUG][discovery.zen ] [Banner, Betty Ross] ping responses: {none}
[2011-11-07 16:46:22,640][DEBUG][cluster.service ] [Banner, Betty Ross] processing [zen-disco-join (elected_as_master)]: execute
[2011-11-07 16:46:22,641][DEBUG][cluster.service ] [Banner, Betty Ross] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[2011-11-07 16:46:22,644][INFO ][cluster.service ] [Banner, Betty Ross] new_master [Banner, Betty Ross][lE7HFacEQae0R5JhUtYTnw][inet[/172.21.120.91:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-07 16:46:22,653][DEBUG][transport.netty ] [Banner, Betty Ross] Connected to node [[Banner, Betty Ross][lE7HFacEQae0R5JhUtYTnw][inet[/172.21.120.91:9300]]]
[2011-11-07 16:46:22,658][DEBUG][river.cluster ] [Banner, Betty Ross] processing [reroute_rivers_node_changed]: execute
[2011-11-07 16:46:22,658][DEBUG][river.cluster ] [Banner, Betty Ross] processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-11-07 16:46:22,663][DEBUG][cluster.service ] [Banner, Betty Ross] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[2011-11-07 16:46:22,717][DEBUG][gateway.local ] [Banner, Betty Ross] [find_latest_state]: loading metadata from [/home/nosql/deploy/elastic/data/skunktest/nodes/0/_state/metadata-5]
[2011-11-07 16:46:22,718][DEBUG][gateway.local ] [Banner, Betty Ross] [find_latest_state]: loading started shards from [/home/nosql/deploy/elastic/data/skunktest/nodes/0/_state/shards-8]
[2011-11-07 16:46:22,719][DEBUG][gateway ] [Banner, Betty Ross] can't wait on start for (possibly) reading state from gateway, will do it asynchronously
[2011-11-07 16:46:22,722][INFO ][http ] [Banner, Betty Ross] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.21.120.91:9200]}
[2011-11-07 16:46:22,722][INFO ][node ] [Banner, Betty Ross] {0.18.2}[12231]: started
[2011-11-07 16:46:32,657][DEBUG][cluster.service ] [Banner, Betty Ross] processing [routing-table-updater]: execute
[2011-11-07 16:46:32,659][DEBUG][cluster.service ] [Banner, Betty Ross] processing [routing-table-updater]: no change in cluster_state