@treff7es
Created January 17, 2012 20:06
my "does not have us registered with it..." issue
# elasticsearch.yml of the second node: transport on 9301, unicast ping aimed at the first node on 9300
transport.tcp.port: 9301
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost:9300"]

# elasticsearch.yml of the first node: transport on 9300, unicast ping aimed at the second node on 9301
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost:9301"]
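For comparison, here is a minimal sketch (not taken from this gist) of a two-node unicast setup on one host, where each node lists both transport ports so either node can find the other regardless of start order; the cluster.name and http.port values below are assumptions, not settings shown in the logs:

# node A (hypothetical elasticsearch.yml)
cluster.name: elasticsearch                 # assumed; must be identical on both nodes
transport.tcp.port: 9300
http.port: 9200                             # assumed HTTP port for this node
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost:9300", "localhost:9301"]

# node B (hypothetical elasticsearch.yml)
cluster.name: elasticsearch
transport.tcp.port: 9301
http.port: 9201
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost:9300", "localhost:9301"]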
[2012-01-16 09:03:51,239][INFO ][node ] [Match] {0.18.7}[11639]: initializing ...
[2012-01-16 09:03:51,368][INFO ][plugins ] [Match] loaded [], sites []
[2012-01-16 09:03:56,296][INFO ][node ] [Match] {0.18.7}[11639]: initialized
[2012-01-16 09:03:56,297][INFO ][node ] [Match] {0.18.7}[11639]: starting ...
[2012-01-16 09:03:56,463][INFO ][transport ] [Match] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:03:59,825][INFO ][cluster.service ] [Match] detected_master [Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]], added {[Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:03:59,826][INFO ][discovery ] [Match] elasticsearch/WmAM68HkTpCsqpRNHyaF0Q
[2012-01-16 09:03:59,854][INFO ][http ] [Match] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:03:59,855][INFO ][node ] [Match] {0.18.7}[11639]: started
[2012-01-16 09:04:07,413][INFO ][node ] [Match] {0.18.7}[11639]: stopping ...
[2012-01-16 09:04:07,445][INFO ][node ] [Match] {0.18.7}[11639]: stopped
[2012-01-16 09:04:07,445][INFO ][node ] [Match] {0.18.7}[11639]: closing ...
[2012-01-16 09:04:07,493][INFO ][node ] [Match] {0.18.7}[11639]: closed
[2012-01-16 09:04:13,349][INFO ][node ] [Aralune] {0.18.7}[11657]: initializing ...
[2012-01-16 09:04:13,358][INFO ][plugins ] [Aralune] loaded [], sites []
[2012-01-16 09:04:16,121][INFO ][node ] [Aralune] {0.18.7}[11657]: initialized
[2012-01-16 09:04:16,122][INFO ][node ] [Aralune] {0.18.7}[11657]: starting ...
[2012-01-16 09:04:16,238][INFO ][transport ] [Aralune] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:04:19,393][INFO ][cluster.service ] [Aralune] detected_master [Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]], added {[Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:04:19,395][INFO ][discovery ] [Aralune] elasticsearch/itHyqcQ_RgGh3fy-zO3G6g
[2012-01-16 09:04:19,404][INFO ][http ] [Aralune] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:04:19,405][INFO ][node ] [Aralune] {0.18.7}[11657]: started
[2012-01-16 09:04:24,359][INFO ][discovery.zen ] [Aralune] master_left [[Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]]], reason [shut_down]
[2012-01-16 09:04:24,364][INFO ][cluster.service ] [Aralune] master {new [Aralune][itHyqcQ_RgGh3fy-zO3G6g][inet[/10.0.1.5:9301]], previous [Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]]}, removed {[Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]],}, reason: zen-disco-master_failed ([Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]])
[2012-01-16 09:04:35,029][INFO ][cluster.service ] [Aralune] added {[Nth Man: the Ultimate Ninja][BItSUFFaQzy94R7Z93vhbA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(join from node[[Nth Man: the Ultimate Ninja][BItSUFFaQzy94R7Z93vhbA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:04:49,868][INFO ][cluster.service ] [Aralune] removed {[Nth Man: the Ultimate Ninja][BItSUFFaQzy94R7Z93vhbA][inet[/10.0.1.5:9300]],}, reason: zen-disco-node_left([Nth Man: the Ultimate Ninja][BItSUFFaQzy94R7Z93vhbA][inet[/10.0.1.5:9300]])
[2012-01-16 09:04:52,320][INFO ][node ] [Aralune] {0.18.7}[11657]: stopping ...
[2012-01-16 09:04:52,337][INFO ][node ] [Aralune] {0.18.7}[11657]: stopped
[2012-01-16 09:04:52,337][INFO ][node ] [Aralune] {0.18.7}[11657]: closing ...
[2012-01-16 09:04:52,355][INFO ][node ] [Aralune] {0.18.7}[11657]: closed
[2012-01-16 09:05:36,017][INFO ][node ] [Loki] {0.18.7}[11728]: initializing ...
[2012-01-16 09:05:36,027][INFO ][plugins ] [Loki] loaded [], sites []
[2012-01-16 09:05:37,264][DEBUG][threadpool ] [Loki] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:05:37,267][DEBUG][threadpool ] [Loki] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:05:37,267][DEBUG][threadpool ] [Loki] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:05:37,267][DEBUG][threadpool ] [Loki] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:05:37,268][DEBUG][threadpool ] [Loki] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:05:37,271][DEBUG][threadpool ] [Loki] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:05:37,271][DEBUG][threadpool ] [Loki] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:05:37,283][DEBUG][transport.netty ] [Loki] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:05:37,302][DEBUG][discovery.zen.ping.unicast] [Loki] using initial hosts [localhost:9300], with concurrent_connects [10]
[2012-01-16 09:05:37,308][DEBUG][discovery.zen ] [Loki] using ping.timeout [3s]
[2012-01-16 09:05:37,314][DEBUG][discovery.zen.elect ] [Loki] using minimum_master_nodes [-1]
[2012-01-16 09:05:37,315][DEBUG][discovery.zen.fd ] [Loki] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:05:37,319][DEBUG][discovery.zen.fd ] [Loki] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:05:37,342][DEBUG][monitor.jvm ] [Loki] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:05:37,868][DEBUG][monitor.os ] [Loki] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@7b34c5ff] with refresh_interval [1s]
[2012-01-16 09:05:37,873][DEBUG][monitor.process ] [Loki] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@34115512] with refresh_interval [1s]
[2012-01-16 09:05:37,878][DEBUG][monitor.jvm ] [Loki] Using refresh_interval [1s]
[2012-01-16 09:05:37,879][DEBUG][monitor.network ] [Loki] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@da2da17] with refresh_interval [5s]
[2012-01-16 09:05:37,890][DEBUG][monitor.network ] [Loki] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:05:37,893][TRACE][monitor.network ] [Loki] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:16588 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:16588 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3583285 (3.4M) TX bytes:3583285 (3.4M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833759 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502709 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818223156 (3.6G) TX bytes:117488532 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:05:37,895][TRACE][env ] [Loki] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:05:37,932][DEBUG][env ] [Loki] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:05:37,933][TRACE][env ] [Loki] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:05:38,266][DEBUG][cache.memory ] [Loki] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:05:38,279][DEBUG][cluster.routing.allocation.decider] [Loki] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:05:38,280][DEBUG][cluster.routing.allocation.decider] [Loki] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:05:38,280][DEBUG][cluster.routing.allocation.decider] [Loki] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:05:38,283][DEBUG][gateway.local ] [Loki] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:05:38,312][DEBUG][indices.recovery ] [Loki] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:05:38,507][TRACE][jmx ] [Loki] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:05:38,507][TRACE][jmx ] [Loki] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,508][TRACE][jmx ] [Loki] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,510][TRACE][jmx ] [Loki] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:38,511][TRACE][jmx ] [Loki] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:05:38,511][TRACE][jmx ] [Loki] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,511][TRACE][jmx ] [Loki] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:05:38,512][TRACE][jmx ] [Loki] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,512][TRACE][jmx ] [Loki] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:38,512][TRACE][jmx ] [Loki] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,512][TRACE][jmx ] [Loki] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:38,513][TRACE][jmx ] [Loki] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,513][TRACE][jmx ] [Loki] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,513][TRACE][jmx ] [Loki] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:38,514][DEBUG][http.netty ] [Loki] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:05:38,522][DEBUG][indices.memory ] [Loki] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:05:38,532][DEBUG][indices.cache.filter ] [Loki] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:05:38,619][INFO ][node ] [Loki] {0.18.7}[11728]: initialized
[2012-01-16 09:05:38,619][INFO ][node ] [Loki] {0.18.7}[11728]: starting ...
[2012-01-16 09:05:38,644][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:05:38,754][DEBUG][transport.netty ] [Loki] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:05:38,757][INFO ][transport ] [Loki] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:05:38,838][TRACE][discovery ] [Loki] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:05:38,866][DEBUG][transport.netty ] [Loki] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:05:38,867][TRACE][discovery.zen.ping.unicast] [Loki] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:05:38,931][TRACE][discovery.zen.ping.unicast] [Loki] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:05:40,344][TRACE][discovery.zen.ping.unicast] [Loki] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:05:40,348][TRACE][discovery.zen.ping.unicast] [Loki] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:05:41,851][TRACE][discovery.zen.ping.unicast] [Loki] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:05:41,854][TRACE][discovery.zen.ping.unicast] [Loki] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:05:41,857][DEBUG][discovery.zen ] [Loki] ping responses:
--> target [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:41,858][DEBUG][transport.netty ] [Loki] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:05:41,884][DEBUG][transport.netty ] [Loki] Connected to node [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:41,899][TRACE][transport.netty ] [Loki] channel opened: [id: 0x108a9d2a, /10.0.1.5:62677 => /10.0.1.5:9301]
[2012-01-16 09:05:41,904][TRACE][transport.netty ] [Loki] channel opened: [id: 0x6063f5af, /10.0.1.5:62678 => /10.0.1.5:9301]
[2012-01-16 09:05:41,912][TRACE][transport.netty ] [Loki] channel opened: [id: 0x1d3c66d8, /10.0.1.5:62679 => /10.0.1.5:9301]
[2012-01-16 09:05:41,916][TRACE][transport.netty ] [Loki] channel opened: [id: 0x72b398da, /10.0.1.5:62680 => /10.0.1.5:9301]
[2012-01-16 09:05:41,918][DEBUG][discovery.zen.fd ] [Loki] [master] starting fault detection against master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:05:41,927][TRACE][transport.netty ] [Loki] channel opened: [id: 0x461d318f, /10.0.1.5:62681 => /10.0.1.5:9301]
[2012-01-16 09:05:41,930][TRACE][transport.netty ] [Loki] channel opened: [id: 0x16fa21a4, /10.0.1.5:62682 => /10.0.1.5:9301]
[2012-01-16 09:05:41,930][TRACE][transport.netty ] [Loki] channel opened: [id: 0x7fb6a1c4, /10.0.1.5:62683 => /10.0.1.5:9301]
[2012-01-16 09:05:41,952][DEBUG][cluster.service ] [Loki] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:05:41,971][TRACE][cluster.service ] [Loki] cluster state updated:
version [2], source [zen-disco-join (detected master)]
nodes:
[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:41,981][TRACE][transport.netty ] [Loki] channel opened: [id: 0x10bcc8f4, /10.0.1.5:62684 => /10.0.1.5:9301]
[2012-01-16 09:05:41,983][DEBUG][transport.netty ] [Loki] Connected to node [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:05:41,983][TRACE][transport.netty ] [Loki] channel opened: [id: 0x36101d01, /10.0.1.5:62685 => /10.0.1.5:9301]
[2012-01-16 09:05:41,983][DEBUG][cluster.service ] [Loki] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:05:41,984][TRACE][transport.netty ] [Loki] channel opened: [id: 0x5be04861, /10.0.1.5:62686 => /10.0.1.5:9301]
[2012-01-16 09:05:41,984][TRACE][transport.netty ] [Loki] channel opened: [id: 0x61b00766, /10.0.1.5:62687 => /10.0.1.5:9301]
[2012-01-16 09:05:41,984][TRACE][transport.netty ] [Loki] channel opened: [id: 0x6bb5eba4, /10.0.1.5:62688 => /10.0.1.5:9301]
[2012-01-16 09:05:41,984][TRACE][transport.netty ] [Loki] channel opened: [id: 0x7481933a, /10.0.1.5:62689 => /10.0.1.5:9301]
[2012-01-16 09:05:41,985][TRACE][transport.netty ] [Loki] channel opened: [id: 0x66e90097, /10.0.1.5:62690 => /10.0.1.5:9301]
[2012-01-16 09:05:41,985][DEBUG][cluster.service ] [Loki] processing [zen-disco-receive(from master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:05:41,987][TRACE][cluster.service ] [Loki] cluster state updated:
version [3], source [zen-disco-receive(from master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]])]
nodes:
[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]], master
[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:41,987][INFO ][cluster.service ] [Loki] detected_master [Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]], added {[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:05:41,988][DEBUG][cluster.service ] [Loki] processing [zen-disco-receive(from master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:05:41,989][TRACE][discovery ] [Loki] initial state set from discovery
[2012-01-16 09:05:41,989][INFO ][discovery ] [Loki] elasticsearch/phAiMOQZTZyq8wPJlgfUOg
[2012-01-16 09:05:41,990][TRACE][gateway.local ] [Loki] [find_latest_state]: processing [metadata-1]
[2012-01-16 09:05:42,074][DEBUG][gateway.local ] [Loki] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/metadata-1]
[2012-01-16 09:05:42,075][TRACE][gateway.local ] [Loki] [find_latest_state]: processing [metadata-1]
[2012-01-16 09:05:42,075][DEBUG][gateway.local ] [Loki] [find_latest_state]: no started shards loaded
[2012-01-16 09:05:42,081][INFO ][http ] [Loki] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:05:42,082][TRACE][jmx ] [Loki] Registered org.elasticsearch.jmx.ResourceDMBean@675926d1 under org.elasticsearch:service=transport
[2012-01-16 09:05:42,082][TRACE][jmx ] [Loki] Registered org.elasticsearch.jmx.ResourceDMBean@e039859 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:05:42,082][INFO ][node ] [Loki] {0.18.7}[11728]: started
[2012-01-16 09:05:50,372][INFO ][discovery.zen ] [Loki] master_left [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], reason [shut_down]
[2012-01-16 09:05:50,373][DEBUG][cluster.service ] [Loki] processing [zen-disco-master_failed ([Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]])]: execute
[2012-01-16 09:05:50,374][DEBUG][discovery.zen.fd ] [Loki] [master] stopping fault detection against master [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:05:50,374][TRACE][cluster.service ] [Loki] cluster state updated:
version [4], source [zen-disco-master_failed ([Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]])]
nodes:
[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:50,374][INFO ][cluster.service ] [Loki] master {new [Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], previous [Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]}, removed {[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]],}, reason: zen-disco-master_failed ([Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]])
[2012-01-16 09:05:50,378][TRACE][transport.netty ] [Loki] channel closed: [id: 0x108a9d2a, /10.0.1.5:62677 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,381][DEBUG][river.cluster ] [Loki] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:05:50,381][DEBUG][river.cluster ] [Loki] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:05:50,390][DEBUG][cluster.service ] [Loki] processing [zen-disco-master_failed ([Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]])]: done applying updated cluster_state
[2012-01-16 09:05:50,390][DEBUG][cluster.service ] [Loki] processing [routing-table-updater]: execute
[2012-01-16 09:05:50,391][DEBUG][cluster.service ] [Loki] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:05:50,391][TRACE][transport.netty ] [Loki] channel closed: [id: 0x6063f5af, /10.0.1.5:62678 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,407][TRACE][transport.netty ] [Loki] channel closed: [id: 0x461d318f, /10.0.1.5:62681 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,420][TRACE][transport.netty ] [Loki] channel closed: [id: 0x1d3c66d8, /10.0.1.5:62679 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,458][TRACE][transport.netty ] [Loki] channel closed: [id: 0x72b398da, /10.0.1.5:62680 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,478][TRACE][transport.netty ] [Loki] channel closed: [id: 0x16fa21a4, /10.0.1.5:62682 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,479][TRACE][transport.netty ] [Loki] channel closed: [id: 0x7fb6a1c4, /10.0.1.5:62683 :> /10.0.1.5:9301]
[2012-01-16 09:05:50,488][DEBUG][transport.netty ] [Loki] Disconnected from [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:56,329][TRACE][transport.netty ] [Loki] channel opened: [id: 0x1a170b6d, /127.0.0.1:62693 => /127.0.0.1:9301]
[2012-01-16 09:05:59,325][TRACE][transport.netty ] [Loki] channel closed: [id: 0x1a170b6d, /127.0.0.1:62693 :> /127.0.0.1:9301]
[2012-01-16 09:05:59,327][TRACE][transport.netty ] [Loki] channel opened: [id: 0x6aa218a5, /10.0.1.5:62694 => /10.0.1.5:9301]
[2012-01-16 09:05:59,327][TRACE][transport.netty ] [Loki] channel opened: [id: 0x38002f54, /10.0.1.5:62695 => /10.0.1.5:9301]
[2012-01-16 09:05:59,328][TRACE][transport.netty ] [Loki] channel opened: [id: 0x1a7b5617, /10.0.1.5:62696 => /10.0.1.5:9301]
[2012-01-16 09:05:59,328][TRACE][transport.netty ] [Loki] channel opened: [id: 0x17510d96, /10.0.1.5:62697 => /10.0.1.5:9301]
[2012-01-16 09:05:59,328][TRACE][transport.netty ] [Loki] channel opened: [id: 0x4a52fecf, /10.0.1.5:62698 => /10.0.1.5:9301]
[2012-01-16 09:05:59,329][TRACE][transport.netty ] [Loki] channel opened: [id: 0x7b8353cf, /10.0.1.5:62699 => /10.0.1.5:9301]
[2012-01-16 09:05:59,331][TRACE][transport.netty ] [Loki] channel opened: [id: 0x16e7eec9, /10.0.1.5:62700 => /10.0.1.5:9301]
[2012-01-16 09:05:59,357][DEBUG][transport.netty ] [Loki] Connected to node [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:59,361][DEBUG][cluster.service ] [Loki] processing [zen-disco-receive(join from node[[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:05:59,364][TRACE][cluster.service ] [Loki] cluster state updated:
version [5], source [zen-disco-receive(join from node[[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])]
nodes:
[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]
[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:59,364][INFO ][cluster.service ] [Loki] added {[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(join from node[[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:05:59,366][DEBUG][river.cluster ] [Loki] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:05:59,366][DEBUG][river.cluster ] [Loki] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:05:59,367][DEBUG][cluster.service ] [Loki] processing [zen-disco-receive(join from node[[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:06:00,376][DEBUG][cluster.service ] [Loki] processing [routing-table-updater]: execute
[2012-01-16 09:06:00,376][DEBUG][cluster.service ] [Loki] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:05,943][INFO ][node ] [Loki] {0.18.7}[11728]: stopping ...
[2012-01-16 09:06:05,953][TRACE][transport.netty ] [Loki] channel closed: [id: 0x10bcc8f4, /10.0.1.5:62684 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,954][TRACE][transport.netty ] [Loki] channel closed: [id: 0x36101d01, /10.0.1.5:62685 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,958][TRACE][transport.netty ] [Loki] channel closed: [id: 0x5be04861, /10.0.1.5:62686 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,959][TRACE][transport.netty ] [Loki] channel closed: [id: 0x6bb5eba4, /10.0.1.5:62688 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,959][TRACE][transport.netty ] [Loki] channel closed: [id: 0x61b00766, /10.0.1.5:62687 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,961][TRACE][transport.netty ] [Loki] channel closed: [id: 0x7481933a, /10.0.1.5:62689 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,962][TRACE][transport.netty ] [Loki] channel closed: [id: 0x66e90097, /10.0.1.5:62690 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,964][TRACE][transport.netty ] [Loki] channel closed: [id: 0x16e7eec9, /10.0.1.5:62700 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,965][TRACE][transport.netty ] [Loki] channel closed: [id: 0x38002f54, /10.0.1.5:62695 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,965][TRACE][transport.netty ] [Loki] channel closed: [id: 0x1a7b5617, /10.0.1.5:62696 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,965][TRACE][transport.netty ] [Loki] channel closed: [id: 0x6aa218a5, /10.0.1.5:62694 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,966][TRACE][transport.netty ] [Loki] channel closed: [id: 0x7b8353cf, /10.0.1.5:62699 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,966][TRACE][transport.netty ] [Loki] channel closed: [id: 0x4a52fecf, /10.0.1.5:62698 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,967][TRACE][transport.netty ] [Loki] channel closed: [id: 0x17510d96, /10.0.1.5:62697 :> /10.0.1.5:9301]
[2012-01-16 09:06:05,979][TRACE][jmx ] [Loki] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:06:05,983][TRACE][jmx ] [Loki] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:05,983][INFO ][node ] [Loki] {0.18.7}[11728]: stopped
[2012-01-16 09:06:05,983][INFO ][node ] [Loki] {0.18.7}[11728]: closing ...
[2012-01-16 09:06:06,008][TRACE][node ] [Loki] Close times for each service:
StopWatch 'node_close': running time = 11ms
-----------------------------------------
ms % Task name
-----------------------------------------
00001 009% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 009% indices
00000 000% routing
00000 000% cluster
00002 018% discovery
00001 009% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00000 000% node_cache
00000 000% script
00005 045% thread_pool
00001 009% thread_pool_force_shutdown
[2012-01-16 09:06:06,011][INFO ][node ] [Loki] {0.18.7}[11728]: closed
[2012-01-16 09:06:07,843][INFO ][node ] [Nekra] {0.18.7}[11765]: initializing ...
[2012-01-16 09:06:07,852][INFO ][plugins ] [Nekra] loaded [], sites []
[2012-01-16 09:06:09,055][DEBUG][threadpool ] [Nekra] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:06:09,059][DEBUG][threadpool ] [Nekra] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:06:09,059][DEBUG][threadpool ] [Nekra] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:06:09,059][DEBUG][threadpool ] [Nekra] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:06:09,060][DEBUG][threadpool ] [Nekra] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:06:09,063][DEBUG][threadpool ] [Nekra] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:06:09,064][DEBUG][threadpool ] [Nekra] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:06:09,077][DEBUG][transport.netty ] [Nekra] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:06:09,098][DEBUG][discovery.zen.ping.unicast] [Nekra] using initial hosts [localhost:9300], with concurrent_connects [10]
[2012-01-16 09:06:09,102][DEBUG][discovery.zen ] [Nekra] using ping.timeout [3s]
[2012-01-16 09:06:09,109][DEBUG][discovery.zen.elect ] [Nekra] using minimum_master_nodes [-1]
[2012-01-16 09:06:09,111][DEBUG][discovery.zen.fd ] [Nekra] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:06:09,114][DEBUG][discovery.zen.fd ] [Nekra] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:06:09,140][DEBUG][monitor.jvm ] [Nekra] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:06:09,663][DEBUG][monitor.os ] [Nekra] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@a51064e] with refresh_interval [1s]
[2012-01-16 09:06:09,669][DEBUG][monitor.process ] [Nekra] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@7463e563] with refresh_interval [1s]
[2012-01-16 09:06:09,674][DEBUG][monitor.jvm ] [Nekra] Using refresh_interval [1s]
[2012-01-16 09:06:09,674][DEBUG][monitor.network ] [Nekra] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@40c07527] with refresh_interval [5s]
[2012-01-16 09:06:09,684][DEBUG][monitor.network ] [Nekra] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:06:09,687][TRACE][monitor.network ] [Nekra] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:17131 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:17131 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3615137 (3.4M) TX bytes:3615137 (3.4M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833764 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502719 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818224205 (3.6G) TX bytes:117489989 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:06:09,689][TRACE][env ] [Nekra] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:06:09,720][DEBUG][env ] [Nekra] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:06:09,721][TRACE][env ] [Nekra] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:06:10,035][DEBUG][cache.memory ] [Nekra] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:06:10,048][DEBUG][cluster.routing.allocation.decider] [Nekra] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:06:10,049][DEBUG][cluster.routing.allocation.decider] [Nekra] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:06:10,050][DEBUG][cluster.routing.allocation.decider] [Nekra] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:06:10,053][DEBUG][gateway.local ] [Nekra] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:06:10,079][DEBUG][indices.recovery ] [Nekra] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:06:10,265][TRACE][jmx ] [Nekra] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:06:10,266][TRACE][jmx ] [Nekra] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,266][TRACE][jmx ] [Nekra] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,269][TRACE][jmx ] [Nekra] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:10,270][TRACE][jmx ] [Nekra] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:06:10,270][TRACE][jmx ] [Nekra] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,270][TRACE][jmx ] [Nekra] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:06:10,271][TRACE][jmx ] [Nekra] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,271][TRACE][jmx ] [Nekra] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:10,273][TRACE][jmx ] [Nekra] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,273][TRACE][jmx ] [Nekra] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:10,273][TRACE][jmx ] [Nekra] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,273][TRACE][jmx ] [Nekra] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,274][TRACE][jmx ] [Nekra] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:10,274][DEBUG][http.netty ] [Nekra] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:06:10,281][DEBUG][indices.memory ] [Nekra] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:06:10,290][DEBUG][indices.cache.filter ] [Nekra] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:06:10,376][INFO ][node ] [Nekra] {0.18.7}[11765]: initialized
[2012-01-16 09:06:10,377][INFO ][node ] [Nekra] {0.18.7}[11765]: starting ...
[2012-01-16 09:06:10,402][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:06:10,479][DEBUG][transport.netty ] [Nekra] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:06:10,482][INFO ][transport ] [Nekra] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:06:10,566][TRACE][discovery ] [Nekra] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:06:10,594][DEBUG][transport.netty ] [Nekra] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:06:10,596][TRACE][discovery.zen.ping.unicast] [Nekra] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:06:10,652][TRACE][discovery.zen.ping.unicast] [Nekra] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:12,071][TRACE][discovery.zen.ping.unicast] [Nekra] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:06:12,076][TRACE][discovery.zen.ping.unicast] [Nekra] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:13,574][TRACE][discovery.zen.ping.unicast] [Nekra] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:06:13,576][TRACE][discovery.zen.ping.unicast] [Nekra] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:13,577][DEBUG][discovery.zen ] [Nekra] ping responses:
--> target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:13,579][DEBUG][transport.netty ] [Nekra] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:06:13,596][DEBUG][transport.netty ] [Nekra] Connected to node [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:13,604][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x4c61a7e6, /10.0.1.5:62725 => /10.0.1.5:9301]
[2012-01-16 09:06:13,606][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x108a9d2a, /10.0.1.5:62726 => /10.0.1.5:9301]
[2012-01-16 09:06:13,607][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x40bbc1f6, /10.0.1.5:62727 => /10.0.1.5:9301]
[2012-01-16 09:06:13,615][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x31923ca5, /10.0.1.5:62728 => /10.0.1.5:9301]
[2012-01-16 09:06:13,618][DEBUG][discovery.zen.fd ] [Nekra] [master] starting fault detection against master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:06:13,621][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x0094b318, /10.0.1.5:62729 => /10.0.1.5:9301]
[2012-01-16 09:06:13,621][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x72b398da, /10.0.1.5:62730 => /10.0.1.5:9301]
[2012-01-16 09:06:13,621][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x6cf84b0a, /10.0.1.5:62731 => /10.0.1.5:9301]
[2012-01-16 09:06:13,627][DEBUG][cluster.service ] [Nekra] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:06:13,628][TRACE][cluster.service ] [Nekra] cluster state updated:
version [5], source [zen-disco-join (detected master)]
nodes:
[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:13,635][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x3be7a755, /10.0.1.5:62732 => /10.0.1.5:9301]
[2012-01-16 09:06:13,637][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x3f70119f, /10.0.1.5:62733 => /10.0.1.5:9301]
[2012-01-16 09:06:13,638][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x456c1227, /10.0.1.5:62734 => /10.0.1.5:9301]
[2012-01-16 09:06:13,638][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x3a1be20c, /10.0.1.5:62735 => /10.0.1.5:9301]
[2012-01-16 09:06:13,639][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x7c959fa1, /10.0.1.5:62736 => /10.0.1.5:9301]
[2012-01-16 09:06:13,640][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x3ffef80a, /10.0.1.5:62737 => /10.0.1.5:9301]
[2012-01-16 09:06:13,640][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x0400c02a, /10.0.1.5:62738 => /10.0.1.5:9301]
[2012-01-16 09:06:13,645][DEBUG][transport.netty ] [Nekra] Connected to node [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:13,645][DEBUG][cluster.service ] [Nekra] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:06:13,645][DEBUG][cluster.service ] [Nekra] processing [zen-disco-receive(from master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:06:13,646][TRACE][cluster.service ] [Nekra] cluster state updated:
version [6], source [zen-disco-receive(from master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])]
nodes:
[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], local
[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]], master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:13,646][INFO ][cluster.service ] [Nekra] detected_master [Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]], added {[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:06:13,648][DEBUG][cluster.service ] [Nekra] processing [zen-disco-receive(from master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:06:13,648][TRACE][discovery ] [Nekra] initial state set from discovery
[2012-01-16 09:06:13,649][INFO ][discovery ] [Nekra] elasticsearch/3VWDdHUNTx6Twhw_QvsNFA
[2012-01-16 09:06:13,650][TRACE][gateway.local ] [Nekra] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:06:13,655][DEBUG][gateway.local ] [Nekra] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/metadata-2]
[2012-01-16 09:06:13,655][TRACE][gateway.local ] [Nekra] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:06:13,655][DEBUG][gateway.local ] [Nekra] [find_latest_state]: no started shards loaded
[2012-01-16 09:06:13,668][INFO ][http ] [Nekra] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:06:13,669][TRACE][jmx ] [Nekra] Registered org.elasticsearch.jmx.ResourceDMBean@24c759f5 under org.elasticsearch:service=transport
[2012-01-16 09:06:13,669][TRACE][jmx ] [Nekra] Registered org.elasticsearch.jmx.ResourceDMBean@1be2f6b0 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:13,669][INFO ][node ] [Nekra] {0.18.7}[11765]: started
[2012-01-16 09:06:25,306][INFO ][discovery.zen ] [Nekra] master_left [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], reason [shut_down]
[2012-01-16 09:06:25,312][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x4c61a7e6, /10.0.1.5:62725 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,313][DEBUG][cluster.service ] [Nekra] processing [zen-disco-master_failed ([Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]])]: execute
[2012-01-16 09:06:25,313][DEBUG][discovery.zen.fd ] [Nekra] [master] stopping fault detection against master [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:06:25,313][TRACE][cluster.service ] [Nekra] cluster state updated:
version [7], source [zen-disco-master_failed ([Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]])]
nodes:
[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:25,314][INFO ][cluster.service ] [Nekra] master {new [Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], previous [Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]}, removed {[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]],}, reason: zen-disco-master_failed ([Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]])
[2012-01-16 09:06:25,317][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x31923ca5, /10.0.1.5:62728 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,318][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x108a9d2a, /10.0.1.5:62726 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,318][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x0094b318, /10.0.1.5:62729 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,318][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x40bbc1f6, /10.0.1.5:62727 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,335][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x72b398da, /10.0.1.5:62730 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,336][DEBUG][river.cluster ] [Nekra] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:25,336][DEBUG][river.cluster ] [Nekra] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:25,336][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x6cf84b0a, /10.0.1.5:62731 :> /10.0.1.5:9301]
[2012-01-16 09:06:25,339][DEBUG][transport.netty ] [Nekra] Disconnected from [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:25,351][DEBUG][cluster.service ] [Nekra] processing [zen-disco-master_failed ([Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]])]: done applying updated cluster_state
[2012-01-16 09:06:25,352][DEBUG][cluster.service ] [Nekra] processing [routing-table-updater]: execute
[2012-01-16 09:06:25,353][DEBUG][cluster.service ] [Nekra] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:30,829][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x1a170b6d, /127.0.0.1:62741 => /127.0.0.1:9301]
[2012-01-16 09:06:33,826][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x38002f54, /10.0.1.5:62742 => /10.0.1.5:9301]
[2012-01-16 09:06:33,826][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x1a170b6d, /127.0.0.1:62741 :> /127.0.0.1:9301]
[2012-01-16 09:06:33,827][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x1a7b5617, /10.0.1.5:62743 => /10.0.1.5:9301]
[2012-01-16 09:06:33,827][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x17510d96, /10.0.1.5:62744 => /10.0.1.5:9301]
[2012-01-16 09:06:33,828][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x0ed6ee28, /10.0.1.5:62745 => /10.0.1.5:9301]
[2012-01-16 09:06:33,828][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x41aef798, /10.0.1.5:62746 => /10.0.1.5:9301]
[2012-01-16 09:06:33,828][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x7b8353cf, /10.0.1.5:62747 => /10.0.1.5:9301]
[2012-01-16 09:06:33,828][TRACE][transport.netty ] [Nekra] channel opened: [id: 0x16e7eec9, /10.0.1.5:62748 => /10.0.1.5:9301]
[2012-01-16 09:06:33,868][DEBUG][transport.netty ] [Nekra] Connected to node [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:33,869][DEBUG][cluster.service ] [Nekra] processing [zen-disco-receive(join from node[[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:06:33,879][TRACE][cluster.service ] [Nekra] cluster state updated:
version [8], source [zen-disco-receive(join from node[[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])]
nodes:
[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], local, master
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:33,879][INFO ][cluster.service ] [Nekra] added {[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(join from node[[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])
[2012-01-16 09:06:33,880][DEBUG][river.cluster ] [Nekra] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:33,881][DEBUG][river.cluster ] [Nekra] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:33,886][DEBUG][cluster.service ] [Nekra] processing [zen-disco-receive(join from node[[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:06:35,318][DEBUG][cluster.service ] [Nekra] processing [routing-table-updater]: execute
[2012-01-16 09:06:35,319][DEBUG][cluster.service ] [Nekra] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:40,292][INFO ][node ] [Nekra] {0.18.7}[11765]: stopping ...
[2012-01-16 09:06:40,304][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x3be7a755, /10.0.1.5:62732 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,305][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x3f70119f, /10.0.1.5:62733 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,310][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x456c1227, /10.0.1.5:62734 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,311][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x7c959fa1, /10.0.1.5:62736 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,311][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x3ffef80a, /10.0.1.5:62737 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,311][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x3a1be20c, /10.0.1.5:62735 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,312][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x0ed6ee28, /10.0.1.5:62745 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,312][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x0400c02a, /10.0.1.5:62738 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,313][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x41aef798, /10.0.1.5:62746 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,313][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x16e7eec9, /10.0.1.5:62748 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,314][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x38002f54, /10.0.1.5:62742 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,314][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x1a7b5617, /10.0.1.5:62743 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,314][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x7b8353cf, /10.0.1.5:62747 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,315][TRACE][transport.netty ] [Nekra] channel closed: [id: 0x17510d96, /10.0.1.5:62744 :> /10.0.1.5:9301]
[2012-01-16 09:06:40,336][TRACE][jmx ] [Nekra] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:06:40,338][TRACE][jmx ] [Nekra] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:40,344][INFO ][node ] [Nekra] {0.18.7}[11765]: stopped
[2012-01-16 09:06:40,344][INFO ][node ] [Nekra] {0.18.7}[11765]: closing ...
[2012-01-16 09:06:40,362][TRACE][node ] [Nekra] Close times for each service:
StopWatch 'node_close': running time = 10ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 010% indices
00000 000% routing
00000 000% cluster
00007 070% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 010% node_cache
00000 000% script
00001 010% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:06:40,364][INFO ][node ] [Nekra] {0.18.7}[11765]: closed
[2012-01-16 09:06:42,572][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: initializing ...
[2012-01-16 09:06:42,580][INFO ][plugins ] [Bradley, Isaiah] loaded [], sites []
[2012-01-16 09:06:43,785][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:06:43,789][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:06:43,789][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:06:43,790][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:06:43,790][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:06:43,793][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:06:43,794][DEBUG][threadpool ] [Bradley, Isaiah] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:06:43,807][DEBUG][transport.netty ] [Bradley, Isaiah] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:06:43,825][DEBUG][discovery.zen.ping.unicast] [Bradley, Isaiah] using initial hosts [localhost:9300], with concurrent_connects [10]
[2012-01-16 09:06:43,830][DEBUG][discovery.zen ] [Bradley, Isaiah] using ping.timeout [3s]
[2012-01-16 09:06:43,837][DEBUG][discovery.zen.elect ] [Bradley, Isaiah] using minimum_master_nodes [-1]
[2012-01-16 09:06:43,839][DEBUG][discovery.zen.fd ] [Bradley, Isaiah] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:06:43,844][DEBUG][discovery.zen.fd ] [Bradley, Isaiah] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:06:43,868][DEBUG][monitor.jvm ] [Bradley, Isaiah] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:06:44,380][DEBUG][monitor.os ] [Bradley, Isaiah] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@a51064e] with refresh_interval [1s]
[2012-01-16 09:06:44,398][DEBUG][monitor.process ] [Bradley, Isaiah] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@3414a97b] with refresh_interval [1s]
[2012-01-16 09:06:44,402][DEBUG][monitor.jvm ] [Bradley, Isaiah] Using refresh_interval [1s]
[2012-01-16 09:06:44,403][DEBUG][monitor.network ] [Bradley, Isaiah] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@76d78df0] with refresh_interval [5s]
[2012-01-16 09:06:44,412][DEBUG][monitor.network ] [Bradley, Isaiah] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:06:44,415][TRACE][monitor.network ] [Bradley, Isaiah] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:17689 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:17689 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3647176 (3.5M) TX bytes:3647176 (3.5M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833767 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502722 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818224480 (3.6G) TX bytes:117490144 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:06:44,419][TRACE][env ] [Bradley, Isaiah] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:06:44,450][DEBUG][env ] [Bradley, Isaiah] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:06:44,451][TRACE][env ] [Bradley, Isaiah] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:06:44,763][DEBUG][cache.memory ] [Bradley, Isaiah] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:06:44,777][DEBUG][cluster.routing.allocation.decider] [Bradley, Isaiah] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:06:44,778][DEBUG][cluster.routing.allocation.decider] [Bradley, Isaiah] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:06:44,779][DEBUG][cluster.routing.allocation.decider] [Bradley, Isaiah] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:06:44,782][DEBUG][gateway.local ] [Bradley, Isaiah] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:06:44,811][DEBUG][indices.recovery ] [Bradley, Isaiah] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:06:45,082][TRACE][jmx ] [Bradley, Isaiah] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:06:45,083][TRACE][jmx ] [Bradley, Isaiah] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,083][TRACE][jmx ] [Bradley, Isaiah] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,085][TRACE][jmx ] [Bradley, Isaiah] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,086][TRACE][jmx ] [Bradley, Isaiah] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:45,087][TRACE][jmx ] [Bradley, Isaiah] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,087][TRACE][jmx ] [Bradley, Isaiah] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,087][TRACE][jmx ] [Bradley, Isaiah] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:45,088][DEBUG][http.netty ] [Bradley, Isaiah] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:06:45,096][DEBUG][indices.memory ] [Bradley, Isaiah] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:06:45,108][DEBUG][indices.cache.filter ] [Bradley, Isaiah] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:06:45,200][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: initialized
[2012-01-16 09:06:45,200][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: starting ...
[2012-01-16 09:06:45,228][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:06:45,307][DEBUG][transport.netty ] [Bradley, Isaiah] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:06:45,309][INFO ][transport ] [Bradley, Isaiah] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:06:45,389][TRACE][discovery ] [Bradley, Isaiah] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:06:45,418][DEBUG][transport.netty ] [Bradley, Isaiah] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:06:45,419][TRACE][discovery.zen.ping.unicast] [Bradley, Isaiah] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:06:45,469][TRACE][discovery.zen.ping.unicast] [Bradley, Isaiah] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:46,893][TRACE][discovery.zen.ping.unicast] [Bradley, Isaiah] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:06:46,897][TRACE][discovery.zen.ping.unicast] [Bradley, Isaiah] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:48,396][TRACE][discovery.zen.ping.unicast] [Bradley, Isaiah] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:06:48,400][TRACE][discovery.zen.ping.unicast] [Bradley, Isaiah] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:48,404][DEBUG][discovery.zen ] [Bradley, Isaiah] ping responses:
--> target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:48,408][DEBUG][transport.netty ] [Bradley, Isaiah] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:06:48,427][DEBUG][transport.netty ] [Bradley, Isaiah] Connected to node [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:48,449][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x305e9d7a, /10.0.1.5:62773 => /10.0.1.5:9301]
[2012-01-16 09:06:48,453][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x7058d7c2, /10.0.1.5:62774 => /10.0.1.5:9301]
[2012-01-16 09:06:48,463][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x4c4936f3, /10.0.1.5:62775 => /10.0.1.5:9301]
[2012-01-16 09:06:48,464][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x4c9d22fc, /10.0.1.5:62776 => /10.0.1.5:9301]
[2012-01-16 09:06:48,467][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x2279ecf4, /10.0.1.5:62777 => /10.0.1.5:9301]
[2012-01-16 09:06:48,468][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x12b27c38, /10.0.1.5:62778 => /10.0.1.5:9301]
[2012-01-16 09:06:48,469][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x63713b42, /10.0.1.5:62779 => /10.0.1.5:9301]
[2012-01-16 09:06:48,470][DEBUG][discovery.zen.fd ] [Bradley, Isaiah] [master] starting fault detection against master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:06:48,474][DEBUG][discovery.zen ] [Bradley, Isaiah] got a new state from master node, though we are already trying to rejoin the cluster
[2012-01-16 09:06:48,477][DEBUG][cluster.service ] [Bradley, Isaiah] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:06:48,478][TRACE][cluster.service ] [Bradley, Isaiah] cluster state updated:
version [9], source [zen-disco-join (detected master)]
nodes:
[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:48,480][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x420253af, /10.0.1.5:62780 => /10.0.1.5:9301]
[2012-01-16 09:06:48,481][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x52aa77d9, /10.0.1.5:62781 => /10.0.1.5:9301]
[2012-01-16 09:06:48,481][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x3f70119f, /10.0.1.5:62782 => /10.0.1.5:9301]
[2012-01-16 09:06:48,481][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x4fc0cb76, /10.0.1.5:62783 => /10.0.1.5:9301]
[2012-01-16 09:06:48,482][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x7a6dd8e1, /10.0.1.5:62784 => /10.0.1.5:9301]
[2012-01-16 09:06:48,482][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x41b9da92, /10.0.1.5:62785 => /10.0.1.5:9301]
[2012-01-16 09:06:48,482][TRACE][transport.netty ] [Bradley, Isaiah] channel opened: [id: 0x10bcc8f4, /10.0.1.5:62786 => /10.0.1.5:9301]
[2012-01-16 09:06:48,487][DEBUG][transport.netty ] [Bradley, Isaiah] Connected to node [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:48,488][DEBUG][cluster.service ] [Bradley, Isaiah] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:06:48,500][DEBUG][cluster.service ] [Bradley, Isaiah] processing [zen-disco-receive(from master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:06:48,501][TRACE][cluster.service ] [Bradley, Isaiah] cluster state updated:
version [10], source [zen-disco-receive(from master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])]
nodes:
[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]], local
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:48,501][INFO ][cluster.service ] [Bradley, Isaiah] detected_master [Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], added {[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])
[2012-01-16 09:06:48,502][DEBUG][cluster.service ] [Bradley, Isaiah] processing [zen-disco-receive(from master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:06:48,503][TRACE][discovery ] [Bradley, Isaiah] initial state set from discovery
[2012-01-16 09:06:48,503][INFO ][discovery ] [Bradley, Isaiah] elasticsearch/tPDDcLRYTXez2j0_7c69Wg
[2012-01-16 09:06:48,504][TRACE][gateway.local ] [Bradley, Isaiah] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:06:48,507][DEBUG][gateway.local ] [Bradley, Isaiah] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/metadata-2]
[2012-01-16 09:06:48,508][TRACE][gateway.local ] [Bradley, Isaiah] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:06:48,508][DEBUG][gateway.local ] [Bradley, Isaiah] [find_latest_state]: no started shards loaded
[2012-01-16 09:06:48,515][INFO ][http ] [Bradley, Isaiah] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:06:48,516][TRACE][jmx ] [Bradley, Isaiah] Registered org.elasticsearch.jmx.ResourceDMBean@4c767fb3 under org.elasticsearch:service=transport
[2012-01-16 09:06:48,516][TRACE][jmx ] [Bradley, Isaiah] Registered org.elasticsearch.jmx.ResourceDMBean@77b9e7fc under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:48,516][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: started
[2012-01-16 09:06:53,358][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: stopping ...
[2012-01-16 09:06:53,408][DEBUG][discovery.zen.fd ] [Bradley, Isaiah] [master] stopping fault detection against master [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], reason [zen disco stop]
[2012-01-16 09:06:53,419][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x420253af, /10.0.1.5:62780 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,420][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x52aa77d9, /10.0.1.5:62781 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,423][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x10bcc8f4, /10.0.1.5:62786 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,426][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x41b9da92, /10.0.1.5:62785 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,429][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x3f70119f, /10.0.1.5:62782 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,429][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x7058d7c2, /10.0.1.5:62774 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,429][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x4fc0cb76, /10.0.1.5:62783 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,432][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x305e9d7a, /10.0.1.5:62773 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,434][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x7a6dd8e1, /10.0.1.5:62784 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,434][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x2279ecf4, /10.0.1.5:62777 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,437][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x63713b42, /10.0.1.5:62779 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,437][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x4c4936f3, /10.0.1.5:62775 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,434][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x12b27c38, /10.0.1.5:62778 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,435][TRACE][transport.netty ] [Bradley, Isaiah] channel closed: [id: 0x4c9d22fc, /10.0.1.5:62776 :> /10.0.1.5:9301]
[2012-01-16 09:06:53,446][TRACE][jmx ] [Bradley, Isaiah] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:06:53,447][TRACE][jmx ] [Bradley, Isaiah] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:53,447][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: stopped
[2012-01-16 09:06:53,447][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: closing ...
[2012-01-16 09:06:53,485][TRACE][node ] [Bradley, Isaiah] Close times for each service:
StopWatch 'node_close': running time = 6ms
-----------------------------------------
ms % Task name
-----------------------------------------
00001 017% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 017% indices
00000 000% routing
00000 000% cluster
00001 017% discovery
00001 017% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00000 000% node_cache
00000 000% script
00002 033% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:06:53,490][INFO ][node ] [Bradley, Isaiah] {0.18.7}[11800]: closed
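The two runs below were started with both nodes in the unicast ping list (see the "using initial hosts [localhost:9300, localhost:9301]" lines that follow). The elasticsearch.yml used for those runs is not pasted here; presumably the discovery host list looked roughly like this (sketch only, inferred from that log line):

discovery.zen.ping.unicast.hosts: ["localhost:9300", "localhost:9301"]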
[2012-01-16 09:08:35,481][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: initializing ...
[2012-01-16 09:08:35,496][INFO ][plugins ] [Rom the Spaceknight] loaded [], sites []
[2012-01-16 09:08:36,682][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:08:36,685][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:08:36,685][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:08:36,686][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:08:36,686][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:08:36,689][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:08:36,690][DEBUG][threadpool ] [Rom the Spaceknight] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:08:36,704][DEBUG][transport.netty ] [Rom the Spaceknight] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:08:36,725][DEBUG][discovery.zen.ping.unicast] [Rom the Spaceknight] using initial hosts [localhost:9300, localhost:9301], with concurrent_connects [10]
[2012-01-16 09:08:36,730][DEBUG][discovery.zen ] [Rom the Spaceknight] using ping.timeout [3s]
[2012-01-16 09:08:36,739][DEBUG][discovery.zen.elect ] [Rom the Spaceknight] using minimum_master_nodes [-1]
[2012-01-16 09:08:36,741][DEBUG][discovery.zen.fd ] [Rom the Spaceknight] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:08:36,745][DEBUG][discovery.zen.fd ] [Rom the Spaceknight] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:08:36,770][DEBUG][monitor.jvm ] [Rom the Spaceknight] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:08:37,283][DEBUG][monitor.os ] [Rom the Spaceknight] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@74e8f8c5] with refresh_interval [1s]
[2012-01-16 09:08:37,288][DEBUG][monitor.process ] [Rom the Spaceknight] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@357c7988] with refresh_interval [1s]
[2012-01-16 09:08:37,294][DEBUG][monitor.jvm ] [Rom the Spaceknight] Using refresh_interval [1s]
[2012-01-16 09:08:37,295][DEBUG][monitor.network ] [Rom the Spaceknight] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@7cbdb375] with refresh_interval [5s]
[2012-01-16 09:08:37,304][DEBUG][monitor.network ] [Rom the Spaceknight] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:08:37,307][TRACE][monitor.network ] [Rom the Spaceknight] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:18000 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:18000 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3667204 (3.5M) TX bytes:3667204 (3.5M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833842 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502806 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818244990 (3.6G) TX bytes:117505881 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:08:37,310][TRACE][env ] [Rom the Spaceknight] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:08:37,429][DEBUG][env ] [Rom the Spaceknight] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:08:37,430][TRACE][env ] [Rom the Spaceknight] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:08:37,763][DEBUG][cache.memory ] [Rom the Spaceknight] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:08:37,776][DEBUG][cluster.routing.allocation.decider] [Rom the Spaceknight] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:08:37,777][DEBUG][cluster.routing.allocation.decider] [Rom the Spaceknight] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:08:37,778][DEBUG][cluster.routing.allocation.decider] [Rom the Spaceknight] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:08:37,781][DEBUG][gateway.local ] [Rom the Spaceknight] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:08:37,809][DEBUG][indices.recovery ] [Rom the Spaceknight] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:08:38,009][TRACE][jmx ] [Rom the Spaceknight] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:08:38,010][TRACE][jmx ] [Rom the Spaceknight] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,010][TRACE][jmx ] [Rom the Spaceknight] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,012][TRACE][jmx ] [Rom the Spaceknight] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:08:38,012][TRACE][jmx ] [Rom the Spaceknight] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:08:38,012][TRACE][jmx ] [Rom the Spaceknight] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,013][TRACE][jmx ] [Rom the Spaceknight] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:38,014][DEBUG][http.netty ] [Rom the Spaceknight] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:08:38,021][DEBUG][indices.memory ] [Rom the Spaceknight] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:08:38,051][DEBUG][indices.cache.filter ] [Rom the Spaceknight] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:08:38,128][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: initialized
[2012-01-16 09:08:38,129][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: starting ...
[2012-01-16 09:08:38,165][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:08:38,259][DEBUG][transport.netty ] [Rom the Spaceknight] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:08:38,262][INFO ][transport ] [Rom the Spaceknight] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:08:38,366][TRACE][discovery ] [Rom the Spaceknight] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:08:38,395][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9300]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:08:38,395][TRACE][transport.netty ] [Rom the Spaceknight] (Ignoring) Exception caught on netty layer [[id: 0x4ad61301]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:08:38,414][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x148e7f54, /127.0.0.1:62799 => /127.0.0.1:9301]
[2012-01-16 09:08:38,422][DEBUG][transport.netty ] [Rom the Spaceknight] Connected to node [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:08:38,424][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:08:38,475][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:08:39,871][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:08:39,878][TRACE][transport.netty ] [Rom the Spaceknight] (Ignoring) Exception caught on netty layer [[id: 0x21caefb0]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:08:39,879][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9300]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:08:39,880][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:08:41,376][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:08:41,378][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:08:41,380][TRACE][discovery.zen.ping.unicast] [Rom the Spaceknight] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9300]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:08:41,380][TRACE][transport.netty ] [Rom the Spaceknight] (Ignoring) Exception caught on netty layer [[id: 0x6d3d7254]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
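Each unicast ping round above ends with connection refused on localhost:9300, so nothing appears to be listening on that transport port at this point. A quick way to check that port from another terminal would be something like the following (nc/netcat assumed to be available, not part of the original session):

nc -z localhost 9300 && echo "9300 open" || echo "9300 closed"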
[2012-01-16 09:08:41,384][DEBUG][transport.netty ] [Rom the Spaceknight] Disconnected from [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:08:41,384][DEBUG][discovery.zen ] [Rom the Spaceknight] ping responses: {none}
[2012-01-16 09:08:41,385][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x148e7f54, /127.0.0.1:62799 :> /127.0.0.1:9301]
[2012-01-16 09:08:41,389][DEBUG][cluster.service ] [Rom the Spaceknight] processing [zen-disco-join (elected_as_master)]: execute
[2012-01-16 09:08:41,392][TRACE][cluster.service ] [Rom the Spaceknight] cluster state updated:
version [1], source [zen-disco-join (elected_as_master)]
nodes:
[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:08:41,394][INFO ][cluster.service ] [Rom the Spaceknight] new_master [Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]], reason: zen-disco-join (elected_as_master)
[2012-01-16 09:08:41,395][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x67635aad, /10.0.1.5:62802 => /10.0.1.5:9301]
[2012-01-16 09:08:41,404][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x6cf84b0a, /10.0.1.5:62803 => /10.0.1.5:9301]
[2012-01-16 09:08:41,406][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x12b27c38, /10.0.1.5:62804 => /10.0.1.5:9301]
[2012-01-16 09:08:41,407][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x26a0c73f, /10.0.1.5:62805 => /10.0.1.5:9301]
[2012-01-16 09:08:41,408][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x05790ce9, /10.0.1.5:62806 => /10.0.1.5:9301]
[2012-01-16 09:08:41,408][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x4e3e97cd, /10.0.1.5:62807 => /10.0.1.5:9301]
[2012-01-16 09:08:41,408][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x16fa21a4, /10.0.1.5:62808 => /10.0.1.5:9301]
[2012-01-16 09:08:41,415][DEBUG][transport.netty ] [Rom the Spaceknight] Connected to node [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:08:41,419][DEBUG][cluster.service ] [Rom the Spaceknight] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[2012-01-16 09:08:41,419][TRACE][discovery ] [Rom the Spaceknight] initial state set from discovery
[2012-01-16 09:08:41,420][INFO ][discovery ] [Rom the Spaceknight] elasticsearch/PgTrOQtYT1Oo9CgNiw0Dzg
[2012-01-16 09:08:41,420][DEBUG][river.cluster ] [Rom the Spaceknight] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:08:41,420][DEBUG][river.cluster ] [Rom the Spaceknight] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:08:41,421][TRACE][gateway.local ] [Rom the Spaceknight] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:08:41,425][DEBUG][gateway.local ] [Rom the Spaceknight] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/metadata-2]
[2012-01-16 09:08:41,426][TRACE][gateway.local ] [Rom the Spaceknight] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:08:41,426][DEBUG][gateway.local ] [Rom the Spaceknight] [find_latest_state]: no started shards loaded
[2012-01-16 09:08:41,522][DEBUG][gateway.local ] [Rom the Spaceknight] elected state from [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:08:41,533][INFO ][http ] [Rom the Spaceknight] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:08:41,534][DEBUG][cluster.service ] [Rom the Spaceknight] processing [local-gateway-elected-state]: execute
[2012-01-16 09:08:41,535][TRACE][jmx ] [Rom the Spaceknight] Registered org.elasticsearch.jmx.ResourceDMBean@273f212a under org.elasticsearch:service=transport
[2012-01-16 09:08:41,537][TRACE][jmx ] [Rom the Spaceknight] Registered org.elasticsearch.jmx.ResourceDMBean@219a6087 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:08:41,538][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: started
[2012-01-16 09:08:41,537][TRACE][cluster.service ] [Rom the Spaceknight] cluster state updated:
version [3], source [local-gateway-elected-state]
nodes:
[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:08:41,542][DEBUG][river.cluster ] [Rom the Spaceknight] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:08:41,542][DEBUG][river.cluster ] [Rom the Spaceknight] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:08:41,591][INFO ][gateway ] [Rom the Spaceknight] recovered [0] indices into cluster_state
[2012-01-16 09:08:41,592][DEBUG][cluster.service ] [Rom the Spaceknight] processing [local-gateway-elected-state]: done applying updated cluster_state
[2012-01-16 09:08:45,014][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x0aa1b4e7, /127.0.0.1:62811 => /127.0.0.1:9301]
[2012-01-16 09:08:48,012][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x4fb7a553, /10.0.1.5:62813 => /10.0.1.5:9301]
[2012-01-16 09:08:48,012][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x21c71508, /10.0.1.5:62814 => /10.0.1.5:9301]
[2012-01-16 09:08:48,013][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x0aa1b4e7, /127.0.0.1:62811 :> /127.0.0.1:9301]
[2012-01-16 09:08:48,014][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x061ffbcb, /10.0.1.5:62815 => /10.0.1.5:9301]
[2012-01-16 09:08:48,014][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x2fa847df, /10.0.1.5:62816 => /10.0.1.5:9301]
[2012-01-16 09:08:48,014][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x510699ea, /10.0.1.5:62817 => /10.0.1.5:9301]
[2012-01-16 09:08:48,015][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x2180e7a4, /10.0.1.5:62818 => /10.0.1.5:9301]
[2012-01-16 09:08:48,015][TRACE][transport.netty ] [Rom the Spaceknight] channel opened: [id: 0x26556949, /10.0.1.5:62819 => /10.0.1.5:9301]
[2012-01-16 09:08:48,044][DEBUG][transport.netty ] [Rom the Spaceknight] Connected to node [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]]
[2012-01-16 09:08:48,051][DEBUG][cluster.service ] [Rom the Spaceknight] processing [zen-disco-receive(join from node[[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:08:48,052][TRACE][cluster.service ] [Rom the Spaceknight] cluster state updated:
version [4], source [zen-disco-receive(join from node[[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])]
nodes:
[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]
[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:08:48,055][INFO ][cluster.service ] [Rom the Spaceknight] added {[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(join from node[[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])
[2012-01-16 09:08:48,058][DEBUG][cluster.service ] [Rom the Spaceknight] processing [zen-disco-receive(join from node[[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:08:48,059][DEBUG][river.cluster ] [Rom the Spaceknight] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:08:48,060][DEBUG][river.cluster ] [Rom the Spaceknight] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:08:51,419][DEBUG][cluster.service ] [Rom the Spaceknight] processing [routing-table-updater]: execute
[2012-01-16 09:08:51,419][DEBUG][cluster.service ] [Rom the Spaceknight] processing [routing-table-updater]: no change in cluster_state
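Because the ping to 9300 was refused, this node elected itself master on 9301, and the node listening on 9300 ([Mountjoy]) only joined afterwards. Once both nodes are up, cluster membership can be checked over this node's HTTP port (9200 here); assuming the 0.18-era REST endpoints, something like:

curl 'http://localhost:9200/_cluster/health?pretty=true'
curl 'http://localhost:9200/_cluster/nodes?pretty=true'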
[2012-01-16 09:09:04,721][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: stopping ...
[2012-01-16 09:09:04,769][DEBUG][netty.channel.socket.nio.SelectorUtil] CancelledKeyException raised by a Selector - JDK bug?
java.nio.channels.CancelledKeyException
at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:69)
at sun.nio.ch.KQueueSelectorImpl.updateSelectedKeys(KQueueSelectorImpl.java:105)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:74)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:38)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:165)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:09:04,771][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x67635aad, /10.0.1.5:62802 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,773][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x6cf84b0a, /10.0.1.5:62803 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,774][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x26a0c73f, /10.0.1.5:62805 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,775][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x12b27c38, /10.0.1.5:62804 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,776][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x4e3e97cd, /10.0.1.5:62807 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,777][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x16fa21a4, /10.0.1.5:62808 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,777][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x05790ce9, /10.0.1.5:62806 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,777][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x26556949, /10.0.1.5:62819 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,778][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x4fb7a553, /10.0.1.5:62813 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,781][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x21c71508, /10.0.1.5:62814 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,781][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x061ffbcb, /10.0.1.5:62815 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,782][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x2fa847df, /10.0.1.5:62816 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,782][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x2180e7a4, /10.0.1.5:62818 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,782][TRACE][transport.netty ] [Rom the Spaceknight] channel closed: [id: 0x510699ea, /10.0.1.5:62817 :> /10.0.1.5:9301]
[2012-01-16 09:09:04,788][TRACE][jmx ] [Rom the Spaceknight] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:09:04,788][TRACE][jmx ] [Rom the Spaceknight] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:09:04,789][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: stopped
[2012-01-16 09:09:04,789][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: closing ...
[2012-01-16 09:09:04,828][TRACE][node ] [Rom the Spaceknight] Close times for each service:
StopWatch 'node_close': running time = 12ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 008% indices
00000 000% routing
00000 000% cluster
00001 008% discovery
00006 050% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 008% node_cache
00000 000% script
00003 025% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:09:04,831][INFO ][node ] [Rom the Spaceknight] {0.18.7}[11860]: closed
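The self-election above happened because the 9301 node came up while nothing was answering on 9300 yet. The two nodes run from separate installs (the data path above is .../elasticsearch-0.18.7copy), so for a quick test the race can be avoided by starting the install whose node listens on 9300 first and only then the copy that listens on 9301, roughly like this (paths are placeholders; -f keeps the 0.18 startup script in the foreground):

cd ~/downloads/elasticsearch-0.18.7 && ./bin/elasticsearch -f       # first terminal: the node listening on 9300
cd ~/downloads/elasticsearch-0.18.7copy && ./bin/elasticsearch -f   # second terminal: the node listening on 9301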
[2012-01-16 09:09:13,303][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: initializing ...
[2012-01-16 09:09:13,313][INFO ][plugins ] [Jade Dragon] loaded [], sites []
[2012-01-16 09:09:14,522][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:09:14,527][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:09:14,527][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:09:14,528][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:09:14,528][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:09:14,532][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:09:14,532][DEBUG][threadpool ] [Jade Dragon] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:09:14,544][DEBUG][transport.netty ] [Jade Dragon] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:09:14,564][DEBUG][discovery.zen.ping.unicast] [Jade Dragon] using initial hosts [localhost:9300, localhost:9301], with concurrent_connects [10]
[2012-01-16 09:09:14,568][DEBUG][discovery.zen ] [Jade Dragon] using ping.timeout [3s]
[2012-01-16 09:09:14,574][DEBUG][discovery.zen.elect ] [Jade Dragon] using minimum_master_nodes [-1]
[2012-01-16 09:09:14,575][DEBUG][discovery.zen.fd ] [Jade Dragon] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:09:14,580][DEBUG][discovery.zen.fd ] [Jade Dragon] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:09:14,606][DEBUG][monitor.jvm ] [Jade Dragon] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:09:15,133][DEBUG][monitor.os ] [Jade Dragon] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@a51064e] with refresh_interval [1s]
[2012-01-16 09:09:15,139][DEBUG][monitor.process ] [Jade Dragon] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@7463e563] with refresh_interval [1s]
[2012-01-16 09:09:15,143][DEBUG][monitor.jvm ] [Jade Dragon] Using refresh_interval [1s]
[2012-01-16 09:09:15,144][DEBUG][monitor.network ] [Jade Dragon] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@40c07527] with refresh_interval [5s]
[2012-01-16 09:09:15,154][DEBUG][monitor.network ] [Jade Dragon] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:09:15,157][TRACE][monitor.network ] [Jade Dragon] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:18419 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:18419 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3693339 (3.5M) TX bytes:3693339 (3.5M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833847 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502811 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818245485 (3.6G) TX bytes:117506240 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:09:15,159][TRACE][env ] [Jade Dragon] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:09:15,191][DEBUG][env ] [Jade Dragon] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:09:15,192][TRACE][env ] [Jade Dragon] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:09:15,483][DEBUG][cache.memory ] [Jade Dragon] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:09:15,496][DEBUG][cluster.routing.allocation.decider] [Jade Dragon] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:09:15,497][DEBUG][cluster.routing.allocation.decider] [Jade Dragon] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:09:15,497][DEBUG][cluster.routing.allocation.decider] [Jade Dragon] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:09:15,501][DEBUG][gateway.local ] [Jade Dragon] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:09:15,527][DEBUG][indices.recovery ] [Jade Dragon] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:09:15,708][TRACE][jmx ] [Jade Dragon] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:09:15,708][TRACE][jmx ] [Jade Dragon] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,709][TRACE][jmx ] [Jade Dragon] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,712][TRACE][jmx ] [Jade Dragon] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:09:15,712][TRACE][jmx ] [Jade Dragon] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:09:15,712][TRACE][jmx ] [Jade Dragon] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,713][TRACE][jmx ] [Jade Dragon] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:09:15,713][TRACE][jmx ] [Jade Dragon] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,713][TRACE][jmx ] [Jade Dragon] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:09:15,713][TRACE][jmx ] [Jade Dragon] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,714][TRACE][jmx ] [Jade Dragon] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:09:15,714][TRACE][jmx ] [Jade Dragon] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,715][TRACE][jmx ] [Jade Dragon] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,717][TRACE][jmx ] [Jade Dragon] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:15,718][DEBUG][http.netty ] [Jade Dragon] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:09:15,724][DEBUG][indices.memory ] [Jade Dragon] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:09:15,737][DEBUG][indices.cache.filter ] [Jade Dragon] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:09:15,825][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: initialized
[2012-01-16 09:09:15,826][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: starting ...
[2012-01-16 09:09:15,853][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:09:15,942][DEBUG][transport.netty ] [Jade Dragon] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:09:15,944][INFO ][transport ] [Jade Dragon] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:09:16,031][TRACE][discovery ] [Jade Dragon] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:09:16,059][DEBUG][transport.netty ] [Jade Dragon] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:09:16,060][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:09:16,092][DEBUG][transport.netty ] [Jade Dragon] Connected to node [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:09:16,093][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:09:16,097][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x785606f3, /127.0.0.1:62837 => /127.0.0.1:9301]
[2012-01-16 09:09:16,115][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:09:16,122][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:09:17,535][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:09:17,536][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:09:17,538][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:09:17,540][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:09:19,040][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:09:19,041][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:09:19,045][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:09:19,046][TRACE][discovery.zen.ping.unicast] [Jade Dragon] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:09:19,049][DEBUG][discovery.zen ] [Jade Dragon] ping responses:
--> target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]]
[2012-01-16 09:09:19,050][DEBUG][transport.netty ] [Jade Dragon] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:09:19,055][DEBUG][transport.netty ] [Jade Dragon] Disconnected from [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:09:19,058][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x785606f3, /127.0.0.1:62837 :> /127.0.0.1:9301]
[2012-01-16 09:09:19,070][DEBUG][transport.netty ] [Jade Dragon] Connected to node [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]]
[2012-01-16 09:09:19,079][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x2f368c5d, /10.0.1.5:62845 => /10.0.1.5:9301]
[2012-01-16 09:09:19,081][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x05b31fd9, /10.0.1.5:62846 => /10.0.1.5:9301]
[2012-01-16 09:09:19,091][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x604ee1f1, /10.0.1.5:62847 => /10.0.1.5:9301]
[2012-01-16 09:09:19,094][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x0519549e, /10.0.1.5:62848 => /10.0.1.5:9301]
[2012-01-16 09:09:19,095][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x420253af, /10.0.1.5:62849 => /10.0.1.5:9301]
[2012-01-16 09:09:19,095][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x26c42804, /10.0.1.5:62850 => /10.0.1.5:9301]
[2012-01-16 09:09:19,096][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x181f327e, /10.0.1.5:62851 => /10.0.1.5:9301]
[2012-01-16 09:09:19,101][DEBUG][discovery.zen.fd ] [Jade Dragon] [master] starting fault detection against master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:09:19,107][DEBUG][discovery.zen ] [Jade Dragon] got a new state from master node, though we are already trying to rejoin the cluster
[2012-01-16 09:09:19,109][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:09:19,110][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [5], source [zen-disco-join (detected master)]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:19,119][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x083ba4f1, /10.0.1.5:62852 => /10.0.1.5:9301]
[2012-01-16 09:09:19,120][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x03c9ce70, /10.0.1.5:62853 => /10.0.1.5:9301]
[2012-01-16 09:09:19,121][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x1e37504d, /10.0.1.5:62854 => /10.0.1.5:9301]
[2012-01-16 09:09:19,121][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x24b6a561, /10.0.1.5:62855 => /10.0.1.5:9301]
[2012-01-16 09:09:19,123][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x140e3010, /10.0.1.5:62856 => /10.0.1.5:9301]
[2012-01-16 09:09:19,125][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x19176e5f, /10.0.1.5:62857 => /10.0.1.5:9301]
[2012-01-16 09:09:19,125][DEBUG][transport.netty ] [Jade Dragon] Connected to node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:09:19,126][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x1be2f6b0, /10.0.1.5:62858 => /10.0.1.5:9301]
[2012-01-16 09:09:19,126][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:09:19,126][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-receive(from master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:09:19,126][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [6], source [zen-disco-receive(from master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])]
nodes:
[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], master
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:19,127][INFO ][cluster.service ] [Jade Dragon] detected_master [Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], added {[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])
[2012-01-16 09:09:19,127][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-receive(from master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:09:19,132][TRACE][discovery ] [Jade Dragon] initial state set from discovery
[2012-01-16 09:09:19,133][INFO ][discovery ] [Jade Dragon] elasticsearch/8tXWQ1MKTYCHwsZyprSdOA
[2012-01-16 09:09:19,133][TRACE][gateway.local ] [Jade Dragon] [find_latest_state]: processing [metadata-3]
[2012-01-16 09:09:19,138][DEBUG][gateway.local ] [Jade Dragon] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/metadata-3]
[2012-01-16 09:09:19,139][TRACE][gateway.local ] [Jade Dragon] [find_latest_state]: processing [metadata-3]
[2012-01-16 09:09:19,139][DEBUG][gateway.local ] [Jade Dragon] [find_latest_state]: no started shards loaded
[2012-01-16 09:09:19,145][INFO ][http ] [Jade Dragon] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:09:19,145][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@28a7bd7a under org.elasticsearch:service=transport
[2012-01-16 09:09:19,145][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@1c88a970 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:09:19,146][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: started
[2012-01-16 09:09:33,657][INFO ][discovery.zen ] [Jade Dragon] master_left [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], reason [shut_down]
[2012-01-16 09:09:33,659][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-master_failed ([Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]])]: execute
[2012-01-16 09:09:33,659][DEBUG][discovery.zen.fd ] [Jade Dragon] [master] stopping fault detection against master [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:09:33,660][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [7], source [zen-disco-master_failed ([Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:33,660][INFO ][cluster.service ] [Jade Dragon] master {new [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], previous [Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]}, removed {[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]],}, reason: zen-disco-master_failed ([Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]])
[2012-01-16 09:09:33,669][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:09:33,670][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:09:33,673][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x05b31fd9, /10.0.1.5:62846 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,674][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x604ee1f1, /10.0.1.5:62847 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,676][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x26c42804, /10.0.1.5:62850 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,674][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x420253af, /10.0.1.5:62849 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,682][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x181f327e, /10.0.1.5:62851 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,674][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x0519549e, /10.0.1.5:62848 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,690][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x2f368c5d, /10.0.1.5:62845 :> /10.0.1.5:9301]
[2012-01-16 09:09:33,697][DEBUG][transport.netty ] [Jade Dragon] Disconnected from [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]]
[2012-01-16 09:09:33,697][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-master_failed ([Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]])]: done applying updated cluster_state
[2012-01-16 09:09:33,698][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: execute
[2012-01-16 09:09:33,699][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:09:40,278][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x14235085, /127.0.0.1:62861 => /127.0.0.1:9301]
[2012-01-16 09:09:43,258][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x17510d96, /10.0.1.5:62863 => /10.0.1.5:9301]
[2012-01-16 09:09:43,258][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x14235085, /127.0.0.1:62861 :> /127.0.0.1:9301]
[2012-01-16 09:09:43,258][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x0ed6ee28, /10.0.1.5:62864 => /10.0.1.5:9301]
[2012-01-16 09:09:43,259][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x41aef798, /10.0.1.5:62865 => /10.0.1.5:9301]
[2012-01-16 09:09:43,259][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x46013dd8, /10.0.1.5:62866 => /10.0.1.5:9301]
[2012-01-16 09:09:43,260][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x7b8353cf, /10.0.1.5:62867 => /10.0.1.5:9301]
[2012-01-16 09:09:43,260][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x6af37a62, /10.0.1.5:62868 => /10.0.1.5:9301]
[2012-01-16 09:09:43,261][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x16e7eec9, /10.0.1.5:62869 => /10.0.1.5:9301]
[2012-01-16 09:09:43,289][DEBUG][transport.netty ] [Jade Dragon] Connected to node [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]]
[2012-01-16 09:09:43,290][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-receive(join from node[[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:09:43,291][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [8], source [zen-disco-receive(join from node[[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:43,292][INFO ][cluster.service ] [Jade Dragon] added {[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(join from node[[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]])
[2012-01-16 09:09:43,295][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-receive(join from node[[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:09:43,296][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:09:43,297][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:09:43,661][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: execute
[2012-01-16 09:09:43,662][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:09:56,943][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-node_left([She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]])]: execute
[2012-01-16 09:09:56,943][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [9], source [zen-disco-node_left([She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:56,943][INFO ][cluster.service ] [Jade Dragon] removed {[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]],}, reason: zen-disco-node_left([She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]])
[2012-01-16 09:09:56,943][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-node_left([She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]])]: done applying updated cluster_state
[2012-01-16 09:09:56,944][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: execute
[2012-01-16 09:09:56,944][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:09:56,947][DEBUG][transport.netty ] [Jade Dragon] Disconnected from [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]]
[2012-01-16 09:09:56,954][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:09:56,955][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:09:56,960][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x17510d96, /10.0.1.5:62863 :> /10.0.1.5:9301]
[2012-01-16 09:09:56,961][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x7b8353cf, /10.0.1.5:62867 :> /10.0.1.5:9301]
[2012-01-16 09:09:56,961][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x0ed6ee28, /10.0.1.5:62864 :> /10.0.1.5:9301]
[2012-01-16 09:09:56,961][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x6af37a62, /10.0.1.5:62868 :> /10.0.1.5:9301]
[2012-01-16 09:09:56,961][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x16e7eec9, /10.0.1.5:62869 :> /10.0.1.5:9301]
[2012-01-16 09:09:56,962][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x41aef798, /10.0.1.5:62865 :> /10.0.1.5:9301]
[2012-01-16 09:09:56,977][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x46013dd8, /10.0.1.5:62866 :> /10.0.1.5:9301]
[2012-01-16 09:10:02,164][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x461979eb, /127.0.0.1:62886 => /127.0.0.1:9301]
[2012-01-16 09:10:05,164][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x0576eeb9, /10.0.1.5:62888 => /10.0.1.5:9301]
[2012-01-16 09:10:05,165][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x4332b67c, /10.0.1.5:62889 => /10.0.1.5:9301]
[2012-01-16 09:10:05,165][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x366aa95b, /10.0.1.5:62890 => /10.0.1.5:9301]
[2012-01-16 09:10:05,166][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x1494b146, /10.0.1.5:62891 => /10.0.1.5:9301]
[2012-01-16 09:10:05,166][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x461979eb, /127.0.0.1:62886 :> /127.0.0.1:9301]
[2012-01-16 09:10:05,167][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x4f980c26, /10.0.1.5:62892 => /10.0.1.5:9301]
[2012-01-16 09:10:05,167][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x0745bb9d, /10.0.1.5:62893 => /10.0.1.5:9301]
[2012-01-16 09:10:05,167][TRACE][transport.netty ] [Jade Dragon] channel opened: [id: 0x4b5a142f, /10.0.1.5:62894 => /10.0.1.5:9301]
[2012-01-16 09:10:05,210][DEBUG][transport.netty ] [Jade Dragon] Connected to node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:10:05,211][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-receive(join from node[[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:10:05,211][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [10], source [zen-disco-receive(join from node[[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:10:05,211][INFO ][cluster.service ] [Jade Dragon] added {[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(join from node[[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]])
[2012-01-16 09:10:05,212][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:10:05,212][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:10:05,212][DEBUG][cluster.service ] [Jade Dragon] processing [zen-disco-receive(join from node[[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:10:13,664][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: execute
[2012-01-16 09:10:13,664][DEBUG][cluster.service ] [Jade Dragon] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:14:53,684][TRACE][http.netty ] [Jade Dragon] channel opened: [id: 0x1420ca8b, /0:0:0:0:0:0:0:1%0:62994 => /0:0:0:0:0:0:0:1%0:9200]
[2012-01-16 09:14:53,867][DEBUG][cluster.service ] [Jade Dragon] processing [create-index [twitter], cause [auto(index api)]]: execute
[2012-01-16 09:14:53,896][DEBUG][indices ] [Jade Dragon] creating Index [twitter], shards [5]/[1]
[2012-01-16 09:14:54,321][DEBUG][index.mapper ] [Jade Dragon] [twitter] using dynamic[true], default mapping: location[null] and source[{
"_default_" : {
}
}]
[2012-01-16 09:14:54,323][DEBUG][index.cache.field.data.resident] [Jade Dragon] [twitter] using [resident] field cache with max_size [-1], expire [null]
[2012-01-16 09:14:54,334][DEBUG][index.cache ] [Jade Dragon] [twitter] Using stats.refresh_interval [1s]
[2012-01-16 09:14:54,411][TRACE][jmx ] [Jade Dragon] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,412][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@28b7f2d0 under org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:14:54,430][INFO ][cluster.metadata ] [Jade Dragon] [twitter] creating index, cause [auto(index api)], shards [5]/[1], mappings []
[2012-01-16 09:14:54,532][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [11], source [create-index [twitter], cause [auto(index api)]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
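For reference, the routing table above shows the five primaries of [twitter] initializing across the two nodes while all five replicas are still unassigned, matching the shards [5]/[1] reported at index-creation time. The equivalent default settings, shown here only as an illustration and not taken from the original configuration, would be:

# illustrative defaults corresponding to shards [5]/[1]
index.number_of_shards: 5
index.number_of_replicas: 1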
[2012-01-16 09:14:54,533][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:14:54,533][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:14:54,534][DEBUG][indices.cluster ] [Jade Dragon] [twitter][0] creating shard
[2012-01-16 09:14:54,534][DEBUG][index.service ] [Jade Dragon] [twitter] creating shard_id [0]
[2012-01-16 09:14:54,847][DEBUG][index.deletionpolicy ] [Jade Dragon] [twitter][0] Using [keep_only_last] deletion policy
[2012-01-16 09:14:54,850][DEBUG][index.merge.policy ] [Jade Dragon] [twitter][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:14:54,851][DEBUG][index.merge.scheduler ] [Jade Dragon] [twitter][0] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:14:54,853][DEBUG][index.shard.service ] [Jade Dragon] [twitter][0] state: [CREATED]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,856][TRACE][jmx ] [Jade Dragon] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,857][TRACE][jmx ] [Jade Dragon] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,857][TRACE][jmx ] [Jade Dragon] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:14:54,857][TRACE][jmx ] [Jade Dragon] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:54,857][TRACE][jmx ] [Jade Dragon] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,857][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@7168c1e1 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:14:54,858][TRACE][jmx ] [Jade Dragon] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:54,858][TRACE][jmx ] [Jade Dragon] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,858][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@308c666a under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:14:54,859][DEBUG][index.translog ] [Jade Dragon] [twitter][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:14:54,864][DEBUG][index.shard.service ] [Jade Dragon] [twitter][0] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:14:54,865][DEBUG][indices.cluster ] [Jade Dragon] [twitter][2] creating shard
[2012-01-16 09:14:54,877][DEBUG][index.service ] [Jade Dragon] [twitter] creating shard_id [2]
[2012-01-16 09:14:54,865][DEBUG][index.gateway ] [Jade Dragon] [twitter][0] starting recovery from local ...
[2012-01-16 09:14:54,928][DEBUG][index.engine.robin ] [Jade Dragon] [twitter][0] Starting engine
[2012-01-16 09:14:54,974][DEBUG][index.deletionpolicy ] [Jade Dragon] [twitter][2] Using [keep_only_last] deletion policy
[2012-01-16 09:14:54,975][DEBUG][index.merge.policy ] [Jade Dragon] [twitter][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:14:54,976][DEBUG][index.merge.scheduler ] [Jade Dragon] [twitter][2] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:14:54,977][DEBUG][index.shard.service ] [Jade Dragon] [twitter][2] state: [CREATED]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,980][TRACE][jmx ] [Jade Dragon] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,981][TRACE][jmx ] [Jade Dragon] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,981][TRACE][jmx ] [Jade Dragon] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:14:54,981][TRACE][jmx ] [Jade Dragon] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:54,981][TRACE][jmx ] [Jade Dragon] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,981][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@3ac58af4 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:14:54,982][TRACE][jmx ] [Jade Dragon] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:54,982][TRACE][jmx ] [Jade Dragon] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:54,982][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@5262667 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:14:54,983][DEBUG][index.translog ] [Jade Dragon] [twitter][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:14:54,984][DEBUG][indices.memory ] [Jade Dragon] recalculating shard indexing buffer (reason=created_shard[twitter][2]), total is [101.9mb] with [1] active shards, each shard set to [101.9mb]
[2012-01-16 09:14:55,200][DEBUG][index.shard.service ] [Jade Dragon] [twitter][2] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:14:55,202][DEBUG][indices.cluster ] [Jade Dragon] [twitter][4] creating shard
[2012-01-16 09:14:55,202][DEBUG][index.service ] [Jade Dragon] [twitter] creating shard_id [4]
[2012-01-16 09:14:55,202][DEBUG][index.gateway ] [Jade Dragon] [twitter][2] starting recovery from local ...
[2012-01-16 09:14:55,200][DEBUG][index.shard.service ] [Jade Dragon] [twitter][0] scheduling refresher every 1s
[2012-01-16 09:14:55,219][DEBUG][index.engine.robin ] [Jade Dragon] [twitter][2] Starting engine
[2012-01-16 09:14:55,306][DEBUG][index.shard.service ] [Jade Dragon] [twitter][0] scheduling optimizer / merger every 1s
[2012-01-16 09:14:55,306][DEBUG][index.shard.service ] [Jade Dragon] [twitter][0] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:14:55,306][TRACE][index.shard.service ] [Jade Dragon] [twitter][0] refresh with waitForOperations[false]
[2012-01-16 09:14:55,307][DEBUG][index.gateway ] [Jade Dragon] [twitter][0] recovery completed from local, took [441ms]
index : files [0] with total_size [0b], took[49ms]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [428ms]
[2012-01-16 09:14:55,307][DEBUG][cluster.action.shard ] [Jade Dragon] sending shard started for [twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:14:55,307][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:14:55,359][DEBUG][index.deletionpolicy ] [Jade Dragon] [twitter][4] Using [keep_only_last] deletion policy
[2012-01-16 09:14:55,359][DEBUG][index.merge.policy ] [Jade Dragon] [twitter][4] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:14:55,360][DEBUG][index.merge.scheduler ] [Jade Dragon] [twitter][4] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:14:55,360][DEBUG][index.shard.service ] [Jade Dragon] [twitter][4] state: [CREATED]
[2012-01-16 09:14:55,365][TRACE][jmx ] [Jade Dragon] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:55,366][TRACE][jmx ] [Jade Dragon] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:14:55,367][TRACE][jmx ] [Jade Dragon] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:14:55,368][TRACE][jmx ] [Jade Dragon] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:55,369][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@7b537060 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:14:55,370][TRACE][jmx ] [Jade Dragon] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:14:55,371][TRACE][jmx ] [Jade Dragon] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:55,371][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@17b60b6 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:14:55,371][DEBUG][index.translog ] [Jade Dragon] [twitter][4] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:14:55,372][DEBUG][indices.memory ] [Jade Dragon] recalculating shard indexing buffer (reason=created_shard[twitter][4]), total is [101.9mb] with [2] active shards, each shard set to [50.9mb]
[2012-01-16 09:14:55,423][DEBUG][index.shard.service ] [Jade Dragon] [twitter][2] scheduling refresher every 1s
[2012-01-16 09:14:55,423][DEBUG][index.shard.service ] [Jade Dragon] [twitter][4] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:14:55,423][DEBUG][index.shard.service ] [Jade Dragon] [twitter][2] scheduling optimizer / merger every 1s
[2012-01-16 09:14:55,423][DEBUG][index.shard.service ] [Jade Dragon] [twitter][2] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:14:55,424][TRACE][index.shard.service ] [Jade Dragon] [twitter][2] refresh with waitForOperations[false]
[2012-01-16 09:14:55,424][DEBUG][index.gateway ] [Jade Dragon] [twitter][2] recovery completed from local, took [222ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [205ms]
[2012-01-16 09:14:55,424][DEBUG][cluster.action.shard ] [Jade Dragon] sending shard started for [twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:14:55,424][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:14:55,424][DEBUG][index.gateway ] [Jade Dragon] [twitter][4] starting recovery from local ...
[2012-01-16 09:14:55,425][DEBUG][index.engine.robin ] [Jade Dragon] [twitter][4] Starting engine
[2012-01-16 09:14:55,429][DEBUG][cluster.service ] [Jade Dragon] processing [create-index [twitter], cause [auto(index api)]]: done applying updated cluster_state
[2012-01-16 09:14:55,429][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:14:55,430][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], [twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:14:55,431][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [12], source [shard-started ([twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:14:55,432][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:14:55,432][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:14:55,435][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2012-01-16 09:14:55,436][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:14:55,436][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2012-01-16 09:14:55,481][DEBUG][index.shard.service ] [Jade Dragon] [twitter][4] scheduling refresher every 1s
[2012-01-16 09:14:55,481][DEBUG][index.shard.service ] [Jade Dragon] [twitter][4] scheduling optimizer / merger every 1s
[2012-01-16 09:14:55,481][DEBUG][index.shard.service ] [Jade Dragon] [twitter][4] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:14:55,481][TRACE][index.shard.service ] [Jade Dragon] [twitter][4] refresh with waitForOperations[false]
[2012-01-16 09:14:55,481][DEBUG][index.gateway ] [Jade Dragon] [twitter][4] recovery completed from local, took [57ms]
index : files [0] with total_size [0b], took[1ms]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [57ms]
[2012-01-16 09:14:55,481][DEBUG][cluster.action.shard ] [Jade Dragon] sending shard started for [twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:14:55,482][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:14:55,482][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:14:55,482][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:14:55,484][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [13], source [shard-started ([twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:14:55,484][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:14:55,484][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:14:55,486][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2012-01-16 09:15:00,908][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:15:00,909][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:15:00,910][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:15:00,916][TRACE][gateway.local ] [Jade Dragon] [twitter][0], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:00,916][TRACE][gateway.local ] [Jade Dragon] [twitter][0], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:00,925][TRACE][gateway.local ] [Jade Dragon] [twitter][1], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:00,925][TRACE][gateway.local ] [Jade Dragon] [twitter][1], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:00,938][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:00,949][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:00,950][TRACE][gateway.local ] [Jade Dragon] [twitter][2], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:00,951][TRACE][gateway.local ] [Jade Dragon] [twitter][2], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:00,952][TRACE][gateway.local ] [Jade Dragon] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:00,952][TRACE][gateway.local ] [Jade Dragon] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:00,954][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [14], source [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:00,954][DEBUG][indices.cluster ] [Jade Dragon] [twitter][1] creating shard
[2012-01-16 09:15:00,954][DEBUG][index.service ] [Jade Dragon] [twitter] creating shard_id [1]
[2012-01-16 09:15:00,954][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:01,000][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:01,000][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:15:01,060][DEBUG][index.deletionpolicy ] [Jade Dragon] [twitter][1] Using [keep_only_last] deletion policy
[2012-01-16 09:15:01,061][DEBUG][index.merge.policy ] [Jade Dragon] [twitter][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:01,061][DEBUG][index.merge.scheduler ] [Jade Dragon] [twitter][1] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:01,061][DEBUG][index.shard.service ] [Jade Dragon] [twitter][1] state: [CREATED]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,064][TRACE][jmx ] [Jade Dragon] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,075][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@6d386751 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:15:01,076][TRACE][jmx ] [Jade Dragon] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,078][TRACE][jmx ] [Jade Dragon] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,082][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@3bdbe135 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:15:01,082][DEBUG][index.translog ] [Jade Dragon] [twitter][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:01,113][DEBUG][indices.memory ] [Jade Dragon] recalculating shard indexing buffer (reason=created_shard[twitter][1]), total is [101.9mb] with [3] active shards, each shard set to [33.9mb]
[2012-01-16 09:15:01,210][DEBUG][index.shard.service ] [Jade Dragon] [twitter][1] state: [CREATED]->[RECOVERING], reason [from [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:01,216][TRACE][indices.recovery ] [Jade Dragon] [twitter][1] starting recovery from [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
[2012-01-16 09:15:01,280][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2012-01-16 09:15:01,280][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:15:01,280][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], [twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:01,281][TRACE][gateway.local ] [Jade Dragon] [twitter][2], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:01,282][TRACE][gateway.local ] [Jade Dragon] [twitter][2], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:01,448][TRACE][gateway.local ] [Jade Dragon] [twitter][3], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:01,449][TRACE][gateway.local ] [Jade Dragon] [twitter][3], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:01,449][TRACE][gateway.local ] [Jade Dragon] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:01,450][TRACE][gateway.local ] [Jade Dragon] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:01,452][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [15], source [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:01,453][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:01,453][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:01,454][DEBUG][indices.cluster ] [Jade Dragon] [twitter][3] creating shard
[2012-01-16 09:15:01,454][DEBUG][index.service ] [Jade Dragon] [twitter] creating shard_id [3]
[2012-01-16 09:15:01,582][DEBUG][index.deletionpolicy ] [Jade Dragon] [twitter][3] Using [keep_only_last] deletion policy
[2012-01-16 09:15:01,676][DEBUG][index.merge.policy ] [Jade Dragon] [twitter][3] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:01,676][DEBUG][index.merge.scheduler ] [Jade Dragon] [twitter][3] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:01,677][DEBUG][index.shard.service ] [Jade Dragon] [twitter][3] state: [CREATED]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,679][TRACE][jmx ] [Jade Dragon] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,680][TRACE][jmx ] [Jade Dragon] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:01,680][TRACE][jmx ] [Jade Dragon] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,680][TRACE][jmx ] [Jade Dragon] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,680][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@59829c6b under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:15:01,681][TRACE][jmx ] [Jade Dragon] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,681][TRACE][jmx ] [Jade Dragon] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,681][TRACE][jmx ] [Jade Dragon] Registered org.elasticsearch.jmx.ResourceDMBean@589da1dd under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:15:01,682][DEBUG][index.translog ] [Jade Dragon] [twitter][3] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:01,683][DEBUG][indices.memory ] [Jade Dragon] recalculating shard indexing buffer (reason=created_shard[twitter][3]), total is [101.9mb] with [4] active shards, each shard set to [25.4mb]
[2012-01-16 09:15:01,683][DEBUG][index.shard.service ] [Jade Dragon] [twitter][3] state: [CREATED]->[RECOVERING], reason [from [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:01,695][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: done applying updated cluster_state
[2012-01-16 09:15:01,695][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:15:01,695][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:15:01,695][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:15:01,696][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2012-01-16 09:15:01,696][TRACE][indices.recovery ] [Jade Dragon] [twitter][3] starting recovery from [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
[2012-01-16 09:15:01,761][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:01,763][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:15:01,763][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:15:01,768][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] starting recovery to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], mark_as_relocated false
[2012-01-16 09:15:01,905][DEBUG][index.engine.robin ] [Jade Dragon] [twitter][1] Starting engine
[2012-01-16 09:15:01,969][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:15:01,969][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:15:02,125][DEBUG][index.engine.robin ] [Jade Dragon] [twitter][3] Starting engine
[2012-01-16 09:15:02,134][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] starting recovery to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], mark_as_relocated false
[2012-01-16 09:15:02,134][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:15:02,134][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:15:02,182][DEBUG][index.shard.service ] [Jade Dragon] [twitter][1] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:15:02,182][DEBUG][index.shard.service ] [Jade Dragon] [twitter][1] scheduling refresher every 1s
[2012-01-16 09:15:02,183][DEBUG][index.shard.service ] [Jade Dragon] [twitter][1] scheduling optimizer / merger every 1s
[2012-01-16 09:15:02,183][DEBUG][index.shard.service ] [Jade Dragon] [twitter][3] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:15:02,184][DEBUG][index.shard.service ] [Jade Dragon] [twitter][3] scheduling refresher every 1s
[2012-01-16 09:15:02,184][DEBUG][index.shard.service ] [Jade Dragon] [twitter][3] scheduling optimizer / merger every 1s
[2012-01-16 09:15:02,194][DEBUG][indices.recovery ] [Jade Dragon] [twitter][1] recovery completed from [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], took[914ms]
phase1: recovered_files [1] with total_size of [58b], took [336ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [253ms]
phase3: recovered [0] transaction log operations, took [42ms]
[2012-01-16 09:15:02,195][DEBUG][cluster.action.shard ] [Jade Dragon] sending shard started for [twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:15:02,195][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:15:02,195][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]]: execute
[2012-01-16 09:15:02,195][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:15:02,197][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [16], source [shard-started ([twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:02,199][TRACE][indices.cluster ] [Jade Dragon] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:15:02,199][DEBUG][cluster.action.shard ] [Jade Dragon] sending shard started for [twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:02,199][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:02,200][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]]: done applying updated cluster_state
[2012-01-16 09:15:02,200][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:15:02,200][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:02,195][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [58ms]
[2012-01-16 09:15:02,206][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:02,207][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:02,204][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [236ms]
[2012-01-16 09:15:02,197][DEBUG][indices.recovery ] [Jade Dragon] [twitter][3] recovery completed from [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], took[496ms]
phase1: recovered_files [1] with total_size of [58b], took [418ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [31ms]
phase3: recovered [0] transaction log operations, took [39ms]
[2012-01-16 09:15:02,209][DEBUG][cluster.action.shard ] [Jade Dragon] sending shard started for [twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:15:02,209][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:15:02,220][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [17], source [shard-started ([twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:02,223][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: done applying updated cluster_state
[2012-01-16 09:15:02,223][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]]: execute
[2012-01-16 09:15:02,223][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]]]: no change in cluster_state
[2012-01-16 09:15:02,224][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:02,226][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:02,230][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase2] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:15:02,231][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase2] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:15:02,263][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase2] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [32ms]
[2012-01-16 09:15:02,266][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase3] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:15:02,266][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase2] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [36ms]
[2012-01-16 09:15:02,267][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase3] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:15:02,279][TRACE][indices.recovery ] [Jade Dragon] [twitter][0] recovery [phase3] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [12ms]
[2012-01-16 09:15:02,280][TRACE][indices.recovery ] [Jade Dragon] [twitter][2] recovery [phase3] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [13ms]
[2012-01-16 09:15:02,283][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,283][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]: execute
[2012-01-16 09:15:02,283][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,284][TRACE][gateway.local ] [Jade Dragon] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:02,303][TRACE][gateway.local ] [Jade Dragon] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:02,298][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,306][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [18], source [shard-started ([twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:15:02,309][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]: done applying updated cluster_state
[2012-01-16 09:15:02,309][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]: execute
[2012-01-16 09:15:02,309][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,310][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [19], source [shard-started ([twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:15:02,311][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:02,317][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:02,332][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:02,332][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:02,333][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]: done applying updated cluster_state
[2012-01-16 09:15:02,337][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:02,337][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:15:02,338][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:15:02,553][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] starting recovery to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], mark_as_relocated false
[2012-01-16 09:15:02,553][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:15:02,553][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:15:02,573][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase1] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [20ms]
[2012-01-16 09:15:02,575][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase2] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:15:02,587][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase2] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [12ms]
[2012-01-16 09:15:02,588][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase3] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:15:02,595][TRACE][indices.recovery ] [Jade Dragon] [twitter][4] recovery [phase3] to [Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]: took [7ms]
[2012-01-16 09:15:02,597][DEBUG][cluster.action.shard ] [Jade Dragon] received shard started for [twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,598][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]: execute
[2012-01-16 09:15:02,598][DEBUG][cluster.action.shard ] [Jade Dragon] applying started shards [[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,599][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [20], source [shard-started ([twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
---- unassigned
[2012-01-16 09:15:02,605][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:02,605][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:02,606][DEBUG][cluster.service ] [Jade Dragon] processing [shard-started ([twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]]: done applying updated cluster_state
[2012-01-16 09:15:02,621][TRACE][index.shard.service ] [Jade Dragon] [twitter][2] index [Document<stored,binary,omitNorms,indexOptions=DOCS_ONLY<_source:[B@5e159d10> indexed,omitNorms,indexOptions=DOCS_ONLY<_type:tweet> stored,indexed,tokenized,omitNorms<_uid:> indexed,tokenized<user:kimchy> indexed,tokenized,omitNorms,indexOptions=DOCS_ONLY<post_date:> indexed,tokenized<message:trying out Elastic Search> indexed,tokenized<_all:>>]
[2012-01-16 09:15:02,729][DEBUG][cluster.service ] [Jade Dragon] processing [update-mapping [twitter][tweet]]: execute
[2012-01-16 09:15:02,748][DEBUG][cluster.metadata ] [Jade Dragon] [twitter] update_mapping [tweet] (dynamic) with source [{"tweet":{"properties":{"message":{"type":"string"},"post_date":{"type":"date","format":"dateOptionalTime"},"user":{"type":"string"}}}}]
[2012-01-16 09:15:02,786][TRACE][cluster.service ] [Jade Dragon] cluster state updated:
version [21], source [update-mapping [twitter][tweet]]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], local, master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
---- unassigned
[2012-01-16 09:15:02,788][DEBUG][cluster.service ] [Jade Dragon] processing [update-mapping [twitter][tweet]]: done applying updated cluster_state
[2012-01-16 09:15:02,788][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:15:02,806][DEBUG][river.cluster ] [Jade Dragon] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:15:02,830][TRACE][http.netty ] [Jade Dragon] channel closed: [id: 0x1420ca8b, /0:0:0:0:0:0:0:1%0:62994 :> /0:0:0:0:0:0:0:1%0:9200]
[2012-01-16 09:15:03,439][TRACE][index.shard.service ] [Jade Dragon] [twitter][2] refresh with waitForOperations[false]
[2012-01-16 09:15:14,177][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: stopping ...
[2012-01-16 09:15:14,208][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:15:14,209][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:15:14,209][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:15:14,209][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:15:14,209][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:15:14,209][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:15:14,210][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:15:14,210][DEBUG][index.shard.service ] [Jade Dragon] [twitter][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:14,208][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:15:14,210][DEBUG][index.shard.service ] [Jade Dragon] [twitter][1] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:14,210][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:15:14,221][DEBUG][index.shard.service ] [Jade Dragon] [twitter][3] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:14,209][DEBUG][index.shard.service ] [Jade Dragon] [twitter][4] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:14,212][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:15:14,230][DEBUG][index.shard.service ] [Jade Dragon] [twitter][2] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:14,240][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:15:14,256][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x19176e5f, /10.0.1.5:62857 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,262][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x140e3010, /10.0.1.5:62856 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,261][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x083ba4f1, /10.0.1.5:62852 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,262][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x24b6a561, /10.0.1.5:62855 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,261][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x1e37504d, /10.0.1.5:62854 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,265][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x1be2f6b0, /10.0.1.5:62858 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,264][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x0745bb9d, /10.0.1.5:62893 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,264][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x03c9ce70, /10.0.1.5:62853 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,266][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x0576eeb9, /10.0.1.5:62888 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,267][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x366aa95b, /10.0.1.5:62890 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,268][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x4b5a142f, /10.0.1.5:62894 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,271][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x4f980c26, /10.0.1.5:62892 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,274][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x1494b146, /10.0.1.5:62891 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,276][TRACE][transport.netty ] [Jade Dragon] channel closed: [id: 0x4332b67c, /10.0.1.5:62889 :> /10.0.1.5:9301]
[2012-01-16 09:15:14,281][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:15:14,281][TRACE][jmx ] [Jade Dragon] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:15:14,281][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: stopped
[2012-01-16 09:15:14,282][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: closing ...
[2012-01-16 09:15:14,337][TRACE][node ] [Jade Dragon] Close times for each service:
StopWatch 'node_close': running time = 48ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00025 052% indices
00000 000% routing
00000 000% cluster
00021 044% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 002% node_cache
00000 000% script
00001 002% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:15:14,338][INFO ][node ] [Jade Dragon] {0.18.7}[11896]: closed
[2012-01-16 09:15:25,311][INFO ][node ] [Living Totem] {0.18.7}[12022]: initializing ...
[2012-01-16 09:15:25,319][INFO ][plugins ] [Living Totem] loaded [], sites []
[2012-01-16 09:15:26,495][DEBUG][threadpool ] [Living Totem] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:15:26,499][DEBUG][threadpool ] [Living Totem] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:15:26,499][DEBUG][threadpool ] [Living Totem] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:15:26,499][DEBUG][threadpool ] [Living Totem] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:15:26,499][DEBUG][threadpool ] [Living Totem] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:15:26,503][DEBUG][threadpool ] [Living Totem] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:15:26,503][DEBUG][threadpool ] [Living Totem] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:15:26,516][DEBUG][transport.netty ] [Living Totem] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:15:26,534][DEBUG][discovery.zen.ping.unicast] [Living Totem] using initial hosts [localhost:9300, localhost:9301], with concurrent_connects [10]
[2012-01-16 09:15:26,538][DEBUG][discovery.zen ] [Living Totem] using ping.timeout [3s]
[2012-01-16 09:15:26,544][DEBUG][discovery.zen.elect ] [Living Totem] using minimum_master_nodes [-1]
[2012-01-16 09:15:26,547][DEBUG][discovery.zen.fd ] [Living Totem] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:15:26,551][DEBUG][discovery.zen.fd ] [Living Totem] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:15:26,573][DEBUG][monitor.jvm ] [Living Totem] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:15:27,083][DEBUG][monitor.os ] [Living Totem] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@75a9883d] with refresh_interval [1s]
[2012-01-16 09:15:27,088][DEBUG][monitor.process ] [Living Totem] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@6070c38c] with refresh_interval [1s]
[2012-01-16 09:15:27,092][DEBUG][monitor.jvm ] [Living Totem] Using refresh_interval [1s]
[2012-01-16 09:15:27,093][DEBUG][monitor.network ] [Living Totem] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@5c76458f] with refresh_interval [5s]
[2012-01-16 09:15:27,102][DEBUG][monitor.network ] [Living Totem] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:15:27,106][TRACE][monitor.network ] [Living Totem] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:22115 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:22115 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3946677 (3.8M) TX bytes:3946677 (3.8M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2841007 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1506897 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3828048357 (3.6G) TX bytes:117892668 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:15:27,108][TRACE][env ] [Living Totem] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:15:27,205][DEBUG][env ] [Living Totem] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:15:27,206][TRACE][env ] [Living Totem] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:15:27,522][DEBUG][cache.memory ] [Living Totem] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:15:27,535][DEBUG][cluster.routing.allocation.decider] [Living Totem] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:15:27,536][DEBUG][cluster.routing.allocation.decider] [Living Totem] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:15:27,536][DEBUG][cluster.routing.allocation.decider] [Living Totem] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:15:27,539][DEBUG][gateway.local ] [Living Totem] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:15:27,565][DEBUG][indices.recovery ] [Living Totem] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:15:27,751][TRACE][jmx ] [Living Totem] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:27,751][TRACE][jmx ] [Living Totem] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,757][TRACE][jmx ] [Living Totem] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,760][TRACE][jmx ] [Living Totem] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:15:27,764][TRACE][jmx ] [Living Totem] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:27,764][TRACE][jmx ] [Living Totem] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,764][TRACE][jmx ] [Living Totem] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,765][TRACE][jmx ] [Living Totem] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:27,766][DEBUG][http.netty ] [Living Totem] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:15:27,773][DEBUG][indices.memory ] [Living Totem] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:15:27,782][DEBUG][indices.cache.filter ] [Living Totem] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:15:27,867][INFO ][node ] [Living Totem] {0.18.7}[12022]: initialized
[2012-01-16 09:15:27,867][INFO ][node ] [Living Totem] {0.18.7}[12022]: starting ...
[2012-01-16 09:15:27,892][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:15:27,968][DEBUG][transport.netty ] [Living Totem] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:15:27,971][INFO ][transport ] [Living Totem] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:15:28,130][TRACE][discovery ] [Living Totem] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:15:28,156][DEBUG][transport.netty ] [Living Totem] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:15:28,157][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:15:28,186][DEBUG][transport.netty ] [Living Totem] Connected to node [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:15:28,186][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:15:28,193][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x513c952f, /127.0.0.1:62998 => /127.0.0.1:9301]
[2012-01-16 09:15:28,212][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:15:28,216][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:15:29,635][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:15:29,636][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:15:29,638][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:15:29,639][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:15:31,139][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:15:31,140][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:15:31,142][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:15:31,142][TRACE][discovery.zen.ping.unicast] [Living Totem] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:15:31,144][DEBUG][discovery.zen ] [Living Totem] ping responses:
--> target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:31,147][DEBUG][transport.netty ] [Living Totem] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:15:31,152][DEBUG][transport.netty ] [Living Totem] Disconnected from [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:15:31,155][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x513c952f, /127.0.0.1:62998 :> /127.0.0.1:9301]
[2012-01-16 09:15:31,170][DEBUG][transport.netty ] [Living Totem] Connected to node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:31,269][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x3f70119f, /10.0.1.5:63006 => /10.0.1.5:9301]
[2012-01-16 09:15:31,271][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x10bcc8f4, /10.0.1.5:63007 => /10.0.1.5:9301]
[2012-01-16 09:15:31,272][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x3a1be20c, /10.0.1.5:63008 => /10.0.1.5:9301]
[2012-01-16 09:15:31,275][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x0400c02a, /10.0.1.5:63009 => /10.0.1.5:9301]
[2012-01-16 09:15:31,276][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x083ba4f1, /10.0.1.5:63010 => /10.0.1.5:9301]
[2012-01-16 09:15:31,276][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x03c9ce70, /10.0.1.5:63011 => /10.0.1.5:9301]
[2012-01-16 09:15:31,277][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x66e90097, /10.0.1.5:63012 => /10.0.1.5:9301]
[2012-01-16 09:15:31,371][DEBUG][discovery.zen.fd ] [Living Totem] [master] starting fault detection against master [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:15:31,375][DEBUG][cluster.service ] [Living Totem] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:15:31,377][TRACE][cluster.service ] [Living Totem] cluster state updated:
version [22], source [zen-disco-join (detected master)]
nodes:
[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:15:31,383][DEBUG][transport.netty ] [Living Totem] Connected to node [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:31,383][DEBUG][cluster.service ] [Living Totem] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:15:31,383][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x0d335207, /10.0.1.5:63013 => /10.0.1.5:9301]
[2012-01-16 09:15:31,384][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x2d44b624, /10.0.1.5:63014 => /10.0.1.5:9301]
[2012-01-16 09:15:31,384][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x36fffa61, /10.0.1.5:63015 => /10.0.1.5:9301]
[2012-01-16 09:15:31,384][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x2a06bbe7, /10.0.1.5:63016 => /10.0.1.5:9301]
[2012-01-16 09:15:31,384][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x1f8a6890, /10.0.1.5:63017 => /10.0.1.5:9301]
[2012-01-16 09:15:31,385][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x37d6d61d, /10.0.1.5:63018 => /10.0.1.5:9301]
[2012-01-16 09:15:31,385][TRACE][transport.netty ] [Living Totem] channel opened: [id: 0x115872f5, /10.0.1.5:63019 => /10.0.1.5:9301]
[2012-01-16 09:15:32,379][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:33,382][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:34,384][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:35,386][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:36,388][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:37,390][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:38,392][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:39,394][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:40,397][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:41,399][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:42,402][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:43,404][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:44,406][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:45,408][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:46,411][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:47,412][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:48,414][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:49,416][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:50,418][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:51,420][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:52,421][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:53,430][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:54,439][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:55,443][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:56,444][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:57,447][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:58,131][WARN ][discovery ] [Living Totem] waited for 30s and no initial state was set by the discovery
[2012-01-16 09:15:58,131][INFO ][discovery ] [Living Totem] elasticsearch/2OkEhEd_RV2STF1a7BdfEw
[2012-01-16 09:15:58,132][TRACE][gateway.local ] [Living Totem] [find_latest_state]: processing [metadata-5]
[2012-01-16 09:15:58,158][TRACE][gateway.local ] [Living Totem] [find_latest_state]: processing [shards-20]
[2012-01-16 09:15:58,158][DEBUG][gateway.local ] [Living Totem] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/metadata-5]
[2012-01-16 09:15:58,167][TRACE][gateway.local ] [Living Totem] [find_latest_state]: processing [metadata-5]
[2012-01-16 09:15:58,168][TRACE][gateway.local ] [Living Totem] [find_latest_state]: processing [shards-20]
[2012-01-16 09:15:58,173][DEBUG][gateway.local ] [Living Totem] [find_latest_state]: loading started shards from [/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0/_state/shards-20]
[2012-01-16 09:15:58,174][DEBUG][gateway ] [Living Totem] can't wait on start for (possibly) reading state from gateway, will do it asynchronously
[2012-01-16 09:15:58,180][INFO ][http ] [Living Totem] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:15:58,181][TRACE][jmx ] [Living Totem] Registered org.elasticsearch.jmx.ResourceDMBean@4856d149 under org.elasticsearch:service=transport
[2012-01-16 09:15:58,181][TRACE][jmx ] [Living Totem] Registered org.elasticsearch.jmx.ResourceDMBean@3bc634b9 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:15:58,181][INFO ][node ] [Living Totem] {0.18.7}[12022]: started
[2012-01-16 09:15:58,449][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:59,452][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:15:59,696][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x083ba4f1, /10.0.1.5:63010 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,696][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x3f70119f, /10.0.1.5:63006 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,697][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x3a1be20c, /10.0.1.5:63008 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,697][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x66e90097, /10.0.1.5:63012 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,698][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x0400c02a, /10.0.1.5:63009 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,699][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x03c9ce70, /10.0.1.5:63011 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,699][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x10bcc8f4, /10.0.1.5:63007 :> /10.0.1.5:9301]
[2012-01-16 09:15:59,704][DEBUG][transport.netty ] [Living Totem] Disconnected from [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:15:59,712][TRACE][transport.netty ] [Living Totem] (Ignoring) Exception caught on netty layer [[id: 0x12b9b67b]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:15:59,716][TRACE][discovery.zen.fd ] [Living Totem] [master] [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]] transport disconnected (with verified connect)
[2012-01-16 09:15:59,736][DEBUG][discovery.zen.fd ] [Living Totem] [master] stopping fault detection against master [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], reason [master failure, transport disconnected (with verified connect)]
[2012-01-16 09:15:59,737][INFO ][discovery.zen ] [Living Totem] master_left [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], reason [transport disconnected (with verified connect)]
[2012-01-16 09:15:59,738][DEBUG][cluster.service ] [Living Totem] processing [zen-disco-master_failed ([Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]])]: execute
[2012-01-16 09:15:59,739][DEBUG][cluster.service ] [Living Totem] processing [zen-disco-master_failed ([Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]])]: no change in cluster_state
[2012-01-16 09:16:03,533][INFO ][node ] [Living Totem] {0.18.7}[12022]: stopping ...
[2012-01-16 09:16:03,545][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x0d335207, /10.0.1.5:63013 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,545][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x1f8a6890, /10.0.1.5:63017 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,545][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x2a06bbe7, /10.0.1.5:63016 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,545][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x2d44b624, /10.0.1.5:63014 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,546][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x115872f5, /10.0.1.5:63019 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,548][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x36fffa61, /10.0.1.5:63015 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,548][TRACE][transport.netty ] [Living Totem] channel closed: [id: 0x37d6d61d, /10.0.1.5:63018 :> /10.0.1.5:9301]
[2012-01-16 09:16:03,552][TRACE][jmx ] [Living Totem] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:16:03,552][TRACE][jmx ] [Living Totem] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:16:03,553][INFO ][node ] [Living Totem] {0.18.7}[12022]: stopped
[2012-01-16 09:16:03,553][INFO ][node ] [Living Totem] {0.18.7}[12022]: closing ...
[2012-01-16 09:16:03,565][TRACE][node ] [Living Totem] Close times for each service:
StopWatch 'node_close': running time = 4ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00000 000% indices
00000 000% routing
00000 000% cluster
00002 050% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 025% node_cache
00000 000% script
00001 025% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:16:03,568][INFO ][node ] [Living Totem] {0.18.7}[12022]: closed
[2012-01-16 09:17:24,771][INFO ][node ] [Jester] {0.18.7}[12085]: initializing ...
[2012-01-16 09:17:24,780][INFO ][plugins ] [Jester] loaded [], sites []
[2012-01-16 09:17:26,107][DEBUG][threadpool ] [Jester] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:17:26,111][DEBUG][threadpool ] [Jester] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:17:26,111][DEBUG][threadpool ] [Jester] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:17:26,112][DEBUG][threadpool ] [Jester] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:17:26,112][DEBUG][threadpool ] [Jester] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:17:26,116][DEBUG][threadpool ] [Jester] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:17:26,116][DEBUG][threadpool ] [Jester] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:17:26,128][DEBUG][transport.netty ] [Jester] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:17:26,149][DEBUG][discovery.zen.ping.unicast] [Jester] using initial hosts [localhost:9300], with concurrent_connects [10]
[2012-01-16 09:17:26,154][DEBUG][discovery.zen ] [Jester] using ping.timeout [3s]
[2012-01-16 09:17:26,161][DEBUG][discovery.zen.elect ] [Jester] using minimum_master_nodes [-1]
[2012-01-16 09:17:26,162][DEBUG][discovery.zen.fd ] [Jester] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:17:26,166][DEBUG][discovery.zen.fd ] [Jester] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:17:26,191][DEBUG][monitor.jvm ] [Jester] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:17:26,700][DEBUG][monitor.os ] [Jester] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@357c7988] with refresh_interval [1s]
[2012-01-16 09:17:26,723][DEBUG][monitor.process ] [Jester] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@3844006e] with refresh_interval [1s]
[2012-01-16 09:17:26,727][DEBUG][monitor.jvm ] [Jester] Using refresh_interval [1s]
[2012-01-16 09:17:26,728][DEBUG][monitor.network ] [Jester] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@3414a97b] with refresh_interval [5s]
[2012-01-16 09:17:26,738][DEBUG][monitor.network ] [Jester] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:17:26,741][TRACE][monitor.network ] [Jester] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:22542 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:22542 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3973460 (3.8M) TX bytes:3973460 (3.8M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2841176 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1507056 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3828128985 (3.6G) TX bytes:117930228 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:17:26,743][TRACE][env ] [Jester] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:17:26,779][DEBUG][env ] [Jester] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:17:26,779][TRACE][env ] [Jester] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:17:27,088][DEBUG][cache.memory ] [Jester] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:17:27,113][DEBUG][cluster.routing.allocation.decider] [Jester] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:17:27,114][DEBUG][cluster.routing.allocation.decider] [Jester] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:17:27,115][DEBUG][cluster.routing.allocation.decider] [Jester] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:17:27,118][DEBUG][gateway.local ] [Jester] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:17:27,168][DEBUG][indices.recovery ] [Jester] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:17:27,377][TRACE][jmx ] [Jester] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:27,379][TRACE][jmx ] [Jester] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,379][TRACE][jmx ] [Jester] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,382][TRACE][jmx ] [Jester] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:27,382][TRACE][jmx ] [Jester] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:27,382][TRACE][jmx ] [Jester] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,382][TRACE][jmx ] [Jester] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:27,382][TRACE][jmx ] [Jester] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,383][TRACE][jmx ] [Jester] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:27,383][TRACE][jmx ] [Jester] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,383][TRACE][jmx ] [Jester] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:27,383][TRACE][jmx ] [Jester] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,383][TRACE][jmx ] [Jester] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,383][TRACE][jmx ] [Jester] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:27,384][DEBUG][http.netty ] [Jester] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:17:27,390][DEBUG][indices.memory ] [Jester] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:17:27,400][DEBUG][indices.cache.filter ] [Jester] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:17:27,486][INFO ][node ] [Jester] {0.18.7}[12085]: initialized
[2012-01-16 09:17:27,486][INFO ][node ] [Jester] {0.18.7}[12085]: starting ...
[2012-01-16 09:17:27,510][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:17:27,590][DEBUG][transport.netty ] [Jester] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:17:27,592][INFO ][transport ] [Jester] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:17:27,669][TRACE][discovery ] [Jester] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:17:27,697][DEBUG][transport.netty ] [Jester] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:17:27,698][TRACE][discovery.zen.ping.unicast] [Jester] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:17:27,776][TRACE][discovery.zen.ping.unicast] [Jester] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:17:29,174][TRACE][discovery.zen.ping.unicast] [Jester] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:17:29,177][TRACE][discovery.zen.ping.unicast] [Jester] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:17:30,678][TRACE][discovery.zen.ping.unicast] [Jester] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:17:30,680][TRACE][discovery.zen.ping.unicast] [Jester] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:17:30,684][DEBUG][transport.netty ] [Jester] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:17:30,686][DEBUG][discovery.zen ] [Jester] ping responses:
--> target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:30,704][DEBUG][transport.netty ] [Jester] Connected to node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:30,714][TRACE][transport.netty ] [Jester] channel opened: [id: 0x4c4936f3, /10.0.1.5:63050 => /10.0.1.5:9301]
[2012-01-16 09:17:30,715][TRACE][transport.netty ] [Jester] channel opened: [id: 0x7d627b8b, /10.0.1.5:63051 => /10.0.1.5:9301]
[2012-01-16 09:17:30,719][TRACE][transport.netty ] [Jester] channel opened: [id: 0x06db248c, /10.0.1.5:63052 => /10.0.1.5:9301]
[2012-01-16 09:17:30,720][TRACE][transport.netty ] [Jester] channel opened: [id: 0x54dbb83a, /10.0.1.5:63053 => /10.0.1.5:9301]
[2012-01-16 09:17:30,720][TRACE][transport.netty ] [Jester] channel opened: [id: 0x3f9ab00e, /10.0.1.5:63054 => /10.0.1.5:9301]
[2012-01-16 09:17:30,721][TRACE][transport.netty ] [Jester] channel opened: [id: 0x449c87c1, /10.0.1.5:63055 => /10.0.1.5:9301]
[2012-01-16 09:17:30,722][TRACE][transport.netty ] [Jester] channel opened: [id: 0x0094b318, /10.0.1.5:63056 => /10.0.1.5:9301]
[2012-01-16 09:17:30,737][DEBUG][discovery.zen.fd ] [Jester] [master] starting fault detection against master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:17:30,745][DEBUG][cluster.service ] [Jester] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:17:30,746][TRACE][cluster.service ] [Jester] cluster state updated:
version [1], source [zen-disco-join (detected master)]
nodes:
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:17:30,751][TRACE][transport.netty ] [Jester] channel opened: [id: 0x32efe27b, /10.0.1.5:63057 => /10.0.1.5:9301]
[2012-01-16 09:17:30,756][TRACE][transport.netty ] [Jester] channel opened: [id: 0x420253af, /10.0.1.5:63058 => /10.0.1.5:9301]
[2012-01-16 09:17:30,758][TRACE][transport.netty ] [Jester] channel opened: [id: 0x26c42804, /10.0.1.5:63059 => /10.0.1.5:9301]
[2012-01-16 09:17:30,758][TRACE][transport.netty ] [Jester] channel opened: [id: 0x181f327e, /10.0.1.5:63060 => /10.0.1.5:9301]
[2012-01-16 09:17:30,759][TRACE][transport.netty ] [Jester] channel opened: [id: 0x659adc2c, /10.0.1.5:63061 => /10.0.1.5:9301]
[2012-01-16 09:17:30,759][TRACE][transport.netty ] [Jester] channel opened: [id: 0x19ed00d1, /10.0.1.5:63062 => /10.0.1.5:9301]
[2012-01-16 09:17:30,760][TRACE][transport.netty ] [Jester] channel opened: [id: 0x16d0a6a3, /10.0.1.5:63063 => /10.0.1.5:9301]
[2012-01-16 09:17:30,777][DEBUG][transport.netty ] [Jester] Connected to node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:30,777][DEBUG][cluster.service ] [Jester] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:17:30,777][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:30,777][TRACE][cluster.service ] [Jester] cluster state updated:
version [2], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:17:30,778][INFO ][cluster.service ] [Jester] detected_master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], added {[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]],}, reason: zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])
[2012-01-16 09:17:30,778][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:30,778][TRACE][discovery ] [Jester] initial state set from discovery
[2012-01-16 09:17:30,778][INFO ][discovery ] [Jester] elasticsearch/EjOqep-KRKqkFhqBSBA7tg
[2012-01-16 09:17:30,779][DEBUG][gateway.local ] [Jester] [find_latest_state]: no metadata state loaded
[2012-01-16 09:17:30,779][DEBUG][gateway.local ] [Jester] [find_latest_state]: no started shards loaded
[2012-01-16 09:17:30,785][INFO ][http ] [Jester] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:17:30,786][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@4c767fb3 under org.elasticsearch:service=transport
[2012-01-16 09:17:30,786][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@77b9e7fc under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:17:30,786][INFO ][node ] [Jester] {0.18.7}[12085]: started
[2012-01-16 09:17:42,497][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:42,499][TRACE][cluster.service ] [Jester] cluster state updated:
version [3], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:42,503][DEBUG][indices.cluster ] [Jester] [twitter] creating index
[2012-01-16 09:17:42,503][DEBUG][indices ] [Jester] creating Index [twitter], shards [5]/[1]
[2012-01-16 09:17:42,794][DEBUG][index.mapper ] [Jester] [twitter] using dynamic[true], default mapping: location[null] and source[{
"_default_" : {
}
}]
[2012-01-16 09:17:42,795][DEBUG][index.cache.field.data.resident] [Jester] [twitter] using [resident] field cache with max_size [-1], expire [null]
[2012-01-16 09:17:42,797][DEBUG][index.cache ] [Jester] [twitter] Using stats.refresh_interval [1s]
[2012-01-16 09:17:42,810][TRACE][jmx ] [Jester] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,810][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@5afc0f5 under org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:17:42,812][DEBUG][indices.cluster ] [Jester] [twitter][1] creating shard
[2012-01-16 09:17:42,812][DEBUG][index.service ] [Jester] [twitter] creating shard_id [1]
[2012-01-16 09:17:42,952][DEBUG][index.deletionpolicy ] [Jester] [twitter][1] Using [keep_only_last] deletion policy
[2012-01-16 09:17:42,954][DEBUG][index.merge.policy ] [Jester] [twitter][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:42,954][DEBUG][index.merge.scheduler ] [Jester] [twitter][1] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:42,957][DEBUG][index.shard.service ] [Jester] [twitter][1] state: [CREATED]
[2012-01-16 09:17:42,960][TRACE][jmx ] [Jester] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,960][TRACE][jmx ] [Jester] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,960][TRACE][jmx ] [Jester] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,961][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@64d1afd3 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:17:42,962][TRACE][jmx ] [Jester] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,963][TRACE][jmx ] [Jester] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,963][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@22e1469c under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:17:42,963][DEBUG][index.translog ] [Jester] [twitter][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:42,968][DEBUG][index.shard.service ] [Jester] [twitter][1] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:17:42,970][DEBUG][indices.cluster ] [Jester] [twitter][3] creating shard
[2012-01-16 09:17:42,970][DEBUG][index.service ] [Jester] [twitter] creating shard_id [3]
[2012-01-16 09:17:42,970][DEBUG][index.gateway ] [Jester] [twitter][1] starting recovery from local ...
[2012-01-16 09:17:42,982][DEBUG][index.engine.robin ] [Jester] [twitter][1] Starting engine
[2012-01-16 09:17:43,118][DEBUG][index.deletionpolicy ] [Jester] [twitter][3] Using [keep_only_last] deletion policy
[2012-01-16 09:17:43,119][DEBUG][index.merge.policy ] [Jester] [twitter][3] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:43,119][DEBUG][index.merge.scheduler ] [Jester] [twitter][3] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:43,120][DEBUG][index.shard.service ] [Jester] [twitter][3] state: [CREATED]
[2012-01-16 09:17:43,124][TRACE][jmx ] [Jester] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,124][TRACE][jmx ] [Jester] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,125][TRACE][jmx ] [Jester] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,125][TRACE][jmx ] [Jester] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,125][TRACE][jmx ] [Jester] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,125][TRACE][jmx ] [Jester] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,125][TRACE][jmx ] [Jester] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,125][TRACE][jmx ] [Jester] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,126][TRACE][jmx ] [Jester] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:43,126][TRACE][jmx ] [Jester] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,126][TRACE][jmx ] [Jester] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,126][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@2f6e4ddd under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:17:43,127][TRACE][jmx ] [Jester] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,127][TRACE][jmx ] [Jester] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,127][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@28c13406 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:17:43,128][DEBUG][index.translog ] [Jester] [twitter][3] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:43,128][DEBUG][indices.memory ] [Jester] recalculating shard indexing buffer (reason=created_shard[twitter][3]), total is [101.9mb] with [1] active shards, each shard set to [101.9mb]
[2012-01-16 09:17:43,135][DEBUG][index.shard.service ] [Jester] [twitter][3] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:17:43,135][DEBUG][index.gateway ] [Jester] [twitter][3] starting recovery from local ...
[2012-01-16 09:17:43,138][DEBUG][index.engine.robin ] [Jester] [twitter][3] Starting engine
[2012-01-16 09:17:43,143][DEBUG][index.shard.service ] [Jester] [twitter][1] scheduling refresher every 1s
[2012-01-16 09:17:43,151][DEBUG][index.shard.service ] [Jester] [twitter][3] scheduling refresher every 1s
[2012-01-16 09:17:43,153][DEBUG][index.shard.service ] [Jester] [twitter][3] scheduling optimizer / merger every 1s
[2012-01-16 09:17:43,153][DEBUG][index.shard.service ] [Jester] [twitter][3] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:17:43,154][TRACE][index.shard.service ] [Jester] [twitter][3] refresh with waitForOperations[false]
[2012-01-16 09:17:43,153][DEBUG][index.shard.service ] [Jester] [twitter][1] scheduling optimizer / merger every 1s
[2012-01-16 09:17:43,154][DEBUG][index.gateway ] [Jester] [twitter][3] recovery completed from local, took [19ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [16ms]
[2012-01-16 09:17:43,154][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:43,156][DEBUG][index.shard.service ] [Jester] [twitter][1] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:17:43,156][TRACE][index.shard.service ] [Jester] [twitter][1] refresh with waitForOperations[false]
[2012-01-16 09:17:43,157][DEBUG][index.gateway ] [Jester] [twitter][1] recovery completed from local, took [187ms]
index : files [0] with total_size [0b], took[9ms]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [183ms]
[2012-01-16 09:17:43,157][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:43,197][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:43,197][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:43,197][TRACE][cluster.service ] [Jester] cluster state updated:
version [4], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,201][TRACE][indices.cluster ] [Jester] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,242][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,244][TRACE][indices.cluster ] [Jester] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,244][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,246][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:43,246][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:43,248][TRACE][cluster.service ] [Jester] cluster state updated:
version [5], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,249][TRACE][indices.cluster ] [Jester] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,249][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,267][TRACE][indices.cluster ] [Jester] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,267][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,268][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:43,449][TRACE][indices.recovery ] [Jester] [twitter][1] starting recovery to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], mark_as_relocated false
[2012-01-16 09:17:43,454][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase1] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:17:43,454][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase1] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:17:43,519][TRACE][indices.recovery ] [Jester] [twitter][3] starting recovery to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], mark_as_relocated false
[2012-01-16 09:17:43,520][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase1] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:17:43,520][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase1] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:17:43,524][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase1] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: took [70ms]
[2012-01-16 09:17:43,525][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase2] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:17:43,556][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:43,556][TRACE][cluster.service ] [Jester] cluster state updated:
version [6], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,558][DEBUG][indices.cluster ] [Jester] [twitter][0] creating shard
[2012-01-16 09:17:43,558][DEBUG][index.service ] [Jester] [twitter] creating shard_id [0]
[2012-01-16 09:17:43,558][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase1] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: took [31ms]
[2012-01-16 09:17:43,572][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase2] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:17:43,593][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase2] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: took [68ms]
[2012-01-16 09:17:43,593][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase3] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:17:43,637][TRACE][indices.recovery ] [Jester] [twitter][1] recovery [phase3] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: took [44ms]
[2012-01-16 09:17:43,645][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase2] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: took [72ms]
[2012-01-16 09:17:43,645][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase3] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: sending transaction log operations
[2012-01-16 09:17:43,649][TRACE][indices.recovery ] [Jester] [twitter][3] recovery [phase3] to [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]: took [3ms]
[2012-01-16 09:17:43,702][DEBUG][index.deletionpolicy ] [Jester] [twitter][0] Using [keep_only_last] deletion policy
[2012-01-16 09:17:43,703][DEBUG][index.merge.policy ] [Jester] [twitter][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:43,703][DEBUG][index.merge.scheduler ] [Jester] [twitter][0] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:43,731][DEBUG][index.shard.service ] [Jester] [twitter][0] state: [CREATED]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,734][TRACE][jmx ] [Jester] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,740][TRACE][jmx ] [Jester] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:43,740][TRACE][jmx ] [Jester] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,740][TRACE][jmx ] [Jester] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,747][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@5262667 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:17:43,748][TRACE][jmx ] [Jester] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,748][TRACE][jmx ] [Jester] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,749][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@864dfeb under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:17:43,749][DEBUG][index.translog ] [Jester] [twitter][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:43,750][DEBUG][indices.memory ] [Jester] recalculating shard indexing buffer (reason=created_shard[twitter][0]), total is [101.9mb] with [2] active shards, each shard set to [50.9mb]
[2012-01-16 09:17:43,753][DEBUG][index.shard.service ] [Jester] [twitter][0] state: [CREATED]->[RECOVERING], reason [from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,755][DEBUG][indices.cluster ] [Jester] [twitter][2] creating shard
[2012-01-16 09:17:43,756][DEBUG][index.service ] [Jester] [twitter] creating shard_id [2]
[2012-01-16 09:17:43,755][TRACE][indices.recovery ] [Jester] [twitter][0] starting recovery from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]
[2012-01-16 09:17:43,797][DEBUG][index.engine.robin ] [Jester] [twitter][0] Starting engine
[2012-01-16 09:17:43,848][DEBUG][index.shard.service ] [Jester] [twitter][0] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:17:43,848][DEBUG][index.shard.service ] [Jester] [twitter][0] scheduling refresher every 1s
[2012-01-16 09:17:43,848][DEBUG][index.shard.service ] [Jester] [twitter][0] scheduling optimizer / merger every 1s
[2012-01-16 09:17:43,851][DEBUG][indices.recovery ] [Jester] [twitter][0] recovery completed from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], took[95ms]
phase1: recovered_files [1] with total_size of [58b], took [31ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [50ms]
phase3: recovered [0] transaction log operations, took [2ms]
[2012-01-16 09:17:43,853][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:43,869][DEBUG][index.deletionpolicy ] [Jester] [twitter][2] Using [keep_only_last] deletion policy
[2012-01-16 09:17:43,869][DEBUG][index.merge.policy ] [Jester] [twitter][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:43,870][DEBUG][index.merge.scheduler ] [Jester] [twitter][2] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:43,870][DEBUG][index.shard.service ] [Jester] [twitter][2] state: [CREATED]
[2012-01-16 09:17:43,874][TRACE][jmx ] [Jester] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,874][TRACE][jmx ] [Jester] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,874][TRACE][jmx ] [Jester] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,887][TRACE][jmx ] [Jester] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,888][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@63280c85 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:17:43,894][TRACE][jmx ] [Jester] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,894][TRACE][jmx ] [Jester] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,894][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@3fc2e163 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:17:43,894][DEBUG][index.translog ] [Jester] [twitter][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:43,895][DEBUG][indices.memory ] [Jester] recalculating shard indexing buffer (reason=created_shard[twitter][2]), total is [101.9mb] with [3] active shards, each shard set to [33.9mb]
[2012-01-16 09:17:43,895][DEBUG][index.shard.service ] [Jester] [twitter][2] state: [CREATED]->[RECOVERING], reason [from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,904][TRACE][indices.recovery ] [Jester] [twitter][2] starting recovery from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]
[2012-01-16 09:17:43,911][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:43,912][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:43,914][TRACE][cluster.service ] [Jester] cluster state updated:
version [7], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,919][TRACE][indices.cluster ] [Jester] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,920][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,923][DEBUG][index.engine.robin ] [Jester] [twitter][2] Starting engine
[2012-01-16 09:17:43,953][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:43,953][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:43,954][TRACE][cluster.service ] [Jester] cluster state updated:
version [8], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,975][TRACE][indices.cluster ] [Jester] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,984][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,995][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:43,995][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:43,995][TRACE][cluster.service ] [Jester] cluster state updated:
version [9], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:17:44,007][DEBUG][indices.cluster ] [Jester] [twitter][4] creating shard
[2012-01-16 09:17:44,008][DEBUG][index.service ] [Jester] [twitter] creating shard_id [4]
[2012-01-16 09:17:44,151][DEBUG][index.deletionpolicy ] [Jester] [twitter][4] Using [keep_only_last] deletion policy
[2012-01-16 09:17:44,152][DEBUG][index.merge.policy ] [Jester] [twitter][4] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:44,152][DEBUG][index.merge.scheduler ] [Jester] [twitter][4] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:44,153][DEBUG][index.shard.service ] [Jester] [twitter][4] state: [CREATED]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:44,156][TRACE][jmx ] [Jester] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:44,157][TRACE][jmx ] [Jester] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:44,157][TRACE][jmx ] [Jester] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:44,159][TRACE][jmx ] [Jester] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:44,159][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@52c20893 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:17:44,161][TRACE][jmx ] [Jester] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:44,166][TRACE][jmx ] [Jester] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:44,166][TRACE][jmx ] [Jester] Registered org.elasticsearch.jmx.ResourceDMBean@d054f93 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:17:44,166][DEBUG][index.translog ] [Jester] [twitter][4] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:44,170][DEBUG][indices.memory ] [Jester] recalculating shard indexing buffer (reason=created_shard[twitter][4]), total is [101.9mb] with [4] active shards, each shard set to [25.4mb]
[2012-01-16 09:17:44,170][DEBUG][index.shard.service ] [Jester] [twitter][4] state: [CREATED]->[RECOVERING], reason [from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:44,171][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
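
The three "recalculating shard indexing buffer" lines in this session (2, then 3, then 4 active shards) come out as 50.9mb, 33.9mb and 25.4mb, which is simply the 101.9mb node budget split evenly per active shard and kept within the per-shard min/max (4mb / 512mb) the nodes report at startup. A rough Python sketch of that arithmetic (not from the gist; the one-decimal truncation just matches how the figures are printed in these lines):

import math

def per_shard_buffer_mb(active_shards, total_mb=101.9, min_mb=4.0, max_mb=512.0):
    """Even split of the node-wide indexing buffer, clamped per shard."""
    share = max(min_mb, min(max_mb, total_mb / active_shards))
    return math.floor(share * 10) / 10  # one-decimal truncation, matching the log output

if __name__ == "__main__":
    for shards in (2, 3, 4):
        print(f"{shards} active shards -> {per_shard_buffer_mb(shards)}mb")
        # prints 50.9mb, 33.9mb, 25.4mb, reproducing the logged values
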
[2012-01-16 09:17:44,171][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:44,172][TRACE][cluster.service ] [Jester] cluster state updated:
version [10], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:17:44,176][DEBUG][indices.cluster ] [Jester] [twitter] adding mapping [tweet], source [{"tweet":{"properties":{"message":{"type":"string"},"post_date":{"type":"date","format":"dateOptionalTime"},"user":{"type":"string"}}}}]
[2012-01-16 09:17:44,171][TRACE][indices.recovery ] [Jester] [twitter][4] starting recovery from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]
[2012-01-16 09:17:44,286][DEBUG][index.engine.robin ] [Jester] [twitter][4] Starting engine
[2012-01-16 09:17:44,299][DEBUG][index.shard.service ] [Jester] [twitter][4] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:17:44,300][DEBUG][index.shard.service ] [Jester] [twitter][4] scheduling refresher every 1s
[2012-01-16 09:17:44,300][DEBUG][index.shard.service ] [Jester] [twitter][4] scheduling optimizer / merger every 1s
[2012-01-16 09:17:44,311][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:44,312][DEBUG][indices.recovery ] [Jester] [twitter][4] recovery completed from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], took[120ms]
phase1: recovered_files [1] with total_size of [58b], took [83ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [11ms]
phase3: recovered [0] transaction log operations, took [13ms]
[2012-01-16 09:17:44,312][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:44,334][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:44,335][TRACE][cluster.service ] [Jester] cluster state updated:
version [11], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
---- unassigned
[2012-01-16 09:17:44,338][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:44,503][DEBUG][index.shard.service ] [Jester] [twitter][2] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:17:44,504][DEBUG][index.shard.service ] [Jester] [twitter][2] scheduling refresher every 1s
[2012-01-16 09:17:44,504][DEBUG][index.shard.service ] [Jester] [twitter][2] scheduling optimizer / merger every 1s
[2012-01-16 09:17:44,506][DEBUG][indices.recovery ] [Jester] [twitter][2] recovery completed from [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], took[602ms]
phase1: recovered_files [1] with total_size of [58b], took [10ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [49ms]
phase3: recovered [1] transaction log operations, took [440ms]
[2012-01-16 09:17:44,506][DEBUG][cluster.action.shard ] [Jester] sending shard started for [twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:44,525][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: execute
[2012-01-16 09:17:44,525][TRACE][cluster.service ] [Jester] cluster state updated:
version [12], source [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
---- unassigned
[2012-01-16 09:17:44,527][DEBUG][cluster.service ] [Jester] processing [zen-disco-receive(from master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]])]: done applying updated cluster_state
[2012-01-16 09:17:47,487][INFO ][node ] [Jester] {0.18.7}[12085]: stopping ...
[2012-01-16 09:17:47,500][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:17:47,500][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:17:47,501][DEBUG][index.shard.service ] [Jester] [twitter][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:17:47,504][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:17:47,505][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:17:47,507][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:17:47,508][DEBUG][index.shard.service ] [Jester] [twitter][2] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:17:47,507][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:17:47,510][DEBUG][index.shard.service ] [Jester] [twitter][1] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:17:47,511][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:17:47,512][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:17:47,512][DEBUG][index.shard.service ] [Jester] [twitter][3] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:17:47,520][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:17:47,523][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:17:47,526][DEBUG][index.shard.service ] [Jester] [twitter][4] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:17:47,537][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:17:47,545][DEBUG][discovery.zen.fd ] [Jester] [master] stopping fault detection against master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], reason [zen disco stop]
[2012-01-16 09:17:47,565][TRACE][transport.netty ] [Jester] channel closed: [id: 0x32efe27b, /10.0.1.5:63057 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,570][TRACE][transport.netty ] [Jester] channel closed: [id: 0x420253af, /10.0.1.5:63058 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,570][TRACE][transport.netty ] [Jester] channel closed: [id: 0x181f327e, /10.0.1.5:63060 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,572][TRACE][transport.netty ] [Jester] channel closed: [id: 0x659adc2c, /10.0.1.5:63061 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,573][TRACE][transport.netty ] [Jester] channel closed: [id: 0x19ed00d1, /10.0.1.5:63062 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,572][TRACE][transport.netty ] [Jester] channel closed: [id: 0x26c42804, /10.0.1.5:63059 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,574][TRACE][transport.netty ] [Jester] channel closed: [id: 0x3f9ab00e, /10.0.1.5:63054 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,574][TRACE][transport.netty ] [Jester] channel closed: [id: 0x16d0a6a3, /10.0.1.5:63063 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,578][TRACE][transport.netty ] [Jester] channel closed: [id: 0x7d627b8b, /10.0.1.5:63051 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,578][TRACE][transport.netty ] [Jester] channel closed: [id: 0x449c87c1, /10.0.1.5:63055 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,580][TRACE][transport.netty ] [Jester] channel closed: [id: 0x54dbb83a, /10.0.1.5:63053 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,580][TRACE][transport.netty ] [Jester] channel closed: [id: 0x4c4936f3, /10.0.1.5:63050 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,579][TRACE][transport.netty ] [Jester] channel closed: [id: 0x06db248c, /10.0.1.5:63052 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,581][TRACE][transport.netty ] [Jester] channel closed: [id: 0x0094b318, /10.0.1.5:63056 :> /10.0.1.5:9301]
[2012-01-16 09:17:47,595][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:17:47,596][TRACE][jmx ] [Jester] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:17:47,599][INFO ][node ] [Jester] {0.18.7}[12085]: stopped
[2012-01-16 09:17:47,599][INFO ][node ] [Jester] {0.18.7}[12085]: closing ...
[2012-01-16 09:17:47,615][TRACE][node ] [Jester] Close times for each service:
StopWatch 'node_close': running time = 6ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 017% indices
00000 000% routing
00000 000% cluster
00002 033% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 017% node_cache
00000 000% script
00002 033% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:17:47,615][INFO ][node ] [Jester] {0.18.7}[12085]: closed
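
The Jester session above walks replica recovery through its three phases (phase1 segment files, phase2/phase3 translog operations) until every shard copy flips from INITIALIZING to STARTED in cluster state version [12]. A minimal way to watch the same progression from outside, assuming the 9200/9201 HTTP ports used elsewhere in these logs and the usual fields of the _cluster/health response (this helper is illustrative, not part of the gist):

import json
import time
from urllib.request import urlopen

def wait_for_green(host="localhost", port=9200, timeout=60.0):
    """Poll /_cluster/health until the cluster reports green or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urlopen(f"http://{host}:{port}/_cluster/health") as resp:
            health = json.load(resp)
        # initializing_shards / unassigned_shards mirror the INITIALIZING /
        # UNASSIGNED entries in the routing tables traced above
        print(f"status={health.get('status')} "
              f"initializing={health.get('initializing_shards')} "
              f"unassigned={health.get('unassigned_shards')}")
        if health.get("status") == "green":
            return health
        time.sleep(1)
    raise RuntimeError(f"cluster did not reach green within {timeout}s")

if __name__ == "__main__":
    wait_for_green()
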
[2012-01-16 09:17:54,089][INFO ][node ] [Chameleon] {0.18.7}[12108]: initializing ...
[2012-01-16 09:17:54,099][INFO ][plugins ] [Chameleon] loaded [], sites []
[2012-01-16 09:17:55,315][DEBUG][threadpool ] [Chameleon] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:17:55,318][DEBUG][threadpool ] [Chameleon] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:17:55,318][DEBUG][threadpool ] [Chameleon] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:17:55,319][DEBUG][threadpool ] [Chameleon] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:17:55,319][DEBUG][threadpool ] [Chameleon] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:17:55,323][DEBUG][threadpool ] [Chameleon] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:17:55,323][DEBUG][threadpool ] [Chameleon] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:17:55,335][DEBUG][transport.netty ] [Chameleon] using worker_count[4], port[9301], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:17:55,355][DEBUG][discovery.zen.ping.unicast] [Chameleon] using initial hosts [localhost:9300], with concurrent_connects [10]
[2012-01-16 09:17:55,359][DEBUG][discovery.zen ] [Chameleon] using ping.timeout [3s]
[2012-01-16 09:17:55,366][DEBUG][discovery.zen.elect ] [Chameleon] using minimum_master_nodes [-1]
[2012-01-16 09:17:55,368][DEBUG][discovery.zen.fd ] [Chameleon] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:17:55,371][DEBUG][discovery.zen.fd ] [Chameleon] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:17:55,394][DEBUG][monitor.jvm ] [Chameleon] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:17:55,903][DEBUG][monitor.os ] [Chameleon] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@34115512] with refresh_interval [1s]
[2012-01-16 09:17:55,923][DEBUG][monitor.process ] [Chameleon] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@7878529d] with refresh_interval [1s]
[2012-01-16 09:17:55,928][DEBUG][monitor.jvm ] [Chameleon] Using refresh_interval [1s]
[2012-01-16 09:17:55,928][DEBUG][monitor.network ] [Chameleon] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@3882e4f3] with refresh_interval [5s]
[2012-01-16 09:17:55,939][DEBUG][monitor.network ] [Chameleon] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:17:55,943][TRACE][monitor.network ] [Chameleon] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:23200 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:23200 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:4021526 (3.8M) TX bytes:4021526 (3.8M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2841272 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1507155 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3828193498 (3.6G) TX bytes:117948627 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:17:55,945][TRACE][env ] [Chameleon] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0 ...
[2012-01-16 09:17:56,031][DEBUG][env ] [Chameleon] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:17:56,031][TRACE][env ] [Chameleon] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7copy/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:17:56,330][DEBUG][cache.memory ] [Chameleon] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:17:56,343][DEBUG][cluster.routing.allocation.decider] [Chameleon] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:17:56,344][DEBUG][cluster.routing.allocation.decider] [Chameleon] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:17:56,344][DEBUG][cluster.routing.allocation.decider] [Chameleon] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:17:56,347][DEBUG][gateway.local ] [Chameleon] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:17:56,366][DEBUG][indices.recovery ] [Chameleon] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:17:56,553][TRACE][jmx ] [Chameleon] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:56,554][TRACE][jmx ] [Chameleon] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,554][TRACE][jmx ] [Chameleon] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:56,557][TRACE][jmx ] [Chameleon] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,558][TRACE][jmx ] [Chameleon] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:56,558][TRACE][jmx ] [Chameleon] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,558][TRACE][jmx ] [Chameleon] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,558][TRACE][jmx ] [Chameleon] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:56,559][DEBUG][http.netty ] [Chameleon] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:17:56,565][DEBUG][indices.memory ] [Chameleon] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:17:56,575][DEBUG][indices.cache.filter ] [Chameleon] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:17:56,665][INFO ][node ] [Chameleon] {0.18.7}[12108]: initialized
[2012-01-16 09:17:56,666][INFO ][node ] [Chameleon] {0.18.7}[12108]: starting ...
[2012-01-16 09:17:56,690][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:17:56,768][DEBUG][transport.netty ] [Chameleon] Bound to address [/0.0.0.0:9301]
[2012-01-16 09:17:56,771][INFO ][transport ] [Chameleon] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/10.0.1.5:9301]}
[2012-01-16 09:17:56,869][TRACE][discovery ] [Chameleon] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:17:56,897][DEBUG][transport.netty ] [Chameleon] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:17:56,898][TRACE][discovery.zen.ping.unicast] [Chameleon] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:17:56,949][TRACE][discovery.zen.ping.unicast] [Chameleon] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:17:58,375][TRACE][discovery.zen.ping.unicast] [Chameleon] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:17:58,379][TRACE][discovery.zen.ping.unicast] [Chameleon] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:17:59,878][TRACE][discovery.zen.ping.unicast] [Chameleon] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:17:59,881][TRACE][discovery.zen.ping.unicast] [Chameleon] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], cluster_name[elasticsearch]}]
[2012-01-16 09:17:59,885][DEBUG][discovery.zen ] [Chameleon] ping responses:
--> target [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:59,888][DEBUG][transport.netty ] [Chameleon] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:17:59,904][DEBUG][transport.netty ] [Chameleon] Connected to node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:59,914][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x4c61a7e6, /10.0.1.5:63083 => /10.0.1.5:9301]
[2012-01-16 09:17:59,916][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x108a9d2a, /10.0.1.5:63084 => /10.0.1.5:9301]
[2012-01-16 09:17:59,920][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x4296e599, /10.0.1.5:63085 => /10.0.1.5:9301]
[2012-01-16 09:17:59,929][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x084d6b1a, /10.0.1.5:63086 => /10.0.1.5:9301]
[2012-01-16 09:17:59,930][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x1ee99d0f, /10.0.1.5:63087 => /10.0.1.5:9301]
[2012-01-16 09:17:59,930][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x7b4653a3, /10.0.1.5:63088 => /10.0.1.5:9301]
[2012-01-16 09:17:59,931][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x461d318f, /10.0.1.5:63089 => /10.0.1.5:9301]
[2012-01-16 09:18:00,028][DEBUG][discovery.zen.fd ] [Chameleon] [master] starting fault detection against master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], reason [initial_join]
[2012-01-16 09:18:00,032][DEBUG][cluster.service ] [Chameleon] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:18:00,033][TRACE][cluster.service ] [Chameleon] cluster state updated:
version [14], source [zen-disco-join (detected master)]
nodes:
[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:18:00,039][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x273f212a, /10.0.1.5:63090 => /10.0.1.5:9301]
[2012-01-16 09:18:00,041][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x72e8e8f9, /10.0.1.5:63091 => /10.0.1.5:9301]
[2012-01-16 09:18:00,043][DEBUG][transport.netty ] [Chameleon] Connected to node [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:18:00,043][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x0e07023f, /10.0.1.5:63092 => /10.0.1.5:9301]
[2012-01-16 09:18:00,043][DEBUG][cluster.service ] [Chameleon] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:18:00,044][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x5f159e0c, /10.0.1.5:63093 => /10.0.1.5:9301]
[2012-01-16 09:18:00,044][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x043b5699, /10.0.1.5:63094 => /10.0.1.5:9301]
[2012-01-16 09:18:00,044][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x53b258fa, /10.0.1.5:63095 => /10.0.1.5:9301]
[2012-01-16 09:18:00,045][TRACE][transport.netty ] [Chameleon] channel opened: [id: 0x61efb003, /10.0.1.5:63096 => /10.0.1.5:9301]
[2012-01-16 09:18:01,034][TRACE][discovery.zen.fd ] [Chameleon] [master] [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:18:02,036][TRACE][discovery.zen.fd ] [Chameleon] [master] [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:18:03,038][TRACE][discovery.zen.fd ] [Chameleon] [master] [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:18:04,040][TRACE][discovery.zen.fd ] [Chameleon] [master] [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]] does not have us registered with it...
[2012-01-16 09:18:04,067][INFO ][node ] [Chameleon] {0.18.7}[12108]: stopping ...
[2012-01-16 09:18:04,109][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x5f159e0c, /10.0.1.5:63093 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,112][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x72e8e8f9, /10.0.1.5:63091 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,112][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x273f212a, /10.0.1.5:63090 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,114][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x53b258fa, /10.0.1.5:63095 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,115][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x0e07023f, /10.0.1.5:63092 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,115][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x043b5699, /10.0.1.5:63094 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,115][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x61efb003, /10.0.1.5:63096 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,116][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x084d6b1a, /10.0.1.5:63086 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,116][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x4296e599, /10.0.1.5:63085 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,116][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x7b4653a3, /10.0.1.5:63088 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,119][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x4c61a7e6, /10.0.1.5:63083 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,119][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x108a9d2a, /10.0.1.5:63084 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,120][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x461d318f, /10.0.1.5:63089 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,119][TRACE][transport.netty ] [Chameleon] channel closed: [id: 0x1ee99d0f, /10.0.1.5:63087 :> /10.0.1.5:9301]
[2012-01-16 09:18:04,125][INFO ][node ] [Chameleon] {0.18.7}[12108]: stopped
[2012-01-16 09:18:04,125][INFO ][node ] [Chameleon] {0.18.7}[12108]: closing ...
[2012-01-16 09:18:04,131][DEBUG][discovery.zen.fd ] [Chameleon] [master] stopping fault detection against master [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]], reason [zen disco stop]
[2012-01-16 09:18:04,143][TRACE][node ] [Chameleon] Close times for each service:
StopWatch 'node_close': running time = 7ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 014% indices
00000 000% routing
00000 000% cluster
00003 043% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 014% node_cache
00000 000% script
00002 029% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:18:04,145][INFO ][node ] [Chameleon] {0.18.7}[12108]: closed
[2012-01-16 09:02:13,817][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: initializing ...
[2012-01-16 09:02:13,860][INFO ][plugins ] [Topolov, Yuri] loaded [], sites []
[2012-01-16 09:02:20,625][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: initialized
[2012-01-16 09:02:20,625][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: starting ...
[2012-01-16 09:02:20,876][INFO ][transport ] [Topolov, Yuri] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:02:25,072][INFO ][cluster.service ] [Topolov, Yuri] new_master [Topolov, Yuri][p00_52YWQ_i3uOs9MrkvwA][inet[/10.0.1.5:9300]], reason: zen-disco-join (elected_as_master)
[2012-01-16 09:02:25,111][INFO ][discovery ] [Topolov, Yuri] elasticsearch/p00_52YWQ_i3uOs9MrkvwA
[2012-01-16 09:02:25,155][INFO ][http ] [Topolov, Yuri] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:02:25,157][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: started
[2012-01-16 09:02:25,245][INFO ][gateway ] [Topolov, Yuri] recovered [0] indices into cluster_state
[2012-01-16 09:03:59,713][INFO ][cluster.service ] [Topolov, Yuri] added {[Match][WmAM68HkTpCsqpRNHyaF0Q][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Match][WmAM68HkTpCsqpRNHyaF0Q][inet[/10.0.1.5:9301]]])
[2012-01-16 09:04:07,424][INFO ][cluster.service ] [Topolov, Yuri] removed {[Match][WmAM68HkTpCsqpRNHyaF0Q][inet[/10.0.1.5:9301]],}, reason: zen-disco-node_left([Match][WmAM68HkTpCsqpRNHyaF0Q][inet[/10.0.1.5:9301]])
[2012-01-16 09:04:19,378][INFO ][cluster.service ] [Topolov, Yuri] added {[Aralune][itHyqcQ_RgGh3fy-zO3G6g][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Aralune][itHyqcQ_RgGh3fy-zO3G6g][inet[/10.0.1.5:9301]]])
[2012-01-16 09:04:24,344][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: stopping ...
[2012-01-16 09:04:24,379][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: stopped
[2012-01-16 09:04:24,379][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: closing ...
[2012-01-16 09:04:24,483][INFO ][node ] [Topolov, Yuri] {0.18.7}[11592]: closed
[2012-01-16 09:04:29,301][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: initializing ...
[2012-01-16 09:04:29,310][INFO ][plugins ] [Nth Man: the Ultimate Ninja] loaded [], sites []
[2012-01-16 09:04:31,783][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: initialized
[2012-01-16 09:04:31,784][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: starting ...
[2012-01-16 09:04:31,900][INFO ][transport ] [Nth Man: the Ultimate Ninja] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:04:35,050][INFO ][cluster.service ] [Nth Man: the Ultimate Ninja] detected_master [Aralune][itHyqcQ_RgGh3fy-zO3G6g][inet[/10.0.1.5:9301]], added {[Aralune][itHyqcQ_RgGh3fy-zO3G6g][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(from master [[Aralune][itHyqcQ_RgGh3fy-zO3G6g][inet[/10.0.1.5:9301]]])
[2012-01-16 09:04:35,052][INFO ][discovery ] [Nth Man: the Ultimate Ninja] elasticsearch/BItSUFFaQzy94R7Z93vhbA
[2012-01-16 09:04:35,062][INFO ][http ] [Nth Man: the Ultimate Ninja] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:04:35,063][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: started
[2012-01-16 09:04:49,843][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: stopping ...
[2012-01-16 09:04:49,897][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: stopped
[2012-01-16 09:04:49,898][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: closing ...
[2012-01-16 09:04:49,919][INFO ][node ] [Nth Man: the Ultimate Ninja] {0.18.7}[11674]: closed
[2012-01-16 09:05:26,962][INFO ][node ] [Sabra] {0.18.7}[11714]: initializing ...
[2012-01-16 09:05:26,971][INFO ][plugins ] [Sabra] loaded [], sites []
[2012-01-16 09:05:28,180][DEBUG][threadpool ] [Sabra] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:05:28,184][DEBUG][threadpool ] [Sabra] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:05:28,184][DEBUG][threadpool ] [Sabra] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:05:28,184][DEBUG][threadpool ] [Sabra] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:05:28,185][DEBUG][threadpool ] [Sabra] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:05:28,188][DEBUG][threadpool ] [Sabra] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:05:28,189][DEBUG][threadpool ] [Sabra] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:05:28,201][DEBUG][transport.netty ] [Sabra] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:05:28,219][DEBUG][discovery.zen.ping.unicast] [Sabra] using initial hosts [localhost:9301], with concurrent_connects [10]
[2012-01-16 09:05:28,224][DEBUG][discovery.zen ] [Sabra] using ping.timeout [3s]
[2012-01-16 09:05:28,233][DEBUG][discovery.zen.elect ] [Sabra] using minimum_master_nodes [-1]
[2012-01-16 09:05:28,234][DEBUG][discovery.zen.fd ] [Sabra] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:05:28,238][DEBUG][discovery.zen.fd ] [Sabra] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:05:28,260][DEBUG][monitor.jvm ] [Sabra] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:05:28,770][DEBUG][monitor.os ] [Sabra] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@326147d9] with refresh_interval [1s]
[2012-01-16 09:05:28,775][DEBUG][monitor.process ] [Sabra] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@355c6c8d] with refresh_interval [1s]
[2012-01-16 09:05:28,779][DEBUG][monitor.jvm ] [Sabra] Using refresh_interval [1s]
[2012-01-16 09:05:28,779][DEBUG][monitor.network ] [Sabra] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@4ddf3d59] with refresh_interval [5s]
[2012-01-16 09:05:28,807][DEBUG][monitor.network ] [Sabra] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:05:28,811][TRACE][monitor.network ] [Sabra] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:16553 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:16553 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3581433 (3.4M) TX bytes:3581433 (3.4M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833759 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502709 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818223156 (3.6G) TX bytes:117488532 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:05:28,813][TRACE][env ] [Sabra] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:05:28,914][DEBUG][env ] [Sabra] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:05:28,915][TRACE][env ] [Sabra] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:05:29,319][DEBUG][cache.memory ] [Sabra] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:05:29,334][DEBUG][cluster.routing.allocation.decider] [Sabra] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:05:29,335][DEBUG][cluster.routing.allocation.decider] [Sabra] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:05:29,335][DEBUG][cluster.routing.allocation.decider] [Sabra] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:05:29,338][DEBUG][gateway.local ] [Sabra] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:05:29,369][DEBUG][indices.recovery ] [Sabra] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:05:29,558][TRACE][jmx ] [Sabra] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:05:29,558][TRACE][jmx ] [Sabra] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,559][TRACE][jmx ] [Sabra] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,565][TRACE][jmx ] [Sabra] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:29,565][TRACE][jmx ] [Sabra] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,566][TRACE][jmx ] [Sabra] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,567][TRACE][jmx ] [Sabra] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:29,568][DEBUG][http.netty ] [Sabra] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:05:29,574][DEBUG][indices.memory ] [Sabra] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:05:29,584][DEBUG][indices.cache.filter ] [Sabra] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:05:29,674][INFO ][node ] [Sabra] {0.18.7}[11714]: initialized
[2012-01-16 09:05:29,675][INFO ][node ] [Sabra] {0.18.7}[11714]: starting ...
[2012-01-16 09:05:29,724][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:05:29,812][DEBUG][transport.netty ] [Sabra] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:05:29,815][INFO ][transport ] [Sabra] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:05:29,963][TRACE][discovery ] [Sabra] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:05:29,993][TRACE][transport.netty ] [Sabra] (Ignoring) Exception caught on netty layer [[id: 0x15e0a283]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:05:29,996][TRACE][discovery.zen.ping.unicast] [Sabra] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:05:31,467][TRACE][discovery.zen.ping.unicast] [Sabra] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:05:31,467][TRACE][transport.netty ] [Sabra] (Ignoring) Exception caught on netty layer [[id: 0x6b0cc9b4]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:05:32,970][TRACE][transport.netty ] [Sabra] (Ignoring) Exception caught on netty layer [[id: 0x5d4fa79d]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:05:32,971][TRACE][discovery.zen.ping.unicast] [Sabra] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:05:32,980][DEBUG][discovery.zen ] [Sabra] ping responses: {none}
[2012-01-16 09:05:32,983][DEBUG][cluster.service ] [Sabra] processing [zen-disco-join (elected_as_master)]: execute
[2012-01-16 09:05:32,987][TRACE][cluster.service ] [Sabra] cluster state updated:
version [1], source [zen-disco-join (elected_as_master)]
nodes:
[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:32,989][INFO ][cluster.service ] [Sabra] new_master [Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]], reason: zen-disco-join (elected_as_master)
[2012-01-16 09:05:32,997][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x7326aaca, /10.0.1.5:62660 => /10.0.1.5:9300]
[2012-01-16 09:05:33,002][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x706c08b2, /10.0.1.5:62661 => /10.0.1.5:9300]
[2012-01-16 09:05:33,004][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x24be0446, /10.0.1.5:62662 => /10.0.1.5:9300]
[2012-01-16 09:05:33,011][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x39bde3d2, /10.0.1.5:62663 => /10.0.1.5:9300]
[2012-01-16 09:05:33,012][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x123e1d25, /10.0.1.5:62664 => /10.0.1.5:9300]
[2012-01-16 09:05:33,012][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x2beb717e, /10.0.1.5:62665 => /10.0.1.5:9300]
[2012-01-16 09:05:33,028][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x6e681db8, /10.0.1.5:62666 => /10.0.1.5:9300]
[2012-01-16 09:05:33,034][DEBUG][transport.netty ] [Sabra] Connected to node [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:33,040][DEBUG][cluster.service ] [Sabra] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[2012-01-16 09:05:33,040][TRACE][discovery ] [Sabra] initial state set from discovery
[2012-01-16 09:05:33,040][INFO ][discovery ] [Sabra] elasticsearch/XxMY7Zu9SCiatLnTofGobA
[2012-01-16 09:05:33,041][DEBUG][river.cluster ] [Sabra] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:05:33,041][TRACE][gateway.local ] [Sabra] [find_latest_state]: processing [metadata-1]
[2012-01-16 09:05:33,041][DEBUG][river.cluster ] [Sabra] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:05:33,064][DEBUG][gateway.local ] [Sabra] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0/_state/metadata-1]
[2012-01-16 09:05:33,064][TRACE][gateway.local ] [Sabra] [find_latest_state]: processing [metadata-1]
[2012-01-16 09:05:33,065][DEBUG][gateway.local ] [Sabra] [find_latest_state]: no started shards loaded
[2012-01-16 09:05:33,075][DEBUG][gateway.local ] [Sabra] elected state from [[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:33,077][DEBUG][cluster.service ] [Sabra] processing [local-gateway-elected-state]: execute
[2012-01-16 09:05:33,080][TRACE][cluster.service ] [Sabra] cluster state updated:
version [2], source [local-gateway-elected-state]
nodes:
[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:33,081][DEBUG][river.cluster ] [Sabra] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:05:33,081][DEBUG][river.cluster ] [Sabra] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:05:33,088][INFO ][http ] [Sabra] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:05:33,089][TRACE][jmx ] [Sabra] Registered org.elasticsearch.jmx.ResourceDMBean@2f368c5d under org.elasticsearch:service=transport
[2012-01-16 09:05:33,089][TRACE][jmx ] [Sabra] Registered org.elasticsearch.jmx.ResourceDMBean@263945e2 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:05:33,097][INFO ][node ] [Sabra] {0.18.7}[11714]: started
[2012-01-16 09:05:33,107][INFO ][gateway ] [Sabra] recovered [0] indices into cluster_state
[2012-01-16 09:05:33,107][DEBUG][cluster.service ] [Sabra] processing [local-gateway-elected-state]: done applying updated cluster_state
[2012-01-16 09:05:38,858][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x219a6087, /127.0.0.1:62669 => /127.0.0.1:9300]
[2012-01-16 09:05:41,859][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x219a6087, /127.0.0.1:62669 :> /127.0.0.1:9300]
[2012-01-16 09:05:41,860][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x0e07023f, /10.0.1.5:62670 => /10.0.1.5:9300]
[2012-01-16 09:05:41,861][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x6e247d4a, /10.0.1.5:62671 => /10.0.1.5:9300]
[2012-01-16 09:05:41,863][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x043b5699, /10.0.1.5:62672 => /10.0.1.5:9300]
[2012-01-16 09:05:41,865][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x53b258fa, /10.0.1.5:62673 => /10.0.1.5:9300]
[2012-01-16 09:05:41,865][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x61efb003, /10.0.1.5:62674 => /10.0.1.5:9300]
[2012-01-16 09:05:41,866][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x3f0cc730, /10.0.1.5:62675 => /10.0.1.5:9300]
[2012-01-16 09:05:41,866][TRACE][transport.netty ] [Sabra] channel opened: [id: 0x5fe940a6, /10.0.1.5:62676 => /10.0.1.5:9300]
[2012-01-16 09:05:41,901][DEBUG][transport.netty ] [Sabra] Connected to node [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:05:41,909][DEBUG][cluster.service ] [Sabra] processing [zen-disco-receive(join from node[[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:05:41,909][TRACE][cluster.service ] [Sabra] cluster state updated:
version [3], source [zen-disco-receive(join from node[[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])]
nodes:
[Sabra][XxMY7Zu9SCiatLnTofGobA][inet[/10.0.1.5:9300]], local, master
[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:41,912][INFO ][cluster.service ] [Sabra] added {[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])
[2012-01-16 09:05:41,913][DEBUG][river.cluster ] [Sabra] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:05:41,913][DEBUG][river.cluster ] [Sabra] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:05:41,916][DEBUG][cluster.service ] [Sabra] processing [zen-disco-receive(join from node[[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:05:43,038][DEBUG][cluster.service ] [Sabra] processing [routing-table-updater]: execute
[2012-01-16 09:05:43,038][DEBUG][cluster.service ] [Sabra] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:05:50,344][INFO ][node ] [Sabra] {0.18.7}[11714]: stopping ...
[2012-01-16 09:05:50,382][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x7326aaca, /10.0.1.5:62660 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,383][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x706c08b2, /10.0.1.5:62661 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,386][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x24be0446, /10.0.1.5:62662 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,387][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x123e1d25, /10.0.1.5:62664 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,387][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x2beb717e, /10.0.1.5:62665 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,388][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x6e681db8, /10.0.1.5:62666 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,419][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x0e07023f, /10.0.1.5:62670 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,464][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x39bde3d2, /10.0.1.5:62663 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,465][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x61efb003, /10.0.1.5:62674 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,467][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x6e247d4a, /10.0.1.5:62671 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,481][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x043b5699, /10.0.1.5:62672 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,487][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x53b258fa, /10.0.1.5:62673 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,490][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x5fe940a6, /10.0.1.5:62676 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,491][TRACE][transport.netty ] [Sabra] channel closed: [id: 0x3f0cc730, /10.0.1.5:62675 :> /10.0.1.5:9300]
[2012-01-16 09:05:50,528][TRACE][jmx ] [Sabra] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:05:50,528][TRACE][jmx ] [Sabra] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:05:50,529][INFO ][node ] [Sabra] {0.18.7}[11714]: stopped
[2012-01-16 09:05:50,529][INFO ][node ] [Sabra] {0.18.7}[11714]: closing ...
[2012-01-16 09:05:50,558][TRACE][node ] [Sabra] Close times for each service:
StopWatch 'node_close': running time = 21ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 005% indices
00000 000% routing
00000 000% cluster
00001 005% discovery
00002 010% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 005% node_cache
00000 000% script
00015 071% thread_pool
00001 005% thread_pool_force_shutdown
[2012-01-16 09:05:50,560][INFO ][node ] [Sabra] {0.18.7}[11714]: closed
[2012-01-16 09:05:53,536][INFO ][node ] [Magma] {0.18.7}[11746]: initializing ...
[2012-01-16 09:05:53,544][INFO ][plugins ] [Magma] loaded [], sites []
[2012-01-16 09:05:54,784][DEBUG][threadpool ] [Magma] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:05:54,787][DEBUG][threadpool ] [Magma] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:05:54,787][DEBUG][threadpool ] [Magma] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:05:54,787][DEBUG][threadpool ] [Magma] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:05:54,788][DEBUG][threadpool ] [Magma] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:05:54,791][DEBUG][threadpool ] [Magma] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:05:54,791][DEBUG][threadpool ] [Magma] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:05:54,804][DEBUG][transport.netty ] [Magma] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:05:54,824][DEBUG][discovery.zen.ping.unicast] [Magma] using initial hosts [localhost:9301], with concurrent_connects [10]
[2012-01-16 09:05:54,829][DEBUG][discovery.zen ] [Magma] using ping.timeout [3s]
[2012-01-16 09:05:54,835][DEBUG][discovery.zen.elect ] [Magma] using minimum_master_nodes [-1]
[2012-01-16 09:05:54,837][DEBUG][discovery.zen.fd ] [Magma] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:05:54,841][DEBUG][discovery.zen.fd ] [Magma] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:05:54,866][DEBUG][monitor.jvm ] [Magma] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:05:55,376][DEBUG][monitor.os ] [Magma] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@a51064e] with refresh_interval [1s]
[2012-01-16 09:05:55,381][DEBUG][monitor.process ] [Magma] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@7463e563] with refresh_interval [1s]
[2012-01-16 09:05:55,385][DEBUG][monitor.jvm ] [Magma] Using refresh_interval [1s]
[2012-01-16 09:05:55,385][DEBUG][monitor.network ] [Magma] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@40c07527] with refresh_interval [5s]
[2012-01-16 09:05:55,395][DEBUG][monitor.network ] [Magma] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:05:55,412][TRACE][monitor.network ] [Magma] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:16871 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:16871 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3600452 (3.4M) TX bytes:3600452 (3.4M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833763 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502718 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818223810 (3.6G) TX bytes:117489935 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:05:55,418][TRACE][env ] [Magma] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:05:55,445][DEBUG][env ] [Magma] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:05:55,446][TRACE][env ] [Magma] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:05:55,746][DEBUG][cache.memory ] [Magma] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:05:55,759][DEBUG][cluster.routing.allocation.decider] [Magma] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:05:55,760][DEBUG][cluster.routing.allocation.decider] [Magma] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:05:55,760][DEBUG][cluster.routing.allocation.decider] [Magma] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:05:55,764][DEBUG][gateway.local ] [Magma] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:05:55,782][DEBUG][indices.recovery ] [Magma] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:05:55,984][TRACE][jmx ] [Magma] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:05:55,984][TRACE][jmx ] [Magma] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,985][TRACE][jmx ] [Magma] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,987][TRACE][jmx ] [Magma] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:55,988][TRACE][jmx ] [Magma] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:05:55,988][TRACE][jmx ] [Magma] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,988][TRACE][jmx ] [Magma] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:05:55,989][TRACE][jmx ] [Magma] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,989][TRACE][jmx ] [Magma] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:55,989][TRACE][jmx ] [Magma] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,989][TRACE][jmx ] [Magma] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:05:55,990][TRACE][jmx ] [Magma] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,990][TRACE][jmx ] [Magma] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,990][TRACE][jmx ] [Magma] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:05:55,991][DEBUG][http.netty ] [Magma] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:05:55,999][DEBUG][indices.memory ] [Magma] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:05:56,010][DEBUG][indices.cache.filter ] [Magma] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:05:56,101][INFO ][node ] [Magma] {0.18.7}[11746]: initialized
[2012-01-16 09:05:56,102][INFO ][node ] [Magma] {0.18.7}[11746]: starting ...
[2012-01-16 09:05:56,136][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:05:56,230][DEBUG][transport.netty ] [Magma] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:05:56,232][INFO ][transport ] [Magma] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:05:56,310][TRACE][discovery ] [Magma] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:05:56,337][DEBUG][transport.netty ] [Magma] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:05:56,338][TRACE][discovery.zen.ping.unicast] [Magma] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:05:56,389][TRACE][discovery.zen.ping.unicast] [Magma] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:05:57,815][TRACE][discovery.zen.ping.unicast] [Magma] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:05:57,819][TRACE][discovery.zen.ping.unicast] [Magma] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:05:59,318][TRACE][discovery.zen.ping.unicast] [Magma] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:05:59,321][TRACE][discovery.zen.ping.unicast] [Magma] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:05:59,322][DEBUG][discovery.zen ] [Magma] ping responses:
--> target [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:05:59,324][DEBUG][transport.netty ] [Magma] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:05:59,345][DEBUG][transport.netty ] [Magma] Connected to node [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:05:59,356][TRACE][transport.netty ] [Magma] channel opened: [id: 0x7b4653a3, /10.0.1.5:62701 => /10.0.1.5:9300]
[2012-01-16 09:05:59,358][TRACE][transport.netty ] [Magma] channel opened: [id: 0x16fa21a4, /10.0.1.5:62702 => /10.0.1.5:9300]
[2012-01-16 09:05:59,359][TRACE][transport.netty ] [Magma] channel opened: [id: 0x263945e2, /10.0.1.5:62703 => /10.0.1.5:9300]
[2012-01-16 09:05:59,365][TRACE][transport.netty ] [Magma] channel opened: [id: 0x56a9509d, /10.0.1.5:62704 => /10.0.1.5:9300]
[2012-01-16 09:05:59,367][TRACE][transport.netty ] [Magma] channel opened: [id: 0x796528a2, /10.0.1.5:62705 => /10.0.1.5:9300]
[2012-01-16 09:05:59,367][TRACE][transport.netty ] [Magma] channel opened: [id: 0x05945a5a, /10.0.1.5:62706 => /10.0.1.5:9300]
[2012-01-16 09:05:59,368][TRACE][transport.netty ] [Magma] channel opened: [id: 0x4eb7cd92, /10.0.1.5:62707 => /10.0.1.5:9300]
[2012-01-16 09:05:59,374][DEBUG][discovery.zen ] [Magma] got a new state from master node, though we are already trying to rejoin the cluster
[2012-01-16 09:05:59,376][DEBUG][discovery.zen.fd ] [Magma] [master] starting fault detection against master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], reason [initial_join]
[2012-01-16 09:05:59,380][DEBUG][cluster.service ] [Magma] processing [zen-disco-receive(from master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:05:59,381][TRACE][cluster.service ] [Magma] cluster state updated:
version [5], source [zen-disco-receive(from master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])]
nodes:
[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]], local
[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:05:59,383][INFO ][cluster.service ] [Magma] detected_master [Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]], added {[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(from master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])
[2012-01-16 09:05:59,384][TRACE][transport.netty ] [Magma] channel opened: [id: 0x083ba4f1, /10.0.1.5:62708 => /10.0.1.5:9300]
[2012-01-16 09:05:59,385][TRACE][transport.netty ] [Magma] channel opened: [id: 0x4b25ee49, /10.0.1.5:62709 => /10.0.1.5:9300]
[2012-01-16 09:05:59,386][TRACE][transport.netty ] [Magma] channel opened: [id: 0x4553f141, /10.0.1.5:62710 => /10.0.1.5:9300]
[2012-01-16 09:05:59,388][DEBUG][transport.netty ] [Magma] Connected to node [[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:05:59,388][TRACE][transport.netty ] [Magma] channel opened: [id: 0x72e8e8f9, /10.0.1.5:62711 => /10.0.1.5:9300]
[2012-01-16 09:05:59,389][TRACE][transport.netty ] [Magma] channel opened: [id: 0x19176e5f, /10.0.1.5:62712 => /10.0.1.5:9300]
[2012-01-16 09:05:59,389][TRACE][transport.netty ] [Magma] channel opened: [id: 0x514f2bd7, /10.0.1.5:62713 => /10.0.1.5:9300]
[2012-01-16 09:05:59,390][TRACE][transport.netty ] [Magma] channel opened: [id: 0x1be2f6b0, /10.0.1.5:62714 => /10.0.1.5:9300]
[2012-01-16 09:05:59,390][DEBUG][cluster.service ] [Magma] processing [zen-disco-receive(from master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:05:59,391][DEBUG][cluster.service ] [Magma] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:05:59,391][DEBUG][cluster.service ] [Magma] got old cluster state [4<5] from source [zen-disco-join (detected master)], ignoring
[2012-01-16 09:05:59,391][TRACE][discovery ] [Magma] initial state set from discovery
[2012-01-16 09:05:59,392][INFO ][discovery ] [Magma] elasticsearch/CTg8dCATQQKFFF-7xZchYA
[2012-01-16 09:05:59,392][TRACE][gateway.local ] [Magma] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:05:59,398][DEBUG][gateway.local ] [Magma] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0/_state/metadata-2]
[2012-01-16 09:05:59,398][TRACE][gateway.local ] [Magma] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:05:59,398][DEBUG][gateway.local ] [Magma] [find_latest_state]: no started shards loaded
[2012-01-16 09:05:59,406][INFO ][http ] [Magma] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:05:59,407][TRACE][jmx ] [Magma] Registered org.elasticsearch.jmx.ResourceDMBean@502c06b2 under org.elasticsearch:service=transport
[2012-01-16 09:05:59,407][TRACE][jmx ] [Magma] Registered org.elasticsearch.jmx.ResourceDMBean@7a6bb93c under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:05:59,407][INFO ][node ] [Magma] {0.18.7}[11746]: started
[2012-01-16 09:06:05,953][INFO ][discovery.zen ] [Magma] master_left [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], reason [shut_down]
[2012-01-16 09:06:05,954][DEBUG][cluster.service ] [Magma] processing [zen-disco-master_failed ([Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]])]: execute
[2012-01-16 09:06:05,955][DEBUG][discovery.zen.fd ] [Magma] [master] stopping fault detection against master [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:06:05,955][TRACE][cluster.service ] [Magma] cluster state updated:
version [5], source [zen-disco-master_failed ([Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]])]
nodes:
[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:05,955][INFO ][cluster.service ] [Magma] master {new [Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]], previous [Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]}, removed {[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]],}, reason: zen-disco-master_failed ([Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]])
[2012-01-16 09:06:05,960][DEBUG][river.cluster ] [Magma] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:05,961][DEBUG][river.cluster ] [Magma] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:05,963][TRACE][transport.netty ] [Magma] channel closed: [id: 0x05945a5a, /10.0.1.5:62706 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,963][TRACE][transport.netty ] [Magma] channel closed: [id: 0x4eb7cd92, /10.0.1.5:62707 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,969][TRACE][transport.netty ] [Magma] channel closed: [id: 0x56a9509d, /10.0.1.5:62704 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,970][TRACE][transport.netty ] [Magma] channel closed: [id: 0x796528a2, /10.0.1.5:62705 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,972][DEBUG][cluster.service ] [Magma] processing [zen-disco-master_failed ([Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]])]: done applying updated cluster_state
[2012-01-16 09:06:05,972][DEBUG][cluster.service ] [Magma] processing [routing-table-updater]: execute
[2012-01-16 09:06:05,975][DEBUG][cluster.service ] [Magma] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:05,978][TRACE][transport.netty ] [Magma] channel closed: [id: 0x263945e2, /10.0.1.5:62703 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,993][TRACE][transport.netty ] [Magma] channel closed: [id: 0x7b4653a3, /10.0.1.5:62701 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,993][TRACE][transport.netty ] [Magma] channel closed: [id: 0x16fa21a4, /10.0.1.5:62702 :> /10.0.1.5:9300]
[2012-01-16 09:06:05,996][DEBUG][transport.netty ] [Magma] Disconnected from [[Loki][phAiMOQZTZyq8wPJlgfUOg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:10,587][TRACE][transport.netty ] [Magma] channel opened: [id: 0x17510d96, /127.0.0.1:62717 => /127.0.0.1:9300]
[2012-01-16 09:06:13,581][TRACE][transport.netty ] [Magma] channel opened: [id: 0x41aef798, /10.0.1.5:62718 => /10.0.1.5:9300]
[2012-01-16 09:06:13,582][TRACE][transport.netty ] [Magma] channel closed: [id: 0x17510d96, /127.0.0.1:62717 :> /127.0.0.1:9300]
[2012-01-16 09:06:13,583][TRACE][transport.netty ] [Magma] channel opened: [id: 0x7b8353cf, /10.0.1.5:62719 => /10.0.1.5:9300]
[2012-01-16 09:06:13,586][TRACE][transport.netty ] [Magma] channel opened: [id: 0x54edd9de, /10.0.1.5:62720 => /10.0.1.5:9300]
[2012-01-16 09:06:13,586][TRACE][transport.netty ] [Magma] channel opened: [id: 0x4b7aa961, /10.0.1.5:62721 => /10.0.1.5:9300]
[2012-01-16 09:06:13,586][TRACE][transport.netty ] [Magma] channel opened: [id: 0x09dd1752, /10.0.1.5:62722 => /10.0.1.5:9300]
[2012-01-16 09:06:13,587][TRACE][transport.netty ] [Magma] channel opened: [id: 0x0043ad4a, /10.0.1.5:62723 => /10.0.1.5:9300]
[2012-01-16 09:06:13,587][TRACE][transport.netty ] [Magma] channel opened: [id: 0x10ddcd98, /10.0.1.5:62724 => /10.0.1.5:9300]
[2012-01-16 09:06:13,605][DEBUG][transport.netty ] [Magma] Connected to node [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:13,607][DEBUG][cluster.service ] [Magma] processing [zen-disco-receive(join from node[[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:06:13,607][TRACE][cluster.service ] [Magma] cluster state updated:
version [6], source [zen-disco-receive(join from node[[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])]
nodes:
[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]
[Magma][CTg8dCATQQKFFF-7xZchYA][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:13,608][INFO ][cluster.service ] [Magma] added {[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])
[2012-01-16 09:06:13,608][DEBUG][river.cluster ] [Magma] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:13,609][DEBUG][river.cluster ] [Magma] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:13,615][DEBUG][cluster.service ] [Magma] processing [zen-disco-receive(join from node[[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:06:15,958][DEBUG][cluster.service ] [Magma] processing [routing-table-updater]: execute
[2012-01-16 09:06:15,958][DEBUG][cluster.service ] [Magma] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:25,263][INFO ][node ] [Magma] {0.18.7}[11746]: stopping ...
[2012-01-16 09:06:25,308][TRACE][transport.netty ] [Magma] channel closed: [id: 0x514f2bd7, /10.0.1.5:62713 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,309][TRACE][transport.netty ] [Magma] channel closed: [id: 0x4b25ee49, /10.0.1.5:62709 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,312][TRACE][transport.netty ] [Magma] channel closed: [id: 0x4553f141, /10.0.1.5:62710 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,312][TRACE][transport.netty ] [Magma] channel closed: [id: 0x1be2f6b0, /10.0.1.5:62714 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,308][TRACE][transport.netty ] [Magma] channel closed: [id: 0x083ba4f1, /10.0.1.5:62708 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,316][TRACE][transport.netty ] [Magma] channel closed: [id: 0x19176e5f, /10.0.1.5:62712 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,316][TRACE][transport.netty ] [Magma] channel closed: [id: 0x72e8e8f9, /10.0.1.5:62711 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,321][TRACE][transport.netty ] [Magma] channel closed: [id: 0x41aef798, /10.0.1.5:62718 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,321][TRACE][transport.netty ] [Magma] channel closed: [id: 0x54edd9de, /10.0.1.5:62720 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,322][TRACE][transport.netty ] [Magma] channel closed: [id: 0x0043ad4a, /10.0.1.5:62723 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,322][TRACE][transport.netty ] [Magma] channel closed: [id: 0x09dd1752, /10.0.1.5:62722 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,322][TRACE][transport.netty ] [Magma] channel closed: [id: 0x7b8353cf, /10.0.1.5:62719 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,322][TRACE][transport.netty ] [Magma] channel closed: [id: 0x4b7aa961, /10.0.1.5:62721 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,323][TRACE][transport.netty ] [Magma] channel closed: [id: 0x10ddcd98, /10.0.1.5:62724 :> /10.0.1.5:9300]
[2012-01-16 09:06:25,327][TRACE][jmx ] [Magma] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:06:25,327][TRACE][jmx ] [Magma] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:25,327][INFO ][node ] [Magma] {0.18.7}[11746]: stopped
[2012-01-16 09:06:25,327][INFO ][node ] [Magma] {0.18.7}[11746]: closing ...
[2012-01-16 09:06:25,368][TRACE][node ] [Magma] Close times for each service:
StopWatch 'node_close': running time = 21ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 005% indices
00000 000% routing
00000 000% cluster
00001 005% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 005% node_cache
00004 019% script
00014 067% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:06:25,371][INFO ][node ] [Magma] {0.18.7}[11746]: closed
[2012-01-16 09:06:28,086][INFO ][node ] [Argus] {0.18.7}[11783]: initializing ...
[2012-01-16 09:06:28,095][INFO ][plugins ] [Argus] loaded [], sites []
[2012-01-16 09:06:29,288][DEBUG][threadpool ] [Argus] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:06:29,291][DEBUG][threadpool ] [Argus] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:06:29,291][DEBUG][threadpool ] [Argus] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:06:29,291][DEBUG][threadpool ] [Argus] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:06:29,292][DEBUG][threadpool ] [Argus] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:06:29,296][DEBUG][threadpool ] [Argus] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:06:29,296][DEBUG][threadpool ] [Argus] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:06:29,309][DEBUG][transport.netty ] [Argus] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:06:29,331][DEBUG][discovery.zen.ping.unicast] [Argus] using initial hosts [localhost:9301], with concurrent_connects [10]
[2012-01-16 09:06:29,336][DEBUG][discovery.zen ] [Argus] using ping.timeout [3s]
[2012-01-16 09:06:29,342][DEBUG][discovery.zen.elect ] [Argus] using minimum_master_nodes [-1]
[2012-01-16 09:06:29,343][DEBUG][discovery.zen.fd ] [Argus] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:06:29,347][DEBUG][discovery.zen.fd ] [Argus] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:06:29,369][DEBUG][monitor.jvm ] [Argus] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:06:29,879][DEBUG][monitor.os ] [Argus] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@5de82b72] with refresh_interval [1s]
[2012-01-16 09:06:29,884][DEBUG][monitor.process ] [Argus] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@3e9c22ff] with refresh_interval [1s]
[2012-01-16 09:06:29,888][DEBUG][monitor.jvm ] [Argus] Using refresh_interval [1s]
[2012-01-16 09:06:29,889][DEBUG][monitor.network ] [Argus] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@3ec19fbf] with refresh_interval [5s]
[2012-01-16 09:06:29,899][DEBUG][monitor.network ] [Argus] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:06:29,918][TRACE][monitor.network ] [Argus] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:17429 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:17429 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3632485 (3.5M) TX bytes:3632485 (3.5M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833766 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502722 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818224309 (3.6G) TX bytes:117490144 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:06:29,920][TRACE][env ] [Argus] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:06:29,947][DEBUG][env ] [Argus] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:06:29,948][TRACE][env ] [Argus] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.7gb, usable_space [221.4gb
[2012-01-16 09:06:30,241][DEBUG][cache.memory ] [Argus] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:06:30,254][DEBUG][cluster.routing.allocation.decider] [Argus] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:06:30,255][DEBUG][cluster.routing.allocation.decider] [Argus] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:06:30,256][DEBUG][cluster.routing.allocation.decider] [Argus] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:06:30,259][DEBUG][gateway.local ] [Argus] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:06:30,278][DEBUG][indices.recovery ] [Argus] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:06:30,472][TRACE][jmx ] [Argus] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:06:30,473][TRACE][jmx ] [Argus] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,473][TRACE][jmx ] [Argus] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:30,476][TRACE][jmx ] [Argus] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,478][TRACE][jmx ] [Argus] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:06:30,478][TRACE][jmx ] [Argus] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,479][TRACE][jmx ] [Argus] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,480][TRACE][jmx ] [Argus] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:06:30,480][DEBUG][http.netty ] [Argus] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:06:30,488][DEBUG][indices.memory ] [Argus] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:06:30,499][DEBUG][indices.cache.filter ] [Argus] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:06:30,587][INFO ][node ] [Argus] {0.18.7}[11783]: initialized
[2012-01-16 09:06:30,588][INFO ][node ] [Argus] {0.18.7}[11783]: starting ...
[2012-01-16 09:06:30,612][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:06:30,697][DEBUG][transport.netty ] [Argus] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:06:30,700][INFO ][transport ] [Argus] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:06:30,809][TRACE][discovery ] [Argus] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:06:30,838][DEBUG][transport.netty ] [Argus] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:06:30,840][TRACE][discovery.zen.ping.unicast] [Argus] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:06:30,889][TRACE][discovery.zen.ping.unicast] [Argus] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:32,314][TRACE][discovery.zen.ping.unicast] [Argus] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:06:32,328][TRACE][discovery.zen.ping.unicast] [Argus] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:33,817][TRACE][discovery.zen.ping.unicast] [Argus] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:06:33,819][TRACE][discovery.zen.ping.unicast] [Argus] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:06:33,820][DEBUG][discovery.zen ] [Argus] ping responses:
--> target [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:33,821][DEBUG][transport.netty ] [Argus] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:06:33,850][DEBUG][transport.netty ] [Argus] Connected to node [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:33,861][TRACE][transport.netty ] [Argus] channel opened: [id: 0x31923ca5, /10.0.1.5:62749 => /10.0.1.5:9300]
[2012-01-16 09:06:33,863][TRACE][transport.netty ] [Argus] channel opened: [id: 0x563b100c, /10.0.1.5:62750 => /10.0.1.5:9300]
[2012-01-16 09:06:33,869][TRACE][transport.netty ] [Argus] channel opened: [id: 0x72b398da, /10.0.1.5:62751 => /10.0.1.5:9300]
[2012-01-16 09:06:33,870][TRACE][transport.netty ] [Argus] channel opened: [id: 0x03f94a1f, /10.0.1.5:62752 => /10.0.1.5:9300]
[2012-01-16 09:06:33,872][DEBUG][discovery.zen.fd ] [Argus] [master] starting fault detection against master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], reason [initial_join]
[2012-01-16 09:06:33,881][TRACE][transport.netty ] [Argus] channel opened: [id: 0x7fb6a1c4, /10.0.1.5:62753 => /10.0.1.5:9300]
[2012-01-16 09:06:33,883][TRACE][transport.netty ] [Argus] channel opened: [id: 0x2f368c5d, /10.0.1.5:62754 => /10.0.1.5:9300]
[2012-01-16 09:06:33,884][TRACE][transport.netty ] [Argus] channel opened: [id: 0x263945e2, /10.0.1.5:62755 => /10.0.1.5:9300]
[2012-01-16 09:06:33,884][DEBUG][cluster.service ] [Argus] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:06:33,886][TRACE][cluster.service ] [Argus] cluster state updated:
version [7], source [zen-disco-join (detected master)]
nodes:
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:33,890][TRACE][transport.netty ] [Argus] channel opened: [id: 0x181f327e, /10.0.1.5:62756 => /10.0.1.5:9300]
[2012-01-16 09:06:33,893][TRACE][transport.netty ] [Argus] channel opened: [id: 0x5694fe42, /10.0.1.5:62757 => /10.0.1.5:9300]
[2012-01-16 09:06:33,897][TRACE][transport.netty ] [Argus] channel opened: [id: 0x7a6dd8e1, /10.0.1.5:62758 => /10.0.1.5:9300]
[2012-01-16 09:06:33,900][TRACE][transport.netty ] [Argus] channel opened: [id: 0x06cb6a34, /10.0.1.5:62759 => /10.0.1.5:9300]
[2012-01-16 09:06:33,901][TRACE][transport.netty ] [Argus] channel opened: [id: 0x219a6087, /10.0.1.5:62760 => /10.0.1.5:9300]
[2012-01-16 09:06:33,902][TRACE][transport.netty ] [Argus] channel opened: [id: 0x24b6a561, /10.0.1.5:62761 => /10.0.1.5:9300]
[2012-01-16 09:06:33,902][TRACE][transport.netty ] [Argus] channel opened: [id: 0x5323961b, /10.0.1.5:62762 => /10.0.1.5:9300]
[2012-01-16 09:06:33,901][DEBUG][transport.netty ] [Argus] Connected to node [[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:06:33,902][DEBUG][cluster.service ] [Argus] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:06:33,903][DEBUG][cluster.service ] [Argus] processing [zen-disco-receive(from master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:06:33,903][TRACE][cluster.service ] [Argus] cluster state updated:
version [8], source [zen-disco-receive(from master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])]
nodes:
[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], master
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:33,903][INFO ][cluster.service ] [Argus] detected_master [Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]], added {[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(from master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])
[2012-01-16 09:06:33,904][DEBUG][cluster.service ] [Argus] processing [zen-disco-receive(from master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:06:33,905][TRACE][discovery ] [Argus] initial state set from discovery
[2012-01-16 09:06:33,905][INFO ][discovery ] [Argus] elasticsearch/z8WR1TsvT3mnOeYKaINXyQ
[2012-01-16 09:06:33,906][TRACE][gateway.local ] [Argus] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:06:33,909][DEBUG][gateway.local ] [Argus] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0/_state/metadata-2]
[2012-01-16 09:06:33,910][TRACE][gateway.local ] [Argus] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:06:33,910][DEBUG][gateway.local ] [Argus] [find_latest_state]: no started shards loaded
[2012-01-16 09:06:33,925][INFO ][http ] [Argus] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:06:33,928][TRACE][jmx ] [Argus] Registered org.elasticsearch.jmx.ResourceDMBean@51b1ab1d under org.elasticsearch:service=transport
[2012-01-16 09:06:33,929][TRACE][jmx ] [Argus] Registered org.elasticsearch.jmx.ResourceDMBean@675926d1 under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:33,929][INFO ][node ] [Argus] {0.18.7}[11783]: started
[2012-01-16 09:06:40,304][INFO ][discovery.zen ] [Argus] master_left [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], reason [shut_down]
[2012-01-16 09:06:40,305][DEBUG][cluster.service ] [Argus] processing [zen-disco-master_failed ([Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]])]: execute
[2012-01-16 09:06:40,306][DEBUG][discovery.zen.fd ] [Argus] [master] stopping fault detection against master [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:06:40,306][TRACE][cluster.service ] [Argus] cluster state updated:
version [9], source [zen-disco-master_failed ([Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]])]
nodes:
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:40,306][INFO ][cluster.service ] [Argus] master {new [Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], previous [Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]}, removed {[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]],}, reason: zen-disco-master_failed ([Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]])
[2012-01-16 09:06:40,315][TRACE][transport.netty ] [Argus] channel closed: [id: 0x31923ca5, /10.0.1.5:62749 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,316][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:40,316][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:40,317][TRACE][transport.netty ] [Argus] channel closed: [id: 0x563b100c, /10.0.1.5:62750 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,318][TRACE][transport.netty ] [Argus] channel closed: [id: 0x03f94a1f, /10.0.1.5:62752 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,318][TRACE][transport.netty ] [Argus] channel closed: [id: 0x72b398da, /10.0.1.5:62751 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,318][TRACE][transport.netty ] [Argus] channel closed: [id: 0x7fb6a1c4, /10.0.1.5:62753 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,319][TRACE][transport.netty ] [Argus] channel closed: [id: 0x2f368c5d, /10.0.1.5:62754 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,320][TRACE][transport.netty ] [Argus] channel closed: [id: 0x263945e2, /10.0.1.5:62755 :> /10.0.1.5:9300]
[2012-01-16 09:06:40,333][DEBUG][transport.netty ] [Argus] Disconnected from [[Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:40,335][DEBUG][cluster.service ] [Argus] processing [zen-disco-master_failed ([Nekra][3VWDdHUNTx6Twhw_QvsNFA][inet[/10.0.1.5:9301]])]: done applying updated cluster_state
[2012-01-16 09:06:40,335][DEBUG][cluster.service ] [Argus] processing [routing-table-updater]: execute
[2012-01-16 09:06:40,337][DEBUG][cluster.service ] [Argus] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:45,410][TRACE][transport.netty ] [Argus] channel opened: [id: 0x2abbaa16, /127.0.0.1:62765 => /127.0.0.1:9300]
[2012-01-16 09:06:48,411][TRACE][transport.netty ] [Argus] channel opened: [id: 0x1fea6a1c, /10.0.1.5:62766 => /10.0.1.5:9300]
[2012-01-16 09:06:48,411][TRACE][transport.netty ] [Argus] channel closed: [id: 0x2abbaa16, /127.0.0.1:62765 :> /127.0.0.1:9300]
[2012-01-16 09:06:48,412][TRACE][transport.netty ] [Argus] channel opened: [id: 0x7f205d8d, /10.0.1.5:62767 => /10.0.1.5:9300]
[2012-01-16 09:06:48,412][TRACE][transport.netty ] [Argus] channel opened: [id: 0x25de152f, /10.0.1.5:62768 => /10.0.1.5:9300]
[2012-01-16 09:06:48,413][TRACE][transport.netty ] [Argus] channel opened: [id: 0x1740d415, /10.0.1.5:62769 => /10.0.1.5:9300]
[2012-01-16 09:06:48,415][TRACE][transport.netty ] [Argus] channel opened: [id: 0x5106def2, /10.0.1.5:62770 => /10.0.1.5:9300]
[2012-01-16 09:06:48,415][TRACE][transport.netty ] [Argus] channel opened: [id: 0x1a170b6d, /10.0.1.5:62771 => /10.0.1.5:9300]
[2012-01-16 09:06:48,415][TRACE][transport.netty ] [Argus] channel opened: [id: 0x2e595420, /10.0.1.5:62772 => /10.0.1.5:9300]
[2012-01-16 09:06:48,446][DEBUG][transport.netty ] [Argus] Connected to node [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:48,454][DEBUG][cluster.service ] [Argus] processing [zen-disco-receive(join from node[[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:06:48,455][TRACE][cluster.service ] [Argus] cluster state updated:
version [10], source [zen-disco-receive(join from node[[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]])]
nodes:
[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:48,455][INFO ][cluster.service ] [Argus] added {[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]])
[2012-01-16 09:06:48,455][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:48,456][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:48,461][DEBUG][cluster.service ] [Argus] processing [zen-disco-receive(join from node[[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:06:50,309][DEBUG][cluster.service ] [Argus] processing [routing-table-updater]: execute
[2012-01-16 09:06:50,309][DEBUG][cluster.service ] [Argus] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:53,412][DEBUG][cluster.service ] [Argus] processing [zen-disco-node_left([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]])]: execute
[2012-01-16 09:06:53,430][DEBUG][transport.netty ] [Argus] Disconnected from [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:06:53,436][TRACE][cluster.service ] [Argus] cluster state updated:
version [11], source [zen-disco-node_left([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]])]
nodes:
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:53,428][TRACE][transport.netty ] [Argus] channel closed: [id: 0x1740d415, /10.0.1.5:62769 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,427][TRACE][transport.netty ] [Argus] channel closed: [id: 0x2e595420, /10.0.1.5:62772 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,438][TRACE][transport.netty ] [Argus] channel closed: [id: 0x25de152f, /10.0.1.5:62768 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,427][TRACE][transport.netty ] [Argus] channel closed: [id: 0x7f205d8d, /10.0.1.5:62767 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,441][TRACE][transport.netty ] [Argus] channel closed: [id: 0x1a170b6d, /10.0.1.5:62771 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,444][TRACE][discovery.zen.fd ] [Argus] [node ] [[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]] transport disconnected (with verified connect)
[2012-01-16 09:06:53,445][TRACE][transport.netty ] [Argus] (Ignoring) Exception caught on netty layer [[id: 0x7831d5e2]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:06:53,427][TRACE][transport.netty ] [Argus] channel closed: [id: 0x5106def2, /10.0.1.5:62770 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,470][TRACE][transport.netty ] [Argus] channel closed: [id: 0x1fea6a1c, /10.0.1.5:62766 :> /10.0.1.5:9300]
[2012-01-16 09:06:53,436][INFO ][cluster.service ] [Argus] removed {[Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]],}, reason: zen-disco-node_left([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]])
[2012-01-16 09:06:53,472][DEBUG][cluster.service ] [Argus] processing [zen-disco-node_left([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]])]: done applying updated cluster_state
[2012-01-16 09:06:53,472][DEBUG][cluster.service ] [Argus] processing [zen-disco-node_failed([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]), reason transport disconnected (with verified connect)]: execute
[2012-01-16 09:06:53,472][TRACE][cluster.service ] [Argus] cluster state updated:
version [12], source [zen-disco-node_failed([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]), reason transport disconnected (with verified connect)]
nodes:
[Argus][z8WR1TsvT3mnOeYKaINXyQ][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:06:53,472][DEBUG][cluster.service ] [Argus] processing [zen-disco-node_failed([Bradley, Isaiah][tPDDcLRYTXez2j0_7c69Wg][inet[/10.0.1.5:9301]]), reason transport disconnected (with verified connect)]: done applying updated cluster_state
[2012-01-16 09:06:53,472][DEBUG][cluster.service ] [Argus] processing [routing-table-updater]: execute
[2012-01-16 09:06:53,473][DEBUG][cluster.service ] [Argus] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:06:53,475][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:53,475][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:53,476][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:06:53,476][DEBUG][river.cluster ] [Argus] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:06:54,387][INFO ][node ] [Argus] {0.18.7}[11783]: stopping ...
[2012-01-16 09:06:54,398][TRACE][transport.netty ] [Argus] channel closed: [id: 0x181f327e, /10.0.1.5:62756 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,398][TRACE][transport.netty ] [Argus] channel closed: [id: 0x5323961b, /10.0.1.5:62762 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,399][TRACE][transport.netty ] [Argus] channel closed: [id: 0x7a6dd8e1, /10.0.1.5:62758 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,399][TRACE][transport.netty ] [Argus] channel closed: [id: 0x5694fe42, /10.0.1.5:62757 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,399][TRACE][transport.netty ] [Argus] channel closed: [id: 0x06cb6a34, /10.0.1.5:62759 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,399][TRACE][transport.netty ] [Argus] channel closed: [id: 0x219a6087, /10.0.1.5:62760 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,399][TRACE][transport.netty ] [Argus] channel closed: [id: 0x24b6a561, /10.0.1.5:62761 :> /10.0.1.5:9300]
[2012-01-16 09:06:54,403][TRACE][jmx ] [Argus] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:06:54,404][TRACE][jmx ] [Argus] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:06:54,416][INFO ][node ] [Argus] {0.18.7}[11783]: stopped
[2012-01-16 09:06:54,416][INFO ][node ] [Argus] {0.18.7}[11783]: closing ...
[2012-01-16 09:06:54,429][TRACE][node ] [Argus] Close times for each service:
StopWatch 'node_close': running time = 5ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 020% indices
00000 000% routing
00000 000% cluster
00001 020% discovery
00001 020% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00000 000% node_cache
00001 020% script
00001 020% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:06:54,431][INFO ][node ] [Argus] {0.18.7}[11783]: closed
[2012-01-16 09:08:42,265][INFO ][node ] [Mountjoy] {0.18.7}[11874]: initializing ...
[2012-01-16 09:08:42,274][INFO ][plugins ] [Mountjoy] loaded [], sites []
[2012-01-16 09:08:43,456][DEBUG][threadpool ] [Mountjoy] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:08:43,459][DEBUG][threadpool ] [Mountjoy] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:08:43,459][DEBUG][threadpool ] [Mountjoy] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:08:43,460][DEBUG][threadpool ] [Mountjoy] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:08:43,460][DEBUG][threadpool ] [Mountjoy] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:08:43,463][DEBUG][threadpool ] [Mountjoy] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:08:43,464][DEBUG][threadpool ] [Mountjoy] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:08:43,478][DEBUG][transport.netty ] [Mountjoy] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:08:43,500][DEBUG][discovery.zen.ping.unicast] [Mountjoy] using initial hosts [localhost:9301, localhost:9300], with concurrent_connects [10]
[2012-01-16 09:08:43,504][DEBUG][discovery.zen ] [Mountjoy] using ping.timeout [3s]
[2012-01-16 09:08:43,511][DEBUG][discovery.zen.elect ] [Mountjoy] using minimum_master_nodes [-1]
[2012-01-16 09:08:43,512][DEBUG][discovery.zen.fd ] [Mountjoy] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:08:43,516][DEBUG][discovery.zen.fd ] [Mountjoy] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:08:43,542][DEBUG][monitor.jvm ] [Mountjoy] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:08:44,052][DEBUG][monitor.os ] [Mountjoy] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@182153fe] with refresh_interval [1s]
[2012-01-16 09:08:44,057][DEBUG][monitor.process ] [Mountjoy] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@239cd5f5] with refresh_interval [1s]
[2012-01-16 09:08:44,061][DEBUG][monitor.jvm ] [Mountjoy] Using refresh_interval [1s]
[2012-01-16 09:08:44,062][DEBUG][monitor.network ] [Mountjoy] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@2377ff35] with refresh_interval [5s]
[2012-01-16 09:08:44,086][DEBUG][monitor.network ] [Mountjoy] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:08:44,090][TRACE][monitor.network ] [Mountjoy] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:18055 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:18055 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3670834 (3.5M) TX bytes:3670834 (3.5M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833842 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502806 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818244990 (3.6G) TX bytes:117505881 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:08:44,092][TRACE][env ] [Mountjoy] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:08:44,128][DEBUG][env ] [Mountjoy] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:08:44,130][TRACE][env ] [Mountjoy] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.6gb, usable_space [221.4gb
[2012-01-16 09:08:44,440][DEBUG][cache.memory ] [Mountjoy] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:08:44,453][DEBUG][cluster.routing.allocation.decider] [Mountjoy] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:08:44,454][DEBUG][cluster.routing.allocation.decider] [Mountjoy] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:08:44,454][DEBUG][cluster.routing.allocation.decider] [Mountjoy] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:08:44,457][DEBUG][gateway.local ] [Mountjoy] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:08:44,485][DEBUG][indices.recovery ] [Mountjoy] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:08:44,664][TRACE][jmx ] [Mountjoy] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:08:44,665][TRACE][jmx ] [Mountjoy] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,665][TRACE][jmx ] [Mountjoy] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,668][TRACE][jmx ] [Mountjoy] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:08:44,668][TRACE][jmx ] [Mountjoy] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:08:44,668][TRACE][jmx ] [Mountjoy] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,669][TRACE][jmx ] [Mountjoy] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:08:44,669][TRACE][jmx ] [Mountjoy] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,669][TRACE][jmx ] [Mountjoy] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:08:44,669][TRACE][jmx ] [Mountjoy] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,669][TRACE][jmx ] [Mountjoy] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:08:44,670][TRACE][jmx ] [Mountjoy] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,670][TRACE][jmx ] [Mountjoy] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,670][TRACE][jmx ] [Mountjoy] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:08:44,671][DEBUG][http.netty ] [Mountjoy] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:08:44,677][DEBUG][indices.memory ] [Mountjoy] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:08:44,688][DEBUG][indices.cache.filter ] [Mountjoy] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:08:44,783][INFO ][node ] [Mountjoy] {0.18.7}[11874]: initialized
[2012-01-16 09:08:44,784][INFO ][node ] [Mountjoy] {0.18.7}[11874]: starting ...
[2012-01-16 09:08:44,808][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:08:44,885][DEBUG][transport.netty ] [Mountjoy] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:08:44,888][INFO ][transport ] [Mountjoy] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:08:44,996][TRACE][discovery ] [Mountjoy] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:08:45,025][DEBUG][transport.netty ] [Mountjoy] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:08:45,026][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:08:45,067][DEBUG][transport.netty ] [Mountjoy] Connected to node [[#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:08:45,068][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x32b8f675, /127.0.0.1:62812 => /127.0.0.1:9300]
[2012-01-16 09:08:45,068][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:08:45,084][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:08:45,088][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:08:46,500][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:08:46,501][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:08:46,503][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:08:46,504][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:08:48,004][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:08:48,005][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:08:48,007][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:08:48,008][TRACE][discovery.zen.ping.unicast] [Mountjoy] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:08:48,009][DEBUG][discovery.zen ] [Mountjoy] ping responses:
--> target [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:08:48,012][DEBUG][transport.netty ] [Mountjoy] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:08:48,018][DEBUG][transport.netty ] [Mountjoy] Disconnected from [[#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:08:48,021][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x32b8f675, /127.0.0.1:62812 :> /127.0.0.1:9300]
[2012-01-16 09:08:48,031][DEBUG][transport.netty ] [Mountjoy] Connected to node [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:08:48,041][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x26a0c73f, /10.0.1.5:62820 => /10.0.1.5:9300]
[2012-01-16 09:08:48,043][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x6f603bdc, /10.0.1.5:62821 => /10.0.1.5:9300]
[2012-01-16 09:08:48,047][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x7a1b0c08, /10.0.1.5:62822 => /10.0.1.5:9300]
[2012-01-16 09:08:48,048][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x56a9509d, /10.0.1.5:62823 => /10.0.1.5:9300]
[2012-01-16 09:08:48,052][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x05eb9fde, /10.0.1.5:62824 => /10.0.1.5:9300]
[2012-01-16 09:08:48,053][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x181f327e, /10.0.1.5:62825 => /10.0.1.5:9300]
[2012-01-16 09:08:48,053][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x282bfa91, /10.0.1.5:62826 => /10.0.1.5:9300]
[2012-01-16 09:08:48,062][DEBUG][discovery.zen.fd ] [Mountjoy] [master] starting fault detection against master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], reason [initial_join]
[2012-01-16 09:08:48,064][DEBUG][discovery.zen ] [Mountjoy] got a new state from master node, though we are already trying to rejoin the cluster
[2012-01-16 09:08:48,067][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:08:48,068][TRACE][cluster.service ] [Mountjoy] cluster state updated:
version [3], source [zen-disco-join (detected master)]
nodes:
[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:08:48,070][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x4d480773, /10.0.1.5:62827 => /10.0.1.5:9300]
[2012-01-16 09:08:48,071][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x5e1645b9, /10.0.1.5:62828 => /10.0.1.5:9300]
[2012-01-16 09:08:48,076][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x10fa1b2d, /10.0.1.5:62829 => /10.0.1.5:9300]
[2012-01-16 09:08:48,076][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x140e3010, /10.0.1.5:62830 => /10.0.1.5:9300]
[2012-01-16 09:08:48,078][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x2f7574b9, /10.0.1.5:62831 => /10.0.1.5:9300]
[2012-01-16 09:08:48,078][DEBUG][transport.netty ] [Mountjoy] Connected to node [[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]]]
[2012-01-16 09:08:48,079][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:08:48,079][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-receive(from master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:08:48,079][TRACE][cluster.service ] [Mountjoy] cluster state updated:
version [4], source [zen-disco-receive(from master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]])]
nodes:
[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], local
[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]], master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:08:48,079][INFO ][cluster.service ] [Mountjoy] detected_master [Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]], added {[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(from master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]])
[2012-01-16 09:08:48,080][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-receive(from master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:08:48,080][TRACE][discovery ] [Mountjoy] initial state set from discovery
[2012-01-16 09:08:48,080][INFO ][discovery ] [Mountjoy] elasticsearch/n_s81i1wRu2xCOeTxl18Vg
[2012-01-16 09:08:48,081][TRACE][gateway.local ] [Mountjoy] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:08:48,078][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x24c759f5, /10.0.1.5:62832 => /10.0.1.5:9300]
[2012-01-16 09:08:48,081][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x1be2f6b0, /10.0.1.5:62833 => /10.0.1.5:9300]
[2012-01-16 09:08:48,101][DEBUG][gateway.local ] [Mountjoy] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0/_state/metadata-2]
[2012-01-16 09:08:48,102][TRACE][gateway.local ] [Mountjoy] [find_latest_state]: processing [metadata-2]
[2012-01-16 09:08:48,102][DEBUG][gateway.local ] [Mountjoy] [find_latest_state]: no started shards loaded
[2012-01-16 09:08:48,113][INFO ][http ] [Mountjoy] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:08:48,114][TRACE][jmx ] [Mountjoy] Registered org.elasticsearch.jmx.ResourceDMBean@4fa3551c under org.elasticsearch:service=transport
[2012-01-16 09:08:48,114][TRACE][jmx ] [Mountjoy] Registered org.elasticsearch.jmx.ResourceDMBean@6c28ca1c under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:08:48,114][INFO ][node ] [Mountjoy] {0.18.7}[11874]: started
[2012-01-16 09:09:04,760][INFO ][discovery.zen ] [Mountjoy] master_left [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], reason [shut_down]
[2012-01-16 09:09:04,763][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-master_failed ([Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]])]: execute
[2012-01-16 09:09:04,764][DEBUG][discovery.zen.fd ] [Mountjoy] [master] stopping fault detection against master [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:09:04,764][TRACE][cluster.service ] [Mountjoy] cluster state updated:
version [5], source [zen-disco-master_failed ([Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]])]
nodes:
[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:04,765][INFO ][cluster.service ] [Mountjoy] master {new [Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], previous [Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]}, removed {[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]],}, reason: zen-disco-master_failed ([Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]])
[2012-01-16 09:09:04,766][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x26a0c73f, /10.0.1.5:62820 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,767][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x6f603bdc, /10.0.1.5:62821 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,773][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x282bfa91, /10.0.1.5:62826 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,774][DEBUG][river.cluster ] [Mountjoy] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:09:04,774][DEBUG][river.cluster ] [Mountjoy] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:09:04,776][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x181f327e, /10.0.1.5:62825 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,776][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x05eb9fde, /10.0.1.5:62824 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,776][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x56a9509d, /10.0.1.5:62823 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,778][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x7a1b0c08, /10.0.1.5:62822 :> /10.0.1.5:9300]
[2012-01-16 09:09:04,786][DEBUG][transport.netty ] [Mountjoy] Disconnected from [[Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:09:04,787][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-master_failed ([Rom the Spaceknight][PgTrOQtYT1Oo9CgNiw0Dzg][inet[/10.0.1.5:9301]])]: done applying updated cluster_state
[2012-01-16 09:09:04,787][DEBUG][cluster.service ] [Mountjoy] processing [routing-table-updater]: execute
[2012-01-16 09:09:04,802][DEBUG][cluster.service ] [Mountjoy] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:09:16,049][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x12260d8d, /127.0.0.1:62836 => /127.0.0.1:9300]
[2012-01-16 09:09:19,050][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x12260d8d, /127.0.0.1:62836 :> /127.0.0.1:9300]
[2012-01-16 09:09:19,050][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x10ddcd98, /10.0.1.5:62838 => /10.0.1.5:9300]
[2012-01-16 09:09:19,052][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x568bf3ec, /10.0.1.5:62839 => /10.0.1.5:9300]
[2012-01-16 09:09:19,059][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x61ae717f, /10.0.1.5:62840 => /10.0.1.5:9300]
[2012-01-16 09:09:19,060][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x039d7af3, /10.0.1.5:62841 => /10.0.1.5:9300]
[2012-01-16 09:09:19,060][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x6588c838, /10.0.1.5:62842 => /10.0.1.5:9300]
[2012-01-16 09:09:19,061][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x7711089b, /10.0.1.5:62843 => /10.0.1.5:9300]
[2012-01-16 09:09:19,061][TRACE][transport.netty ] [Mountjoy] channel opened: [id: 0x6437a04c, /10.0.1.5:62844 => /10.0.1.5:9300]
[2012-01-16 09:09:19,093][DEBUG][transport.netty ] [Mountjoy] Connected to node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:09:19,098][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-receive(join from node[[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:09:19,099][TRACE][cluster.service ] [Mountjoy] cluster state updated:
version [6], source [zen-disco-receive(join from node[[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Mountjoy][n_s81i1wRu2xCOeTxl18Vg][inet[/10.0.1.5:9300]], local, master
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:19,099][INFO ][cluster.service ] [Mountjoy] added {[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])
[2012-01-16 09:09:19,103][DEBUG][cluster.service ] [Mountjoy] processing [zen-disco-receive(join from node[[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:09:19,103][DEBUG][river.cluster ] [Mountjoy] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:09:19,103][DEBUG][river.cluster ] [Mountjoy] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:09:24,768][DEBUG][cluster.service ] [Mountjoy] processing [routing-table-updater]: execute
[2012-01-16 09:09:24,769][DEBUG][cluster.service ] [Mountjoy] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:09:33,648][INFO ][node ] [Mountjoy] {0.18.7}[11874]: stopping ...
[2012-01-16 09:09:33,662][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x4d480773, /10.0.1.5:62827 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,663][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x5e1645b9, /10.0.1.5:62828 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,665][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x10fa1b2d, /10.0.1.5:62829 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,665][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x1be2f6b0, /10.0.1.5:62833 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,666][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x24c759f5, /10.0.1.5:62832 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,667][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x2f7574b9, /10.0.1.5:62831 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,666][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x140e3010, /10.0.1.5:62830 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,669][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x7711089b, /10.0.1.5:62843 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,670][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x568bf3ec, /10.0.1.5:62839 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,671][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x61ae717f, /10.0.1.5:62840 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,671][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x10ddcd98, /10.0.1.5:62838 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,671][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x6437a04c, /10.0.1.5:62844 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,671][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x6588c838, /10.0.1.5:62842 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,671][TRACE][transport.netty ] [Mountjoy] channel closed: [id: 0x039d7af3, /10.0.1.5:62841 :> /10.0.1.5:9300]
[2012-01-16 09:09:33,678][TRACE][jmx ] [Mountjoy] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:09:33,678][TRACE][jmx ] [Mountjoy] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:09:33,678][INFO ][node ] [Mountjoy] {0.18.7}[11874]: stopped
[2012-01-16 09:09:33,679][INFO ][node ] [Mountjoy] {0.18.7}[11874]: closing ...
[2012-01-16 09:09:33,710][TRACE][node ] [Mountjoy] Close times for each service:
StopWatch 'node_close': running time = 14ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00010 071% indices
00000 000% routing
00000 000% cluster
00001 007% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 007% node_cache
00000 000% script
00001 007% thread_pool
00001 007% thread_pool_force_shutdown
[2012-01-16 09:09:33,715][INFO ][node ] [Mountjoy] {0.18.7}[11874]: closed
[2012-01-16 09:09:37,491][INFO ][node ] [She-Thing] {0.18.7}[11915]: initializing ...
[2012-01-16 09:09:37,501][INFO ][plugins ] [She-Thing] loaded [], sites []
[2012-01-16 09:09:38,722][DEBUG][threadpool ] [She-Thing] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:09:38,725][DEBUG][threadpool ] [She-Thing] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:09:38,726][DEBUG][threadpool ] [She-Thing] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:09:38,726][DEBUG][threadpool ] [She-Thing] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:09:38,726][DEBUG][threadpool ] [She-Thing] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:09:38,729][DEBUG][threadpool ] [She-Thing] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:09:38,730][DEBUG][threadpool ] [She-Thing] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:09:38,743][DEBUG][transport.netty ] [She-Thing] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:09:38,762][DEBUG][discovery.zen.ping.unicast] [She-Thing] using initial hosts [localhost:9301, localhost:9300], with concurrent_connects [10]
[2012-01-16 09:09:38,766][DEBUG][discovery.zen ] [She-Thing] using ping.timeout [3s]
[2012-01-16 09:09:38,771][DEBUG][discovery.zen.elect ] [She-Thing] using minimum_master_nodes [-1]
[2012-01-16 09:09:38,772][DEBUG][discovery.zen.fd ] [She-Thing] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:09:38,776][DEBUG][discovery.zen.fd ] [She-Thing] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:09:38,800][DEBUG][monitor.jvm ] [She-Thing] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:09:39,311][DEBUG][monitor.os ] [She-Thing] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@2dc8b884] with refresh_interval [1s]
[2012-01-16 09:09:39,316][DEBUG][monitor.process ] [She-Thing] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@cc7f9e] with refresh_interval [1s]
[2012-01-16 09:09:39,321][DEBUG][monitor.jvm ] [She-Thing] Using refresh_interval [1s]
[2012-01-16 09:09:39,335][DEBUG][monitor.network ] [She-Thing] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@3844006e] with refresh_interval [5s]
[2012-01-16 09:09:39,346][DEBUG][monitor.network ] [She-Thing] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:09:39,351][TRACE][monitor.network ] [She-Thing] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:18763 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:18763 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3714220 (3.5M) TX bytes:3714220 (3.5M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833850 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502815 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818246433 (3.6G) TX bytes:117506461 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:09:39,353][TRACE][env ] [She-Thing] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:09:39,378][DEBUG][env ] [She-Thing] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:09:39,379][TRACE][env ] [She-Thing] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.6gb], usable_space [221.4gb]
[2012-01-16 09:09:39,703][DEBUG][cache.memory ] [She-Thing] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:09:39,717][DEBUG][cluster.routing.allocation.decider] [She-Thing] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:09:39,718][DEBUG][cluster.routing.allocation.decider] [She-Thing] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:09:39,719][DEBUG][cluster.routing.allocation.decider] [She-Thing] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:09:39,722][DEBUG][gateway.local ] [She-Thing] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:09:39,743][DEBUG][indices.recovery ] [She-Thing] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:09:39,929][TRACE][jmx ] [She-Thing] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:09:39,930][TRACE][jmx ] [She-Thing] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,930][TRACE][jmx ] [She-Thing] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,932][TRACE][jmx ] [She-Thing] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:09:39,942][TRACE][jmx ] [She-Thing] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:09:39,943][TRACE][jmx ] [She-Thing] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,943][TRACE][jmx ] [She-Thing] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:09:39,943][TRACE][jmx ] [She-Thing] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,944][TRACE][jmx ] [She-Thing] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:09:39,944][TRACE][jmx ] [She-Thing] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,944][TRACE][jmx ] [She-Thing] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:09:39,944][TRACE][jmx ] [She-Thing] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,944][TRACE][jmx ] [She-Thing] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,944][TRACE][jmx ] [She-Thing] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:09:39,945][DEBUG][http.netty ] [She-Thing] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:09:39,953][DEBUG][indices.memory ] [She-Thing] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:09:39,963][DEBUG][indices.cache.filter ] [She-Thing] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:09:40,054][INFO ][node ] [She-Thing] {0.18.7}[11915]: initialized
[2012-01-16 09:09:40,055][INFO ][node ] [She-Thing] {0.18.7}[11915]: starting ...
[2012-01-16 09:09:40,079][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:09:40,155][DEBUG][transport.netty ] [She-Thing] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:09:40,158][INFO ][transport ] [She-Thing] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:09:40,234][TRACE][discovery ] [She-Thing] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:09:40,288][DEBUG][transport.netty ] [She-Thing] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:09:40,290][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:09:40,318][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x7f408325, /127.0.0.1:62862 => /127.0.0.1:9300]
[2012-01-16 09:09:40,319][DEBUG][transport.netty ] [She-Thing] Connected to node [[#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:09:40,341][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:09:40,356][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:09:40,363][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:09:41,745][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:09:41,747][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:09:41,750][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:09:41,752][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:09:43,249][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:09:43,250][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:09:43,253][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:09:43,254][TRACE][discovery.zen.ping.unicast] [She-Thing] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:09:43,255][DEBUG][discovery.zen ] [She-Thing] ping responses:
--> target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:09:43,257][DEBUG][transport.netty ] [She-Thing] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:09:43,264][DEBUG][transport.netty ] [She-Thing] Disconnected from [[#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:09:43,267][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x7f408325, /127.0.0.1:62862 :> /127.0.0.1:9300]
[2012-01-16 09:09:43,278][DEBUG][transport.netty ] [She-Thing] Connected to node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:09:43,288][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x26c42804, /10.0.1.5:62870 => /10.0.1.5:9300]
[2012-01-16 09:09:43,291][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x52aa77d9, /10.0.1.5:62871 => /10.0.1.5:9300]
[2012-01-16 09:09:43,296][DEBUG][discovery.zen.fd ] [She-Thing] [master] starting fault detection against master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], reason [initial_join]
[2012-01-16 09:09:43,300][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x5694fe42, /10.0.1.5:62872 => /10.0.1.5:9300]
[2012-01-16 09:09:43,304][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x7c959fa1, /10.0.1.5:62873 => /10.0.1.5:9300]
[2012-01-16 09:09:43,304][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x432342ed, /10.0.1.5:62874 => /10.0.1.5:9300]
[2012-01-16 09:09:43,306][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x3ffef80a, /10.0.1.5:62875 => /10.0.1.5:9300]
[2012-01-16 09:09:43,306][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x7c4e7958, /10.0.1.5:62876 => /10.0.1.5:9300]
[2012-01-16 09:09:43,312][DEBUG][cluster.service ] [She-Thing] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:09:43,316][TRACE][cluster.service ] [She-Thing] cluster state updated:
version [7], source [zen-disco-join (detected master)]
nodes:
[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:43,320][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x6e9b86ea, /10.0.1.5:62877 => /10.0.1.5:9300]
[2012-01-16 09:09:43,320][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x34189cab, /10.0.1.5:62878 => /10.0.1.5:9300]
[2012-01-16 09:09:43,322][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x1be2f6b0, /10.0.1.5:62879 => /10.0.1.5:9300]
[2012-01-16 09:09:43,322][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x675926d1, /10.0.1.5:62880 => /10.0.1.5:9300]
[2012-01-16 09:09:43,322][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x0e039859, /10.0.1.5:62881 => /10.0.1.5:9300]
[2012-01-16 09:09:43,322][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x0e07023f, /10.0.1.5:62882 => /10.0.1.5:9300]
[2012-01-16 09:09:43,323][TRACE][transport.netty ] [She-Thing] channel opened: [id: 0x6e247d4a, /10.0.1.5:62883 => /10.0.1.5:9300]
[2012-01-16 09:09:43,328][DEBUG][transport.netty ] [She-Thing] Connected to node [[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]]]
[2012-01-16 09:09:43,329][DEBUG][cluster.service ] [She-Thing] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:09:43,329][DEBUG][cluster.service ] [She-Thing] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:09:43,330][TRACE][cluster.service ] [She-Thing] cluster state updated:
version [8], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[She-Thing][xcODJyySQbGVIZfs5oiPrw][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:09:43,330][INFO ][cluster.service ] [She-Thing] detected_master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], added {[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])
[2012-01-16 09:09:43,330][DEBUG][cluster.service ] [She-Thing] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:09:43,331][TRACE][discovery ] [She-Thing] initial state set from discovery
[2012-01-16 09:09:43,331][INFO ][discovery ] [She-Thing] elasticsearch/xcODJyySQbGVIZfs5oiPrw
[2012-01-16 09:09:43,332][TRACE][gateway.local ] [She-Thing] [find_latest_state]: processing [metadata-3]
[2012-01-16 09:09:43,336][DEBUG][gateway.local ] [She-Thing] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0/_state/metadata-3]
[2012-01-16 09:09:43,337][TRACE][gateway.local ] [She-Thing] [find_latest_state]: processing [metadata-3]
[2012-01-16 09:09:43,337][DEBUG][gateway.local ] [She-Thing] [find_latest_state]: no started shards loaded
[2012-01-16 09:09:43,352][INFO ][http ] [She-Thing] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:09:43,353][TRACE][jmx ] [She-Thing] Registered org.elasticsearch.jmx.ResourceDMBean@1dcbcf91 under org.elasticsearch:service=transport
[2012-01-16 09:09:43,353][TRACE][jmx ] [She-Thing] Registered org.elasticsearch.jmx.ResourceDMBean@2fa847df under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:09:43,353][INFO ][node ] [She-Thing] {0.18.7}[11915]: started
[2012-01-16 09:09:56,904][INFO ][node ] [She-Thing] {0.18.7}[11915]: stopping ...
[2012-01-16 09:09:56,936][DEBUG][discovery.zen.fd ] [She-Thing] [master] stopping fault detection against master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], reason [zen disco stop]
[2012-01-16 09:09:56,950][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x26c42804, /10.0.1.5:62870 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,951][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x432342ed, /10.0.1.5:62874 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,951][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x5694fe42, /10.0.1.5:62872 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,951][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x7c4e7958, /10.0.1.5:62876 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,955][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x1be2f6b0, /10.0.1.5:62879 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,956][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x3ffef80a, /10.0.1.5:62875 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,956][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x52aa77d9, /10.0.1.5:62871 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,957][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x7c959fa1, /10.0.1.5:62873 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,957][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x0e039859, /10.0.1.5:62881 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,958][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x6e9b86ea, /10.0.1.5:62877 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,960][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x6e247d4a, /10.0.1.5:62883 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,963][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x0e07023f, /10.0.1.5:62882 :> /10.0.1.5:9300]
[2012-01-16 09:09:56,989][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x675926d1, /10.0.1.5:62880 :> /10.0.1.5:9300]
[2012-01-16 09:09:57,000][TRACE][transport.netty ] [She-Thing] channel closed: [id: 0x34189cab, /10.0.1.5:62878 :> /10.0.1.5:9300]
[2012-01-16 09:09:57,032][TRACE][jmx ] [She-Thing] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:09:57,033][TRACE][jmx ] [She-Thing] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:09:57,033][INFO ][node ] [She-Thing] {0.18.7}[11915]: stopped
[2012-01-16 09:09:57,033][INFO ][node ] [She-Thing] {0.18.7}[11915]: closing ...
[2012-01-16 09:09:57,048][TRACE][node ] [She-Thing] Close times for each service:
StopWatch 'node_close': running time = 6ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 017% indices
00000 000% routing
00000 000% cluster
00001 017% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 017% node_cache
00000 000% script
00003 050% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:09:57,050][INFO ][node ] [She-Thing] {0.18.7}[11915]: closed
[2012-01-16 09:09:59,244][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: initializing ...
[2012-01-16 09:09:59,254][INFO ][plugins ] [Dawson, Tex] loaded [], sites []
[2012-01-16 09:10:00,482][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:10:00,485][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:10:00,486][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:10:00,487][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:10:00,488][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:10:00,492][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:10:00,492][DEBUG][threadpool ] [Dawson, Tex] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:10:00,506][DEBUG][transport.netty ] [Dawson, Tex] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:10:00,531][DEBUG][discovery.zen.ping.unicast] [Dawson, Tex] using initial hosts [localhost:9301, localhost:9300], with concurrent_connects [10]
[2012-01-16 09:10:00,536][DEBUG][discovery.zen ] [Dawson, Tex] using ping.timeout [3s]
[2012-01-16 09:10:00,542][DEBUG][discovery.zen.elect ] [Dawson, Tex] using minimum_master_nodes [-1]
[2012-01-16 09:10:00,543][DEBUG][discovery.zen.fd ] [Dawson, Tex] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:10:00,546][DEBUG][discovery.zen.fd ] [Dawson, Tex] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:10:00,569][DEBUG][monitor.jvm ] [Dawson, Tex] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:10:01,086][DEBUG][monitor.os ] [Dawson, Tex] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@182153fe] with refresh_interval [1s]
[2012-01-16 09:10:01,092][DEBUG][monitor.process ] [Dawson, Tex] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@239cd5f5] with refresh_interval [1s]
[2012-01-16 09:10:01,096][DEBUG][monitor.jvm ] [Dawson, Tex] Using refresh_interval [1s]
[2012-01-16 09:10:01,097][DEBUG][monitor.network ] [Dawson, Tex] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@2377ff35] with refresh_interval [5s]
[2012-01-16 09:10:01,107][DEBUG][monitor.network ] [Dawson, Tex] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:10:01,135][TRACE][monitor.network ] [Dawson, Tex] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:19103 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:19103 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3734972 (3.6M) TX bytes:3734972 (3.6M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2833856 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1502822 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3818248918 (3.6G) TX bytes:117506844 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:10:01,137][TRACE][env ] [Dawson, Tex] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:10:01,174][DEBUG][env ] [Dawson, Tex] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:10:01,175][TRACE][env ] [Dawson, Tex] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.6gb], usable_space [221.4gb]
[2012-01-16 09:10:01,519][DEBUG][cache.memory ] [Dawson, Tex] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:10:01,533][DEBUG][cluster.routing.allocation.decider] [Dawson, Tex] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:10:01,534][DEBUG][cluster.routing.allocation.decider] [Dawson, Tex] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:10:01,534][DEBUG][cluster.routing.allocation.decider] [Dawson, Tex] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:10:01,538][DEBUG][gateway.local ] [Dawson, Tex] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:10:01,559][DEBUG][indices.recovery ] [Dawson, Tex] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:10:01,777][TRACE][jmx ] [Dawson, Tex] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:10:01,777][TRACE][jmx ] [Dawson, Tex] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,778][TRACE][jmx ] [Dawson, Tex] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,781][TRACE][jmx ] [Dawson, Tex] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:10:01,783][TRACE][jmx ] [Dawson, Tex] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:10:01,783][TRACE][jmx ] [Dawson, Tex] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,783][TRACE][jmx ] [Dawson, Tex] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:10:01,784][TRACE][jmx ] [Dawson, Tex] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,784][TRACE][jmx ] [Dawson, Tex] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:10:01,784][TRACE][jmx ] [Dawson, Tex] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,784][TRACE][jmx ] [Dawson, Tex] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:10:01,785][TRACE][jmx ] [Dawson, Tex] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,785][TRACE][jmx ] [Dawson, Tex] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,785][TRACE][jmx ] [Dawson, Tex] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:10:01,786][DEBUG][http.netty ] [Dawson, Tex] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:10:01,792][DEBUG][indices.memory ] [Dawson, Tex] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:10:01,802][DEBUG][indices.cache.filter ] [Dawson, Tex] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:10:01,897][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: initialized
[2012-01-16 09:10:01,897][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: starting ...
[2012-01-16 09:10:01,947][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:10:02,037][DEBUG][transport.netty ] [Dawson, Tex] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:10:02,040][INFO ][transport ] [Dawson, Tex] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:10:02,145][TRACE][discovery ] [Dawson, Tex] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:10:02,173][DEBUG][transport.netty ] [Dawson, Tex] Connected to node [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:10:02,174][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:10:02,212][DEBUG][transport.netty ] [Dawson, Tex] Connected to node [[#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:10:02,212][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x14cee41f, /127.0.0.1:62887 => /127.0.0.1:9300]
[2012-01-16 09:10:02,228][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:10:02,235][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:10:02,242][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:10:03,651][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:10:03,651][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:10:03,655][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:10:03,655][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:10:05,153][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] connecting to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
[2012-01-16 09:10:05,154][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] connecting to [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]
[2012-01-16 09:10:05,157][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] received response from [#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]: [ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}]
[2012-01-16 09:10:05,157][TRACE][discovery.zen.ping.unicast] [Dawson, Tex] [1] received response from [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]: [ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]], master [null], cluster_name[elasticsearch]}, ping_response{target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], cluster_name[elasticsearch]}]
[2012-01-16 09:10:05,160][DEBUG][discovery.zen ] [Dawson, Tex] ping responses:
--> target [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:10:05,162][DEBUG][transport.netty ] [Dawson, Tex] Disconnected from [[#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]]
[2012-01-16 09:10:05,174][DEBUG][transport.netty ] [Dawson, Tex] Disconnected from [[#zen_unicast_2#][inet[localhost/127.0.0.1:9300]]]
[2012-01-16 09:10:05,175][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x14cee41f, /127.0.0.1:62887 :> /127.0.0.1:9300]
[2012-01-16 09:10:05,192][DEBUG][transport.netty ] [Dawson, Tex] Connected to node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:10:05,203][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x675ee9e3, /10.0.1.5:62895 => /10.0.1.5:9300]
[2012-01-16 09:10:05,208][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x1ee99d0f, /10.0.1.5:62896 => /10.0.1.5:9300]
[2012-01-16 09:10:05,217][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x6f603bdc, /10.0.1.5:62897 => /10.0.1.5:9300]
[2012-01-16 09:10:05,219][DEBUG][discovery.zen.fd ] [Dawson, Tex] [master] starting fault detection against master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], reason [initial_join]
[2012-01-16 09:10:05,223][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x05eb9fde, /10.0.1.5:62898 => /10.0.1.5:9300]
[2012-01-16 09:10:05,223][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x5ad3c69c, /10.0.1.5:62899 => /10.0.1.5:9300]
[2012-01-16 09:10:05,224][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x4eb7cd92, /10.0.1.5:62900 => /10.0.1.5:9300]
[2012-01-16 09:10:05,225][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x181f327e, /10.0.1.5:62901 => /10.0.1.5:9300]
[2012-01-16 09:10:05,227][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-join (detected master)]: execute
[2012-01-16 09:10:05,234][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [9], source [zen-disco-join (detected master)]
nodes:
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:10:05,240][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x6bb5eba4, /10.0.1.5:62902 => /10.0.1.5:9300]
[2012-01-16 09:10:05,240][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x66e90097, /10.0.1.5:62903 => /10.0.1.5:9300]
[2012-01-16 09:10:05,240][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x4b25ee49, /10.0.1.5:62904 => /10.0.1.5:9300]
[2012-01-16 09:10:05,241][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x273f212a, /10.0.1.5:62905 => /10.0.1.5:9300]
[2012-01-16 09:10:05,241][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x219a6087, /10.0.1.5:62906 => /10.0.1.5:9300]
[2012-01-16 09:10:05,245][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x6e247d4a, /10.0.1.5:62907 => /10.0.1.5:9300]
[2012-01-16 09:10:05,245][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x1d9dbdc4, /10.0.1.5:62908 => /10.0.1.5:9300]
[2012-01-16 09:10:05,250][DEBUG][transport.netty ] [Dawson, Tex] Connected to node [[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]]]
[2012-01-16 09:10:05,251][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-join (detected master)]: done applying updated cluster_state
[2012-01-16 09:10:05,251][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:10:05,251][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [10], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:10:05,251][INFO ][cluster.service ] [Dawson, Tex] detected_master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], added {[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])
[2012-01-16 09:10:05,252][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:10:05,253][TRACE][discovery ] [Dawson, Tex] initial state set from discovery
[2012-01-16 09:10:05,253][INFO ][discovery ] [Dawson, Tex] elasticsearch/wZH_t0kwSx-gG5blmuSBJQ
[2012-01-16 09:10:05,254][TRACE][gateway.local ] [Dawson, Tex] [find_latest_state]: processing [metadata-3]
[2012-01-16 09:10:05,257][DEBUG][gateway.local ] [Dawson, Tex] [find_latest_state]: loading metadata from [/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0/_state/metadata-3]
[2012-01-16 09:10:05,258][TRACE][gateway.local ] [Dawson, Tex] [find_latest_state]: processing [metadata-3]
[2012-01-16 09:10:05,263][DEBUG][gateway.local ] [Dawson, Tex] [find_latest_state]: no started shards loaded
[2012-01-16 09:10:05,271][INFO ][http ] [Dawson, Tex] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/10.0.1.5:9201]}
[2012-01-16 09:10:05,271][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@228ab65 under org.elasticsearch:service=transport
[2012-01-16 09:10:05,271][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@3c0c74fe under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:10:05,271][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: started
[2012-01-16 09:14:55,619][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:14:55,653][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [11], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:14:55,671][DEBUG][indices.cluster ] [Dawson, Tex] [twitter] creating index
[2012-01-16 09:14:55,673][DEBUG][indices ] [Dawson, Tex] creating Index [twitter], shards [5]/[1]
[2012-01-16 09:14:59,210][DEBUG][index.mapper ] [Dawson, Tex] [twitter] using dynamic[true], default mapping: location[null] and source[{
"_default_" : {
}
}]
[2012-01-16 09:14:59,211][DEBUG][index.cache.field.data.resident] [Dawson, Tex] [twitter] using [resident] field cache with max_size [-1], expire [null]
[2012-01-16 09:14:59,232][DEBUG][index.cache ] [Dawson, Tex] [twitter] Using stats.refresh_interval [1s]
[2012-01-16 09:14:59,490][TRACE][jmx ] [Dawson, Tex] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:14:59,493][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@22c393a1 under org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:14:59,494][DEBUG][indices.cluster ] [Dawson, Tex] [twitter][1] creating shard
[2012-01-16 09:14:59,494][DEBUG][index.service ] [Dawson, Tex] [twitter] creating shard_id [1]
[2012-01-16 09:15:00,221][DEBUG][index.deletionpolicy ] [Dawson, Tex] [twitter][1] Using [keep_only_last] deletion policy
[2012-01-16 09:15:00,223][DEBUG][index.merge.policy ] [Dawson, Tex] [twitter][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:00,224][DEBUG][index.merge.scheduler ] [Dawson, Tex] [twitter][1] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:00,227][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][1] state: [CREATED]
[2012-01-16 09:15:00,244][TRACE][jmx ] [Dawson, Tex] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,245][TRACE][jmx ] [Dawson, Tex] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,246][TRACE][jmx ] [Dawson, Tex] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:00,246][TRACE][jmx ] [Dawson, Tex] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:00,246][TRACE][jmx ] [Dawson, Tex] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,246][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@7eedec92 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:15:00,248][TRACE][jmx ] [Dawson, Tex] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:00,248][TRACE][jmx ] [Dawson, Tex] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,248][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@5852f73e under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:15:00,249][DEBUG][index.translog ] [Dawson, Tex] [twitter][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:00,255][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][1] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:15:00,264][DEBUG][indices.cluster ] [Dawson, Tex] [twitter][3] creating shard
[2012-01-16 09:15:00,265][DEBUG][index.service ] [Dawson, Tex] [twitter] creating shard_id [3]
[2012-01-16 09:15:00,265][DEBUG][index.gateway ] [Dawson, Tex] [twitter][1] starting recovery from local ...
[2012-01-16 09:15:00,305][DEBUG][index.engine.robin ] [Dawson, Tex] [twitter][1] Starting engine
[2012-01-16 09:15:00,352][DEBUG][index.deletionpolicy ] [Dawson, Tex] [twitter][3] Using [keep_only_last] deletion policy
[2012-01-16 09:15:00,353][DEBUG][index.merge.policy ] [Dawson, Tex] [twitter][3] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:00,361][DEBUG][index.merge.scheduler ] [Dawson, Tex] [twitter][3] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:00,362][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][3] state: [CREATED]
[2012-01-16 09:15:00,365][TRACE][jmx ] [Dawson, Tex] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:00,366][TRACE][jmx ] [Dawson, Tex] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:00,366][TRACE][jmx ] [Dawson, Tex] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:00,367][TRACE][jmx ] [Dawson, Tex] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:00,367][TRACE][jmx ] [Dawson, Tex] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,367][TRACE][jmx ] [Dawson, Tex] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,367][TRACE][jmx ] [Dawson, Tex] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,368][TRACE][jmx ] [Dawson, Tex] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,368][TRACE][jmx ] [Dawson, Tex] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:00,368][TRACE][jmx ] [Dawson, Tex] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:00,369][TRACE][jmx ] [Dawson, Tex] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,369][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@708d8f67 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:15:00,370][TRACE][jmx ] [Dawson, Tex] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:00,371][TRACE][jmx ] [Dawson, Tex] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:00,372][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@66a96863 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:15:00,372][DEBUG][index.translog ] [Dawson, Tex] [twitter][3] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:00,374][DEBUG][indices.memory ] [Dawson, Tex] recalculating shard indexing buffer (reason=created_shard[twitter][3]), total is [101.9mb] with [1] active shards, each shard set to [101.9mb]
[2012-01-16 09:15:00,841][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][1] scheduling refresher every 1s
[2012-01-16 09:15:00,885][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][3] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:15:00,886][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][1] scheduling optimizer / merger every 1s
[2012-01-16 09:15:00,887][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][1] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:15:00,887][TRACE][index.shard.service ] [Dawson, Tex] [twitter][1] refresh with waitForOperations[false]
[2012-01-16 09:15:00,887][DEBUG][index.gateway ] [Dawson, Tex] [twitter][1] recovery completed from local, took [623ms]
index : files [0] with total_size [0b], took[38ms]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [620ms]
[2012-01-16 09:15:00,888][DEBUG][index.gateway ] [Dawson, Tex] [twitter][3] starting recovery from local ...
[2012-01-16 09:15:00,902][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:15:00,902][DEBUG][index.engine.robin ] [Dawson, Tex] [twitter][3] Starting engine
[2012-01-16 09:15:00,926][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:00,926][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:00,927][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [12], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[INITIALIZING]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:00,927][TRACE][indices.cluster ] [Dawson, Tex] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:15:00,928][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:00,928][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:00,929][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:00,929][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [13], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:00,933][TRACE][indices.cluster ] [Dawson, Tex] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:15:00,933][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:00,934][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:00,960][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][3] scheduling refresher every 1s
[2012-01-16 09:15:00,969][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][3] scheduling optimizer / merger every 1s
[2012-01-16 09:15:00,969][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][3] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:15:00,969][TRACE][index.shard.service ] [Dawson, Tex] [twitter][3] refresh with waitForOperations[false]
[2012-01-16 09:15:00,969][DEBUG][index.gateway ] [Dawson, Tex] [twitter][3] recovery completed from local, took [81ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [67ms]
[2012-01-16 09:15:00,969][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:15:01,283][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:01,283][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [14], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING]
---- unassigned
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:01,284][DEBUG][indices.cluster ] [Dawson, Tex] [twitter][0] creating shard
[2012-01-16 09:15:01,283][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] starting recovery to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], mark_as_relocated false
[2012-01-16 09:15:01,350][DEBUG][index.service ] [Dawson, Tex] [twitter] creating shard_id [0]
[2012-01-16 09:15:01,509][DEBUG][index.deletionpolicy ] [Dawson, Tex] [twitter][0] Using [keep_only_last] deletion policy
[2012-01-16 09:15:01,528][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase1] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:15:01,529][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase1] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:15:01,577][DEBUG][index.merge.policy ] [Dawson, Tex] [twitter][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:01,579][DEBUG][index.merge.scheduler ] [Dawson, Tex] [twitter][0] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:01,580][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][0] state: [CREATED]
[2012-01-16 09:15:01,614][TRACE][jmx ] [Dawson, Tex] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,614][TRACE][jmx ] [Dawson, Tex] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,615][TRACE][jmx ] [Dawson, Tex] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,615][TRACE][jmx ] [Dawson, Tex] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,615][TRACE][jmx ] [Dawson, Tex] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,615][TRACE][jmx ] [Dawson, Tex] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,615][TRACE][jmx ] [Dawson, Tex] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,676][TRACE][jmx ] [Dawson, Tex] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,684][TRACE][jmx ] [Dawson, Tex] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:01,684][TRACE][jmx ] [Dawson, Tex] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:01,684][TRACE][jmx ] [Dawson, Tex] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,684][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@779b3e under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:15:01,689][TRACE][jmx ] [Dawson, Tex] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:01,689][TRACE][jmx ] [Dawson, Tex] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:01,689][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@2fa8ecf4 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:15:01,690][DEBUG][index.translog ] [Dawson, Tex] [twitter][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:01,690][DEBUG][indices.memory ] [Dawson, Tex] recalculating shard indexing buffer (reason=created_shard[twitter][0]), total is [101.9mb] with [2] active shards, each shard set to [50.9mb]
[2012-01-16 09:15:01,699][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] starting recovery to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], mark_as_relocated false
[2012-01-16 09:15:01,699][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase1] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:15:01,700][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase1] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:15:01,749][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][0] state: [CREATED]->[RECOVERING], reason [from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:01,757][TRACE][indices.cluster ] [Dawson, Tex] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:15:01,757][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:01,759][TRACE][indices.recovery ] [Dawson, Tex] [twitter][0] starting recovery from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]
[2012-01-16 09:15:01,778][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:01,778][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:01,779][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [15], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:01,781][DEBUG][indices.cluster ] [Dawson, Tex] [twitter][2] creating shard
[2012-01-16 09:15:01,782][DEBUG][index.service ] [Dawson, Tex] [twitter] creating shard_id [2]
[2012-01-16 09:15:01,865][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase1] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: took [336ms]
[2012-01-16 09:15:01,894][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase2] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:15:02,117][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase1] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: took [418ms]
[2012-01-16 09:15:02,118][DEBUG][index.deletionpolicy ] [Dawson, Tex] [twitter][2] Using [keep_only_last] deletion policy
[2012-01-16 09:15:02,119][DEBUG][index.merge.policy ] [Dawson, Tex] [twitter][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:02,119][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase2] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:15:02,120][DEBUG][index.merge.scheduler ] [Dawson, Tex] [twitter][2] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:02,120][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][2] state: [CREATED]
[2012-01-16 09:15:02,123][TRACE][jmx ] [Dawson, Tex] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:02,123][TRACE][jmx ] [Dawson, Tex] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:02,124][TRACE][jmx ] [Dawson, Tex] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:02,124][TRACE][jmx ] [Dawson, Tex] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:02,124][TRACE][jmx ] [Dawson, Tex] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,124][TRACE][jmx ] [Dawson, Tex] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,124][TRACE][jmx ] [Dawson, Tex] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,125][TRACE][jmx ] [Dawson, Tex] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,125][TRACE][jmx ] [Dawson, Tex] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:02,125][TRACE][jmx ] [Dawson, Tex] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:02,125][TRACE][jmx ] [Dawson, Tex] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,126][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@f0fba68 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:15:02,127][TRACE][jmx ] [Dawson, Tex] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:02,127][TRACE][jmx ] [Dawson, Tex] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,127][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@12f53870 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:15:02,127][DEBUG][index.translog ] [Dawson, Tex] [twitter][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:02,128][DEBUG][indices.memory ] [Dawson, Tex] recalculating shard indexing buffer (reason=created_shard[twitter][2]), total is [101.9mb] with [3] active shards, each shard set to [33.9mb]
[2012-01-16 09:15:02,128][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][2] state: [CREATED]->[RECOVERING], reason [from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:02,131][TRACE][indices.recovery ] [Dawson, Tex] [twitter][2] starting recovery from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]
[2012-01-16 09:15:02,134][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:02,147][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase2] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: took [253ms]
[2012-01-16 09:15:02,148][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase3] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:15:02,150][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase2] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: took [31ms]
[2012-01-16 09:15:02,150][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase3] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:15:02,190][TRACE][indices.recovery ] [Dawson, Tex] [twitter][3] recovery [phase3] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: took [39ms]
[2012-01-16 09:15:02,190][TRACE][indices.recovery ] [Dawson, Tex] [twitter][1] recovery [phase3] to [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]: took [42ms]
[2012-01-16 09:15:02,205][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:02,215][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [16], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[INITIALIZING]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:02,224][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:02,225][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:02,228][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [17], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:15:02,232][DEBUG][index.engine.robin ] [Dawson, Tex] [twitter][0] Starting engine
[2012-01-16 09:15:02,232][DEBUG][index.engine.robin ] [Dawson, Tex] [twitter][2] Starting engine
[2012-01-16 09:15:02,247][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:02,269][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][0] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:15:02,270][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][0] scheduling refresher every 1s
[2012-01-16 09:15:02,270][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][0] scheduling optimizer / merger every 1s
[2012-01-16 09:15:02,271][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][2] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:15:02,271][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][2] scheduling refresher every 1s
[2012-01-16 09:15:02,271][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][2] scheduling optimizer / merger every 1s
[2012-01-16 09:15:02,281][DEBUG][indices.recovery ] [Dawson, Tex] [twitter][0] recovery completed from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], took[522ms]
phase1: recovered_files [1] with total_size of [58b], took [236ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [32ms]
phase3: recovered [0] transaction log operations, took [12ms]
[2012-01-16 09:15:02,282][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,282][DEBUG][indices.recovery ] [Dawson, Tex] [twitter][2] recovery completed from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], took[150ms]
phase1: recovered_files [1] with total_size of [58b], took [58ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [36ms]
phase3: recovered [0] transaction log operations, took [13ms]
[2012-01-16 09:15:02,287][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,315][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:02,316][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [18], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:15:02,335][TRACE][indices.cluster ] [Dawson, Tex] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:15:02,335][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [master [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:15:02,335][DEBUG][indices.cluster ] [Dawson, Tex] [twitter][4] creating shard
[2012-01-16 09:15:02,335][DEBUG][index.service ] [Dawson, Tex] [twitter] creating shard_id [4]
[2012-01-16 09:15:02,414][DEBUG][index.deletionpolicy ] [Dawson, Tex] [twitter][4] Using [keep_only_last] deletion policy
[2012-01-16 09:15:02,415][DEBUG][index.merge.policy ] [Dawson, Tex] [twitter][4] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:15:02,416][DEBUG][index.merge.scheduler ] [Dawson, Tex] [twitter][4] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:15:02,417][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][4] state: [CREATED]
[2012-01-16 09:15:02,419][TRACE][jmx ] [Dawson, Tex] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:02,419][TRACE][jmx ] [Dawson, Tex] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:02,419][TRACE][jmx ] [Dawson, Tex] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:02,419][TRACE][jmx ] [Dawson, Tex] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,420][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@5b7ed710 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:15:02,422][TRACE][jmx ] [Dawson, Tex] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:15:02,422][TRACE][jmx ] [Dawson, Tex] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:15:02,423][TRACE][jmx ] [Dawson, Tex] Registered org.elasticsearch.jmx.ResourceDMBean@328b1323 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:15:02,423][DEBUG][index.translog ] [Dawson, Tex] [twitter][4] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:15:02,424][DEBUG][indices.memory ] [Dawson, Tex] recalculating shard indexing buffer (reason=created_shard[twitter][4]), total is [101.9mb] with [4] active shards, each shard set to [25.4mb]
[2012-01-16 09:15:02,425][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][4] state: [CREATED]->[RECOVERING], reason [from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:02,425][TRACE][indices.recovery ] [Dawson, Tex] [twitter][4] starting recovery from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]
[2012-01-16 09:15:02,426][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:02,557][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:02,557][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [19], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:15:02,559][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:02,580][DEBUG][index.engine.robin ] [Dawson, Tex] [twitter][4] Starting engine
[2012-01-16 09:15:02,594][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][4] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:15:02,594][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][4] scheduling refresher every 1s
[2012-01-16 09:15:02,594][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][4] scheduling optimizer / merger every 1s
[2012-01-16 09:15:02,596][DEBUG][indices.recovery ] [Dawson, Tex] [twitter][4] recovery completed from [Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], took[44ms]
phase1: recovered_files [1] with total_size of [58b], took [20ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [12ms]
phase3: recovered [0] transaction log operations, took [7ms]
[2012-01-16 09:15:02,597][DEBUG][cluster.action.shard ] [Dawson, Tex] sending shard started for [twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:15:02,608][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:02,609][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [20], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
---- unassigned
[2012-01-16 09:15:02,610][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:15:02,840][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:02,841][TRACE][cluster.service ] [Dawson, Tex] cluster state updated:
version [21], source [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]
nodes:
[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]], master
[Dawson, Tex][wZH_t0kwSx-gG5blmuSBJQ][inet[/10.0.1.5:9300]], local
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
routing_nodes:
-----node_id[8tXWQ1MKTYCHwsZyprSdOA]
--------[twitter][0], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][1], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][2], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
--------[twitter][3], node[8tXWQ1MKTYCHwsZyprSdOA], [R], s[STARTED]
--------[twitter][4], node[8tXWQ1MKTYCHwsZyprSdOA], [P], s[STARTED]
-----node_id[wZH_t0kwSx-gG5blmuSBJQ]
--------[twitter][0], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][1], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][2], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
--------[twitter][3], node[wZH_t0kwSx-gG5blmuSBJQ], [P], s[STARTED]
--------[twitter][4], node[wZH_t0kwSx-gG5blmuSBJQ], [R], s[STARTED]
---- unassigned
[2012-01-16 09:15:02,845][DEBUG][indices.cluster ] [Dawson, Tex] [twitter] adding mapping [tweet], source [{"tweet":{"properties":{"message":{"type":"string"},"post_date":{"type":"date","format":"dateOptionalTime"},"user":{"type":"string"}}}}]
[2012-01-16 09:15:03,087][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(from master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
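For readability, the [tweet] mapping the node just applied above is the same source reindented (content unchanged from the log line):

{
  "tweet": {
    "properties": {
      "message":   { "type": "string" },
      "post_date": { "type": "date", "format": "dateOptionalTime" },
      "user":      { "type": "string" }
    }
  }
}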
[2012-01-16 09:15:14,252][INFO ][discovery.zen ] [Dawson, Tex] master_left [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], reason [shut_down]
[2012-01-16 09:15:14,316][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x6f603bdc, /10.0.1.5:62897 :> /10.0.1.5:9300]
[2012-01-16 09:15:14,315][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x675ee9e3, /10.0.1.5:62895 :> /10.0.1.5:9300]
[2012-01-16 09:15:14,320][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-master_failed ([Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]])]: execute
[2012-01-16 09:15:14,320][DEBUG][discovery.zen.fd ] [Dawson, Tex] [master] stopping fault detection against master [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]], reason [got elected as new master since master left (reason = shut_down)]
[2012-01-16 09:15:14,315][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x1ee99d0f, /10.0.1.5:62896 :> /10.0.1.5:9300]
[2012-01-16 09:15:14,340][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x181f327e, /10.0.1.5:62901 :> /10.0.1.5:9300]
[2012-01-16 09:15:14,341][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x05eb9fde, /10.0.1.5:62898 :> /10.0.1.5:9300]
[2012-01-16 09:15:14,355][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x4eb7cd92, /10.0.1.5:62900 :> /10.0.1.5:9300]
[2012-01-16 09:15:14,350][DEBUG][transport.netty ] [Dawson, Tex] Disconnected from [[Jade Dragon][8tXWQ1MKTYCHwsZyprSdOA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:14,341][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x5ad3c69c, /10.0.1.5:62899 :> /10.0.1.5:9300]
[2012-01-16 09:15:28,148][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x489a44f1, /127.0.0.1:62997 => /127.0.0.1:9300]
[2012-01-16 09:15:31,148][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x07dc4cd9, /10.0.1.5:62999 => /10.0.1.5:9300]
[2012-01-16 09:15:31,149][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x10393e97, /10.0.1.5:63000 => /10.0.1.5:9300]
[2012-01-16 09:15:31,266][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x489a44f1, /127.0.0.1:62997 :> /127.0.0.1:9300]
[2012-01-16 09:15:31,266][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x56873b9f, /10.0.1.5:63001 => /10.0.1.5:9300]
[2012-01-16 09:15:31,267][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x34baf4ae, /10.0.1.5:63002 => /10.0.1.5:9300]
[2012-01-16 09:15:31,267][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x6fd3633c, /10.0.1.5:63003 => /10.0.1.5:9300]
[2012-01-16 09:15:31,267][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x6d5998cb, /10.0.1.5:63004 => /10.0.1.5:9300]
[2012-01-16 09:15:31,268][TRACE][transport.netty ] [Dawson, Tex] channel opened: [id: 0x0f58046e, /10.0.1.5:63005 => /10.0.1.5:9300]
[2012-01-16 09:15:31,271][DEBUG][transport.netty ] [Dawson, Tex] Connected to node [[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]]
[2012-01-16 09:15:31,273][DEBUG][cluster.service ] [Dawson, Tex] processing [zen-disco-receive(join from node[[Living Totem][2OkEhEd_RV2STF1a7BdfEw][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:15:59,540][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: stopping ...
[2012-01-16 09:15:59,647][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:15:59,647][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:15:59,648][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:15:59,648][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:15:59,648][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][4] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:59,648][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][2] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:59,648][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:15:59,649][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:15:59,649][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][1] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:59,648][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:15:59,650][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:15:59,650][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][3] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:59,648][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:15:59,651][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:15:59,651][DEBUG][index.shard.service ] [Dawson, Tex] [twitter][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:15:59,681][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:15:59,691][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x66e90097, /10.0.1.5:62903 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,693][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x4b25ee49, /10.0.1.5:62904 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,694][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x1d9dbdc4, /10.0.1.5:62908 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,695][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x6e247d4a, /10.0.1.5:62907 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,700][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x6bb5eba4, /10.0.1.5:62902 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,701][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x219a6087, /10.0.1.5:62906 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,701][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x273f212a, /10.0.1.5:62905 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,705][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x6d5998cb, /10.0.1.5:63004 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,705][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x10393e97, /10.0.1.5:63000 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,706][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x56873b9f, /10.0.1.5:63001 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,706][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x34baf4ae, /10.0.1.5:63002 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,739][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x0f58046e, /10.0.1.5:63005 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,740][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x6fd3633c, /10.0.1.5:63003 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,740][TRACE][transport.netty ] [Dawson, Tex] channel closed: [id: 0x07dc4cd9, /10.0.1.5:62999 :> /10.0.1.5:9300]
[2012-01-16 09:15:59,747][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:15:59,747][TRACE][jmx ] [Dawson, Tex] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:15:59,747][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: stopped
[2012-01-16 09:15:59,748][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: closing ...
[2012-01-16 09:15:59,900][TRACE][node ] [Dawson, Tex] Close times for each service:
StopWatch 'node_close': running time = 36ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00033 092% indices
00000 000% routing
00000 000% cluster
00001 003% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 003% node_cache
00000 000% script
00001 003% thread_pool
00000 000% thread_pool_force_shutdown
[2012-01-16 09:15:59,901][INFO ][node ] [Dawson, Tex] {0.18.7}[11933]: closed
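The per-shard translog and indexing-buffer values that keep appearing in the shard-creation lines above (interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m], and the [101.9mb] index buffer split across active shards) are the defaults. If they ever needed tuning, the elasticsearch.yml entries would look roughly like the sketch below; the key names are assumed from the 0.18.x settings and the values are simply copied from the log:

# sketch only -- key names assumed for 0.18.x, values copied from the log above
index.translog.interval: 5s
index.translog.flush_threshold_ops: 5000
index.translog.flush_threshold_size: 200mb
index.translog.flush_threshold_period: 30m
indices.memory.index_buffer_size: 10%    # resolves to the 101.9mb total shown above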
[2012-01-16 09:17:19,594][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: initializing ...
[2012-01-16 09:17:19,608][INFO ][plugins ] [Major Mapleleaf] loaded [], sites []
[2012-01-16 09:17:20,785][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [cached], type [cached], keep_alive [30s]
[2012-01-16 09:17:20,789][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-01-16 09:17:20,789][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-01-16 09:17:20,789][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-01-16 09:17:20,789][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [management], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:17:20,793][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
[2012-01-16 09:17:20,793][DEBUG][threadpool ] [Major Mapleleaf] creating thread_pool [snapshot], type [scaling], min [1], size [10], keep_alive [5m]
[2012-01-16 09:17:20,805][DEBUG][transport.netty ] [Major Mapleleaf] using worker_count[4], port[9300], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/4/1]
[2012-01-16 09:17:20,825][DEBUG][discovery.zen.ping.unicast] [Major Mapleleaf] using initial hosts [localhost:9301], with concurrent_connects [10]
[2012-01-16 09:17:20,830][DEBUG][discovery.zen ] [Major Mapleleaf] using ping.timeout [3s]
[2012-01-16 09:17:20,835][DEBUG][discovery.zen.elect ] [Major Mapleleaf] using minimum_master_nodes [-1]
[2012-01-16 09:17:20,836][DEBUG][discovery.zen.fd ] [Major Mapleleaf] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2012-01-16 09:17:20,840][DEBUG][discovery.zen.fd ] [Major Mapleleaf] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
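Worth noting from the lines above: master election runs with minimum_master_nodes [-1], i.e. no quorum is enforced, and master/node fault detection uses the 1s/30s/3 defaults. If those were to be pinned down explicitly, the elasticsearch.yml entries would be along these lines (a sketch only; the values are illustrative or copied from the log, not a recommendation for this two-node setup):

# sketch only -- illustrative values; fd values match the defaults logged above
discovery.zen.minimum_master_nodes: 2
discovery.zen.fd.ping_interval: 1s
discovery.zen.fd.ping_timeout: 30s
discovery.zen.fd.ping_retries: 3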
[2012-01-16 09:17:20,863][DEBUG][monitor.jvm ] [Major Mapleleaf] enabled [false], last_gc_enabled [false], interval [1s], gc_threshold [5s]
[2012-01-16 09:17:21,374][DEBUG][monitor.os ] [Major Mapleleaf] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@54c9f997] with refresh_interval [1s]
[2012-01-16 09:17:21,379][DEBUG][monitor.process ] [Major Mapleleaf] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@71ce5e7a] with refresh_interval [1s]
[2012-01-16 09:17:21,384][DEBUG][monitor.jvm ] [Major Mapleleaf] Using refresh_interval [1s]
[2012-01-16 09:17:21,384][DEBUG][monitor.network ] [Major Mapleleaf] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@7878529d] with refresh_interval [5s]
[2012-01-16 09:17:21,394][DEBUG][monitor.network ] [Major Mapleleaf] net_info
host [tamas-nemeths-powerbook-g4-12.local]
vnic1 display_name [vnic1]
address [/10.37.129.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
vnic0 display_name [vnic0]
address [/10.211.55.2]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en1 display_name [en1]
address [/fe80:0:0:0:224:36ff:feb2:fe59%5] [/10.0.1.5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/0:0:0:0:0:0:0:1] [/fe80:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[2012-01-16 09:17:21,414][TRACE][monitor.network ] [Major Mapleleaf] ifconfig
lo0 Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16384 Metric:0
RX packets:22507 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:22507 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3971608 (3.8M) TX bytes:3971608 (3.8M)
en0 Link encap:Ethernet HWaddr 00:23:DF:9D:EC:72
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:3082 (3.0K)
en1 Link encap:Ethernet HWaddr 00:24:36:B2:FE:59
inet addr:10.0.1.5 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:0
RX packets:2841175 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:1507054 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:3828128923 (3.6G) TX bytes:117930115 (112M)
p2p0 Link encap:Ethernet HWaddr 02:24:36:B2:FE:59
inet addr:0.0.0.0 Bcast:0.0.0.0 Mask:0.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:2304 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic0 Link encap:Ethernet HWaddr 00:1C:42:00:00:08
inet addr:10.211.55.2 Bcast:10.211.55.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
vnic1 Link encap:Ethernet HWaddr 00:1C:42:00:00:09
inet addr:10.37.129.2 Bcast:10.37.129.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:0
RX packets:0 errors:0 dropped:0 overruns:-1 frame:-1
TX packets:0 errors:0 dropped:-1 overruns:-1 carrier:-1
collisions:0
RX bytes:0 ( 0 ) TX bytes:0 ( 0 )
[2012-01-16 09:17:21,417][TRACE][env ] [Major Mapleleaf] obtaining node lock on /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0 ...
[2012-01-16 09:17:21,445][DEBUG][env ] [Major Mapleleaf] using node location [[/Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0]], local_node_id [0]
[2012-01-16 09:17:21,445][TRACE][env ] [Major Mapleleaf] node data locations details:
-> /Users/treff7es/downloads/elasticsearch-0.18.7/data/elasticsearch/nodes/0, free_space [221.7gb], usable_space [221.4gb]
[2012-01-16 09:17:21,741][DEBUG][cache.memory ] [Major Mapleleaf] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2012-01-16 09:17:21,754][DEBUG][cluster.routing.allocation.decider] [Major Mapleleaf] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2012-01-16 09:17:21,755][DEBUG][cluster.routing.allocation.decider] [Major Mapleleaf] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2012-01-16 09:17:21,756][DEBUG][cluster.routing.allocation.decider] [Major Mapleleaf] using [cluster_concurrent_rebalance] with [2]
[2012-01-16 09:17:21,759][DEBUG][gateway.local ] [Major Mapleleaf] using initial_shards [quorum], list_timeout [30s]
[2012-01-16 09:17:21,782][DEBUG][indices.recovery ] [Major Mapleleaf] using max_size_per_sec[0b], concurrent_streams [5], file_chunk_size [100kb], translog_size [100kb], translog_ops [1000], and compress [true]
[2012-01-16 09:17:21,973][TRACE][jmx ] [Major Mapleleaf] Attribute TotalNumberOfRequests[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:21,974][TRACE][jmx ] [Major Mapleleaf] Attribute BoundAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,974][TRACE][jmx ] [Major Mapleleaf] Attribute PublishAddress[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,977][TRACE][jmx ] [Major Mapleleaf] Attribute TcpNoDelay[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:21,977][TRACE][jmx ] [Major Mapleleaf] Attribute NumberOfOutboundConnections[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute Port[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute WorkerCount[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute TcpReceiveBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute ReuseAddress[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute ConnectTimeout[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute TcpKeepAlive[r=true,w=false,is=false,type=java.lang.Boolean]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute PublishHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute TcpSendBufferSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,978][TRACE][jmx ] [Major Mapleleaf] Attribute BindHost[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:21,979][DEBUG][http.netty ] [Major Mapleleaf] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb]
[2012-01-16 09:17:21,985][DEBUG][indices.memory ] [Major Mapleleaf] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2012-01-16 09:17:21,995][DEBUG][indices.cache.filter ] [Major Mapleleaf] using [node] filter cache with size [20%], actual_size [203.9mb]
[2012-01-16 09:17:22,083][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: initialized
[2012-01-16 09:17:22,083][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: starting ...
[2012-01-16 09:17:22,117][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[2012-01-16 09:17:22,197][DEBUG][transport.netty ] [Major Mapleleaf] Bound to address [/0.0.0.0:9300]
[2012-01-16 09:17:22,200][INFO ][transport ] [Major Mapleleaf] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.1.5:9300]}
[2012-01-16 09:17:22,275][TRACE][discovery ] [Major Mapleleaf] waiting for 30s for the initial state to be set by the discovery
[2012-01-16 09:17:22,300][TRACE][discovery.zen.ping.unicast] [Major Mapleleaf] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:17:22,301][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x22e38fca]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
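Note on the two traces above: the ConnectTransportException wraps the real cause on the last line, java.net.ConnectException: Connection refused. The unicast discovery target for this node ([#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]) has nothing listening on it yet, so every ping fails immediately and gets retried, which is why the same pair of stack traces repeats below until the other node is started. A minimal probe sketch (not part of the original logs; plain Python standard library, hypothetical helper name port_open) to confirm whether anything is accepting connections on that port:

    # Probe the unicast discovery target that the pings above keep failing on.
    # Host and port are taken from the log line above; adjust as needed.
    import socket

    def port_open(host="127.0.0.1", port=9301, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # Connection refused / timed out: nothing is listening yet.
            return False

    if __name__ == "__main__":
        print("9301 reachable:", port_open())

Once the second node binds its transport port the pings stop failing, which is what happens further down when [Jester] joins.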
[2012-01-16 09:17:23,783][TRACE][discovery.zen.ping.unicast] [Major Mapleleaf] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:17:23,783][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x49b9ef36]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:17:25,286][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x312cfd62]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:17:25,295][TRACE][discovery.zen.ping.unicast] [Major Mapleleaf] [1] failed to connect to [#zen_unicast_1#][inet[localhost/127.0.0.1:9301]]
org.elasticsearch.transport.ConnectTransportException: [][inet[localhost/127.0.0.1:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:533)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:499)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:478)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:128)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:273)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:17:25,298][DEBUG][discovery.zen ] [Major Mapleleaf] ping responses: {none}
[2012-01-16 09:17:25,301][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-join (elected_as_master)]: execute
[2012-01-16 09:17:25,302][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [1], source [zen-disco-join (elected_as_master)]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:17:25,303][INFO ][cluster.service ] [Major Mapleleaf] new_master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], reason: zen-disco-join (elected_as_master)
[2012-01-16 09:17:25,307][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x44af17c7, /10.0.1.5:63033 => /10.0.1.5:9300]
[2012-01-16 09:17:25,316][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x1c2d5534, /10.0.1.5:63034 => /10.0.1.5:9300]
[2012-01-16 09:17:25,318][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x521ecfeb, /10.0.1.5:63035 => /10.0.1.5:9300]
[2012-01-16 09:17:25,321][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x52352d87, /10.0.1.5:63036 => /10.0.1.5:9300]
[2012-01-16 09:17:25,325][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x6fa8bd74, /10.0.1.5:63037 => /10.0.1.5:9300]
[2012-01-16 09:17:25,326][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x479d4f72, /10.0.1.5:63038 => /10.0.1.5:9300]
[2012-01-16 09:17:25,326][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x28caea19, /10.0.1.5:63039 => /10.0.1.5:9300]
[2012-01-16 09:17:25,341][DEBUG][transport.netty ] [Major Mapleleaf] Connected to node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:25,345][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[2012-01-16 09:17:25,346][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:25,346][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:25,345][TRACE][discovery ] [Major Mapleleaf] initial state set from discovery
[2012-01-16 09:17:25,346][INFO ][discovery ] [Major Mapleleaf] elasticsearch/HkLh96lnR4yPG1s_DJCxOA
[2012-01-16 09:17:25,347][DEBUG][gateway.local ] [Major Mapleleaf] [find_latest_state]: no metadata state loaded
[2012-01-16 09:17:25,347][DEBUG][gateway.local ] [Major Mapleleaf] [find_latest_state]: no started shards loaded
[2012-01-16 09:17:25,374][DEBUG][gateway.local ] [Major Mapleleaf] no state elected
[2012-01-16 09:17:25,390][DEBUG][cluster.service ] [Major Mapleleaf] processing [local-gateway-elected-state]: execute
[2012-01-16 09:17:25,402][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [1], source [local-gateway-elected-state]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:17:25,405][INFO ][http ] [Major Mapleleaf] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/10.0.1.5:9200]}
[2012-01-16 09:17:25,406][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@5b31fd9 under org.elasticsearch:service=transport
[2012-01-16 09:17:25,407][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@32efe27b under org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:17:25,407][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: started
[2012-01-16 09:17:25,406][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:25,408][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:25,458][INFO ][gateway ] [Major Mapleleaf] recovered [0] indices into cluster_state
[2012-01-16 09:17:25,458][DEBUG][cluster.service ] [Major Mapleleaf] processing [local-gateway-elected-state]: done applying updated cluster_state
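At this point the node has elected itself master (discovery saw no ping responses), HTTP is bound on 9200, and the local gateway recovered zero indices into the cluster state. A quick outside check of what the master sees is the cluster health API; a minimal sketch, assuming the stock _cluster/health response fields and the publish address logged above:

    # Query cluster health on the HTTP port bound above (9200) and report how
    # many nodes the elected master currently knows about.
    import json
    import urllib.request

    with urllib.request.urlopen("http://localhost:9200/_cluster/health") as resp:
        health = json.load(resp)

    # Immediately after this point in the log, number_of_nodes is 1.
    print(health["number_of_nodes"], health["status"])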
[2012-01-16 09:17:27,688][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x514f2bd7, /127.0.0.1:63042 => /127.0.0.1:9300]
[2012-01-16 09:17:30,687][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x514f2bd7, /127.0.0.1:63042 :> /127.0.0.1:9300]
[2012-01-16 09:17:30,691][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x7e6baf24, /10.0.1.5:63043 => /10.0.1.5:9300]
[2012-01-16 09:17:30,692][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x4fb7a553, /10.0.1.5:63044 => /10.0.1.5:9300]
[2012-01-16 09:17:30,692][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x21c71508, /10.0.1.5:63045 => /10.0.1.5:9300]
[2012-01-16 09:17:30,693][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x1535d18b, /10.0.1.5:63046 => /10.0.1.5:9300]
[2012-01-16 09:17:30,693][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x0050078e, /10.0.1.5:63047 => /10.0.1.5:9300]
[2012-01-16 09:17:30,694][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x061ffbcb, /10.0.1.5:63048 => /10.0.1.5:9300]
[2012-01-16 09:17:30,694][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x1dcbcf91, /10.0.1.5:63049 => /10.0.1.5:9300]
[2012-01-16 09:17:30,732][DEBUG][transport.netty ] [Major Mapleleaf] Connected to node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:30,733][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-receive(join from node[[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:17:30,734][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [2], source [zen-disco-receive(join from node[[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]])]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
routing_nodes:
---- unassigned
[2012-01-16 09:17:30,734][INFO ][cluster.service ] [Major Mapleleaf] added {[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]],}, reason: zen-disco-receive(join from node[[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]])
[2012-01-16 09:17:30,743][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:30,748][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:30,744][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-receive(join from node[[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]])]: done applying updated cluster_state
[2012-01-16 09:17:35,345][DEBUG][cluster.service ] [Major Mapleleaf] processing [routing-table-updater]: execute
[2012-01-16 09:17:35,346][DEBUG][cluster.service ] [Major Mapleleaf] processing [routing-table-updater]: no change in cluster_state
[2012-01-16 09:17:41,628][TRACE][http.netty ] [Major Mapleleaf] channel opened: [id: 0x7f4c352e, /127.0.0.1:63064 => /127.0.0.1:9200]
[2012-01-16 09:17:41,784][DEBUG][cluster.service ] [Major Mapleleaf] processing [create-index [twitter], cause [auto(index api)]]: execute
[2012-01-16 09:17:41,786][DEBUG][indices ] [Major Mapleleaf] creating Index [twitter], shards [5]/[1]
[2012-01-16 09:17:42,020][DEBUG][index.mapper ] [Major Mapleleaf] [twitter] using dynamic[true], default mapping: location[null] and source[{
"_default_" : {
}
}]
[2012-01-16 09:17:42,021][DEBUG][index.cache.field.data.resident] [Major Mapleleaf] [twitter] using [resident] field cache with max_size [-1], expire [null]
[2012-01-16 09:17:42,024][DEBUG][index.cache ] [Major Mapleleaf] [twitter] Using stats.refresh_interval [1s]
[2012-01-16 09:17:42,036][TRACE][jmx ] [Major Mapleleaf] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,037][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@3b3e3940 under org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:17:42,041][INFO ][cluster.metadata ] [Major Mapleleaf] [twitter] creating index, cause [auto(index api)], shards [5]/[1], mappings []
[2012-01-16 09:17:42,061][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [3], source [create-index [twitter], cause [auto(index api)]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
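The cause [auto(index api)] in the create-index entry above means the twitter index was not created explicitly: a document-index request hit an index that did not exist yet, and it was auto-created with the defaults visible here (5 shards, 1 replica). A sketch of the kind of request that triggers this, not taken from the gist; the document body and id are made up:

    # Index a single document into an index that does not exist yet; with
    # automatic index creation left at its default, this creates "twitter".
    import json
    import urllib.request

    doc = {"user": "kimchy", "message": "trying out Elastic Search"}
    req = urllib.request.Request(
        "http://localhost:9200/twitter/tweet/1",
        data=json.dumps(doc).encode("utf-8"),
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))

The shard creation and gateway recovery entries that follow are the direct result of this new, empty index being allocated across the two nodes.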
[2012-01-16 09:17:42,062][DEBUG][indices.cluster ] [Major Mapleleaf] [twitter][0] creating shard
[2012-01-16 09:17:42,062][DEBUG][index.service ] [Major Mapleleaf] [twitter] creating shard_id [0]
[2012-01-16 09:17:42,062][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:42,064][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:42,200][DEBUG][index.deletionpolicy ] [Major Mapleleaf] [twitter][0] Using [keep_only_last] deletion policy
[2012-01-16 09:17:42,201][DEBUG][index.merge.policy ] [Major Mapleleaf] [twitter][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:42,201][DEBUG][index.merge.scheduler ] [Major Mapleleaf] [twitter][0] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:42,204][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][0] state: [CREATED]
[2012-01-16 09:17:42,208][TRACE][jmx ] [Major Mapleleaf] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,208][TRACE][jmx ] [Major Mapleleaf] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,208][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,208][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,208][TRACE][jmx ] [Major Mapleleaf] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,208][TRACE][jmx ] [Major Mapleleaf] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,209][TRACE][jmx ] [Major Mapleleaf] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,209][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,209][TRACE][jmx ] [Major Mapleleaf] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:42,209][TRACE][jmx ] [Major Mapleleaf] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,209][TRACE][jmx ] [Major Mapleleaf] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,209][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@66a96863 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:17:42,210][TRACE][jmx ] [Major Mapleleaf] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,210][TRACE][jmx ] [Major Mapleleaf] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,210][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@764b2c0 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:17:42,211][DEBUG][index.translog ] [Major Mapleleaf] [twitter][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:42,215][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][0] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:17:42,216][DEBUG][index.gateway ] [Major Mapleleaf] [twitter][0] starting recovery from local ...
[2012-01-16 09:17:42,225][DEBUG][index.engine.robin ] [Major Mapleleaf] [twitter][0] Starting engine
[2012-01-16 09:17:42,300][DEBUG][indices.cluster ] [Major Mapleleaf] [twitter][2] creating shard
[2012-01-16 09:17:42,300][DEBUG][index.service ] [Major Mapleleaf] [twitter] creating shard_id [2]
[2012-01-16 09:17:42,363][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][0] scheduling refresher every 1s
[2012-01-16 09:17:42,365][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][0] scheduling optimizer / merger every 1s
[2012-01-16 09:17:42,366][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][0] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:17:42,366][TRACE][index.shard.service ] [Major Mapleleaf] [twitter][0] refresh with waitForOperations[false]
[2012-01-16 09:17:42,366][DEBUG][index.gateway ] [Major Mapleleaf] [twitter][0] recovery completed from local, took [150ms]
index : files [0] with total_size [0b], took[8ms]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [149ms]
[2012-01-16 09:17:42,367][DEBUG][cluster.action.shard ] [Major Mapleleaf] sending shard started for [twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:42,367][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:42,389][DEBUG][index.deletionpolicy ] [Major Mapleleaf] [twitter][2] Using [keep_only_last] deletion policy
[2012-01-16 09:17:42,390][DEBUG][index.merge.policy ] [Major Mapleleaf] [twitter][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:42,390][DEBUG][index.merge.scheduler ] [Major Mapleleaf] [twitter][2] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:42,392][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][2] state: [CREATED]
[2012-01-16 09:17:42,394][TRACE][jmx ] [Major Mapleleaf] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:42,395][TRACE][jmx ] [Major Mapleleaf] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,396][TRACE][jmx ] [Major Mapleleaf] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,396][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@4a412f4b under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:17:42,397][TRACE][jmx ] [Major Mapleleaf] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,397][TRACE][jmx ] [Major Mapleleaf] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,397][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@6e8af0b0 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:17:42,398][DEBUG][index.translog ] [Major Mapleleaf] [twitter][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:42,398][DEBUG][indices.memory ] [Major Mapleleaf] recalculating shard indexing buffer (reason=created_shard[twitter][2]), total is [101.9mb] with [1] active shards, each shard set to [101.9mb]
[2012-01-16 09:17:42,399][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][2] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:17:42,399][DEBUG][indices.cluster ] [Major Mapleleaf] [twitter][4] creating shard
[2012-01-16 09:17:42,400][DEBUG][index.service ] [Major Mapleleaf] [twitter] creating shard_id [4]
[2012-01-16 09:17:42,403][DEBUG][index.gateway ] [Major Mapleleaf] [twitter][2] starting recovery from local ...
[2012-01-16 09:17:42,403][DEBUG][index.engine.robin ] [Major Mapleleaf] [twitter][2] Starting engine
[2012-01-16 09:17:42,410][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][2] scheduling refresher every 1s
[2012-01-16 09:17:42,410][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][2] scheduling optimizer / merger every 1s
[2012-01-16 09:17:42,410][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][2] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:17:42,411][TRACE][index.shard.service ] [Major Mapleleaf] [twitter][2] refresh with waitForOperations[false]
[2012-01-16 09:17:42,411][DEBUG][index.gateway ] [Major Mapleleaf] [twitter][2] recovery completed from local, took [8ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [8ms]
[2012-01-16 09:17:42,411][DEBUG][cluster.action.shard ] [Major Mapleleaf] sending shard started for [twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:42,412][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:42,478][DEBUG][index.deletionpolicy ] [Major Mapleleaf] [twitter][4] Using [keep_only_last] deletion policy
[2012-01-16 09:17:42,478][DEBUG][index.merge.policy ] [Major Mapleleaf] [twitter][4] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:42,478][DEBUG][index.merge.scheduler ] [Major Mapleleaf] [twitter][4] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:42,479][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][4] state: [CREATED]
[2012-01-16 09:17:42,481][TRACE][jmx ] [Major Mapleleaf] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,481][TRACE][jmx ] [Major Mapleleaf] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,481][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,481][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,481][TRACE][jmx ] [Major Mapleleaf] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,482][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@6bd3e069 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:17:42,483][TRACE][jmx ] [Major Mapleleaf] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:42,483][TRACE][jmx ] [Major Mapleleaf] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:42,483][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@394300c8 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:17:42,483][DEBUG][index.translog ] [Major Mapleleaf] [twitter][4] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:42,484][DEBUG][indices.memory ] [Major Mapleleaf] recalculating shard indexing buffer (reason=created_shard[twitter][4]), total is [101.9mb] with [2] active shards, each shard set to [50.9mb]
[2012-01-16 09:17:42,484][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][4] state: [CREATED]->[RECOVERING], reason [from gateway]
[2012-01-16 09:17:42,494][DEBUG][index.gateway ] [Major Mapleleaf] [twitter][4] starting recovery from local ...
[2012-01-16 09:17:42,494][DEBUG][index.engine.robin ] [Major Mapleleaf] [twitter][4] Starting engine
[2012-01-16 09:17:42,498][DEBUG][cluster.service ] [Major Mapleleaf] processing [create-index [twitter], cause [auto(index api)]]: done applying updated cluster_state
[2012-01-16 09:17:42,498][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:17:42,498][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], [twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:17:42,500][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [4], source [shard-started ([twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:42,502][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:42,509][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:42,513][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2012-01-16 09:17:42,513][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:17:42,513][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2012-01-16 09:17:42,558][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][4] scheduling refresher every 1s
[2012-01-16 09:17:42,558][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][4] scheduling optimizer / merger every 1s
[2012-01-16 09:17:42,558][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][4] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2012-01-16 09:17:42,558][TRACE][index.shard.service ] [Major Mapleleaf] [twitter][4] refresh with waitForOperations[false]
[2012-01-16 09:17:42,558][DEBUG][index.gateway ] [Major Mapleleaf] [twitter][4] recovery completed from local, took [64ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
translog : number_of_operations [0], took [64ms]
[2012-01-16 09:17:42,558][DEBUG][cluster.action.shard ] [Major Mapleleaf] sending shard started for [twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:42,559][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:42,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:17:42,560][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:17:42,569][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [5], source [shard-started ([twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][1]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][3]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]
---- unassigned
--------[twitter][0], node[null], [R], s[UNASSIGNED]
--------[twitter][1], node[null], [R], s[UNASSIGNED]
--------[twitter][2], node[null], [R], s[UNASSIGNED]
--------[twitter][3], node[null], [R], s[UNASSIGNED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:42,577][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:42,583][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:42,589][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2012-01-16 09:17:43,162][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:43,172][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2012-01-16 09:17:43,184][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:17:43,185][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:17:43,243][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,245][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,250][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,276][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,294][TRACE][gateway.local ] [Major Mapleleaf] [twitter][0], node[null], [R], s[UNASSIGNED]: checking node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,295][TRACE][gateway.local ] [Major Mapleleaf] [twitter][0], node[null], [R], s[UNASSIGNED]: checking node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,317][TRACE][gateway.local ] [Major Mapleleaf] [twitter][1], node[null], [R], s[UNASSIGNED]: checking node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,317][TRACE][gateway.local ] [Major Mapleleaf] [twitter][1], node[null], [R], s[UNASSIGNED]: checking node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,328][TRACE][gateway.local ] [Major Mapleleaf] [twitter][2], node[null], [R], s[UNASSIGNED]: checking node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,328][TRACE][gateway.local ] [Major Mapleleaf] [twitter][2], node[null], [R], s[UNASSIGNED]: checking node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,331][TRACE][gateway.local ] [Major Mapleleaf] [twitter][3], node[null], [R], s[UNASSIGNED]: checking node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,331][TRACE][gateway.local ] [Major Mapleleaf] [twitter][3], node[null], [R], s[UNASSIGNED]: checking node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,335][TRACE][gateway.local ] [Major Mapleleaf] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,335][TRACE][gateway.local ] [Major Mapleleaf] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,340][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [6], source [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [after recovery from gateway]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
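Cluster state version [6] above is where the replicas get homes: each replica is assigned to the node that does not hold the primary, and the entries below show this node creating shards [1] and [3] locally and recovering them over the wire from [Jester]. A minimal sketch, assuming the stock _cluster/health parameters, for waiting until that peer recovery finishes:

    # Block until every shard (primaries and replicas) is STARTED, i.e. the
    # cluster goes green; the timeout value is illustrative.
    import json
    import urllib.request

    url = "http://localhost:9200/_cluster/health?wait_for_status=green&timeout=30s"
    with urllib.request.urlopen(url) as resp:
        print(json.load(resp)["status"])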
[2012-01-16 09:17:43,344][DEBUG][indices.cluster ] [Major Mapleleaf] [twitter][1] creating shard
[2012-01-16 09:17:43,345][DEBUG][index.service ] [Major Mapleleaf] [twitter] creating shard_id [1]
[2012-01-16 09:17:43,354][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:43,356][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:43,403][DEBUG][index.deletionpolicy ] [Major Mapleleaf] [twitter][1] Using [keep_only_last] deletion policy
[2012-01-16 09:17:43,404][DEBUG][index.merge.policy ] [Major Mapleleaf] [twitter][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:43,404][DEBUG][index.merge.scheduler ] [Major Mapleleaf] [twitter][1] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:43,405][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][1] state: [CREATED]
[2012-01-16 09:17:43,407][TRACE][jmx ] [Major Mapleleaf] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,407][TRACE][jmx ] [Major Mapleleaf] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,407][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,407][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,407][TRACE][jmx ] [Major Mapleleaf] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,408][TRACE][jmx ] [Major Mapleleaf] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,408][TRACE][jmx ] [Major Mapleleaf] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,408][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,408][TRACE][jmx ] [Major Mapleleaf] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:43,408][TRACE][jmx ] [Major Mapleleaf] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,408][TRACE][jmx ] [Major Mapleleaf] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,409][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@31c248a under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:17:43,411][TRACE][jmx ] [Major Mapleleaf] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,412][TRACE][jmx ] [Major Mapleleaf] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,412][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@798a62f6 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:17:43,412][DEBUG][index.translog ] [Major Mapleleaf] [twitter][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:43,413][DEBUG][indices.memory ] [Major Mapleleaf] recalculating shard indexing buffer (reason=created_shard[twitter][1]), total is [101.9mb] with [3] active shards, each shard set to [33.9mb]
[2012-01-16 09:17:43,423][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][1] state: [CREATED]->[RECOVERING], reason [from [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,423][DEBUG][indices.cluster ] [Major Mapleleaf] [twitter][3] creating shard
[2012-01-16 09:17:43,423][DEBUG][index.service ] [Major Mapleleaf] [twitter] creating shard_id [3]
[2012-01-16 09:17:43,440][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][1] starting recovery from [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
[2012-01-16 09:17:43,482][DEBUG][index.deletionpolicy ] [Major Mapleleaf] [twitter][3] Using [keep_only_last] deletion policy
[2012-01-16 09:17:43,483][DEBUG][index.merge.policy ] [Major Mapleleaf] [twitter][3] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2012-01-16 09:17:43,483][DEBUG][index.merge.scheduler ] [Major Mapleleaf] [twitter][3] using [concurrent] merge scheduler with max_thread_count[1]
[2012-01-16 09:17:43,484][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][3] state: [CREATED]
[2012-01-16 09:17:43,490][TRACE][jmx ] [Major Mapleleaf] Attribute MaxDoc[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,490][TRACE][jmx ] [Major Mapleleaf] Attribute NumDocs[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,490][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogId[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,490][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogNumberOfOperations[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,490][TRACE][jmx ] [Major Mapleleaf] Attribute Index[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,491][TRACE][jmx ] [Major Mapleleaf] Attribute State[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,491][TRACE][jmx ] [Major Mapleleaf] Attribute RoutingState[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,491][TRACE][jmx ] [Major Mapleleaf] Attribute TranslogSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,491][TRACE][jmx ] [Major Mapleleaf] Attribute Primary[r=true,w=false,is=true,type=boolean]
[2012-01-16 09:17:43,491][TRACE][jmx ] [Major Mapleleaf] Attribute ShardId[r=true,w=false,is=false,type=int]
[2012-01-16 09:17:43,491][TRACE][jmx ] [Major Mapleleaf] Attribute StoreSize[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,498][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@8244f74 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:17:43,499][TRACE][jmx ] [Major Mapleleaf] Attribute SizeInBytes[r=true,w=false,is=false,type=long]
[2012-01-16 09:17:43,499][TRACE][jmx ] [Major Mapleleaf] Attribute Size[r=true,w=false,is=false,type=java.lang.String]
[2012-01-16 09:17:43,499][TRACE][jmx ] [Major Mapleleaf] Registered org.elasticsearch.jmx.ResourceDMBean@10393e97 under org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:17:43,516][DEBUG][index.translog ] [Major Mapleleaf] [twitter][3] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2012-01-16 09:17:43,517][DEBUG][indices.memory ] [Major Mapleleaf] recalculating shard indexing buffer (reason=created_shard[twitter][3]), total is [101.9mb] with [4] active shards, each shard set to [25.4mb]
[2012-01-16 09:17:43,517][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][3] state: [CREATED]->[RECOVERING], reason [from [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,518][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][3] starting recovery from [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
[2012-01-16 09:17:43,559][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2012-01-16 09:17:43,559][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2012-01-16 09:17:43,560][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING], [twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:43,560][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,561][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:43,569][DEBUG][index.engine.robin ] [Major Mapleleaf] [twitter][1] Starting engine
[2012-01-16 09:17:43,591][DEBUG][index.engine.robin ] [Major Mapleleaf] [twitter][3] Starting engine
[2012-01-16 09:17:43,621][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][1] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:17:43,622][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][1] scheduling refresher every 1s
[2012-01-16 09:17:43,622][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][1] scheduling optimizer / merger every 1s
[2012-01-16 09:17:43,648][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][3] state: [RECOVERING]->[STARTED], reason [post recovery]
[2012-01-16 09:17:43,648][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][3] scheduling refresher every 1s
[2012-01-16 09:17:43,648][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][3] scheduling optimizer / merger every 1s
[2012-01-16 09:17:43,653][DEBUG][indices.recovery ] [Major Mapleleaf] [twitter][3] recovery completed from [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], took[131ms]
phase1: recovered_files [1] with total_size of [58b], took [31ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [72ms]
phase3: recovered [0] transaction log operations, took [3ms]
[2012-01-16 09:17:43,663][DEBUG][cluster.action.shard ] [Major Mapleleaf] sending shard started for [twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:17:43,663][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:17:43,664][DEBUG][indices.recovery ] [Major Mapleleaf] [twitter][1] recovery completed from [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], took[205ms]
phase1: recovered_files [1] with total_size of [58b], took [70ms], throttling_wait [0s]
: reusing_files [0] with total_size of [0b]
phase2: recovered [0] transaction log operations, took [68ms]
phase3: recovered [0] transaction log operations, took [44ms]
[2012-01-16 09:17:43,664][DEBUG][cluster.action.shard ] [Major Mapleleaf] sending shard started for [twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:17:43,664][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]]: execute
[2012-01-16 09:17:43,667][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:17:43,668][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [7], source [shard-started ([twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,670][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:43,670][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:43,664][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:17:43,680][TRACE][indices.cluster ] [Major Mapleleaf] [{}][{}] master [{}] marked shard as initializing, but shard already created, mark shard as started
[2012-01-16 09:17:43,680][DEBUG][cluster.action.shard ] [Major Mapleleaf] sending shard started for [twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,681][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,686][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]]: done applying updated cluster_state
[2012-01-16 09:17:43,689][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]]: execute
[2012-01-16 09:17:43,689][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING], [twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]
[2012-01-16 09:17:43,692][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [8], source [shard-started ([twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[null], [R], s[UNASSIGNED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
---- unassigned
--------[twitter][4], node[null], [R], s[UNASSIGNED]
[2012-01-16 09:17:43,694][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:43,694][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:43,722][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]]]: done applying updated cluster_state
[2012-01-16 09:17:43,728][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,747][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:43,758][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] starting recovery to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], mark_as_relocated false
[2012-01-16 09:17:43,763][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:17:43,763][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:17:43,795][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [31ms]
[2012-01-16 09:17:43,796][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase2] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:17:43,846][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase2] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [50ms]
[2012-01-16 09:17:43,847][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase3] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:17:43,849][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][0] recovery [phase3] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [2ms]
[2012-01-16 09:17:43,854][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:43,854][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]: execute
[2012-01-16 09:17:43,855][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:43,855][TRACE][gateway.local ] [Major Mapleleaf] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]
[2012-01-16 09:17:43,855][TRACE][gateway.local ] [Major Mapleleaf] [twitter][4], node[null], [R], s[UNASSIGNED]: checking node [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:43,856][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [9], source [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:17:43,858][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:43,858][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:43,862][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]: done applying updated cluster_state
[2012-01-16 09:17:43,907][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] starting recovery to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], mark_as_relocated false
[2012-01-16 09:17:43,908][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:17:43,908][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:17:43,918][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [10ms]
[2012-01-16 09:17:43,918][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase2] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:17:43,922][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,936][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,936][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:43,961][TRACE][index.shard.service ] [Major Mapleleaf] [twitter][2] index [Document<stored,binary,omitNorms,indexOptions=DOCS_ONLY<_source:[B@1e1ff563> indexed,omitNorms,indexOptions=DOCS_ONLY<_type:tweet> stored,indexed,tokenized,omitNorms<_uid:> indexed,tokenized<user:kimchy> indexed,tokenized,omitNorms,indexOptions=DOCS_ONLY<post_date:> indexed,tokenized<message:trying out Elastic Search> indexed,tokenized<_all:>>]
[2012-01-16 09:17:43,967][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase2] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [49ms]
[2012-01-16 09:17:43,995][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2012-01-16 09:17:43,996][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2012-01-16 09:17:43,996][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [master [Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]] marked shard as initializing, but shard already started, mark shard as started]]: no change in cluster_state
[2012-01-16 09:17:44,065][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase3] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:17:44,082][DEBUG][cluster.service ] [Major Mapleleaf] processing [update-mapping [twitter][tweet]]: execute
[2012-01-16 09:17:44,122][DEBUG][cluster.metadata ] [Major Mapleleaf] [twitter] update_mapping [tweet] (dynamic) with source [{"tweet":{"properties":{"message":{"type":"string"},"post_date":{"type":"date","format":"dateOptionalTime"},"user":{"type":"string"}}}}]
[2012-01-16 09:17:44,131][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [10], source [update-mapping [twitter][tweet]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
---- unassigned
[2012-01-16 09:17:44,133][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:44,133][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:44,138][DEBUG][cluster.service ] [Major Mapleleaf] processing [update-mapping [twitter][tweet]]: done applying updated cluster_state
[2012-01-16 09:17:44,151][TRACE][http.netty ] [Major Mapleleaf] channel closed: [id: 0x7f4c352e, /127.0.0.1:63064 :> /127.0.0.1:9200]
[2012-01-16 09:17:44,200][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] starting recovery to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]], mark_as_relocated false
[2012-01-16 09:17:44,201][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: recovering [segments_1], does not exists in remote
[2012-01-16 09:17:44,201][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: recovering_files [1] with total_size [58b], reusing_files [0] with total_size [0b]
[2012-01-16 09:17:44,284][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase1] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [83ms]
[2012-01-16 09:17:44,285][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase2] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:17:44,297][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase2] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [11ms]
[2012-01-16 09:17:44,298][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase3] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: sending transaction log operations
[2012-01-16 09:17:44,311][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][4] recovery [phase3] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [13ms]
[2012-01-16 09:17:44,313][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:44,314][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]: execute
[2012-01-16 09:17:44,314][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:44,318][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [11], source [shard-started ([twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
---- unassigned
[2012-01-16 09:17:44,321][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:44,322][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:44,323][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]: done applying updated cluster_state
[2012-01-16 09:17:44,413][TRACE][index.shard.service ] [Major Mapleleaf] [twitter][2] refresh with waitForOperations[false]
[2012-01-16 09:17:44,505][TRACE][indices.recovery ] [Major Mapleleaf] [twitter][2] recovery [phase3] to [Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]: took [440ms]
[2012-01-16 09:17:44,507][DEBUG][cluster.action.shard ] [Major Mapleleaf] received shard started for [twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:44,507][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]: execute
[2012-01-16 09:17:44,507][DEBUG][cluster.action.shard ] [Major Mapleleaf] applying started shards [[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]], reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]
[2012-01-16 09:17:44,508][TRACE][cluster.service ] [Major Mapleleaf] cluster state updated:
version [12], source [shard-started ([twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]
nodes:
[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]], local, master
[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]
routing_table:
-- index [twitter]
----shard_id [twitter][0]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][1]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][2]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
----shard_id [twitter][3]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
----shard_id [twitter][4]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
routing_nodes:
-----node_id[HkLh96lnR4yPG1s_DJCxOA]
--------[twitter][0], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][1], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][2], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
--------[twitter][3], node[HkLh96lnR4yPG1s_DJCxOA], [R], s[STARTED]
--------[twitter][4], node[HkLh96lnR4yPG1s_DJCxOA], [P], s[STARTED]
-----node_id[EjOqep-KRKqkFhqBSBA7tg]
--------[twitter][0], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][1], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
--------[twitter][3], node[EjOqep-KRKqkFhqBSBA7tg], [P], s[STARTED]
--------[twitter][4], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[STARTED]
---- unassigned
[2012-01-16 09:17:44,509][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: execute
[2012-01-16 09:17:44,520][DEBUG][river.cluster ] [Major Mapleleaf] processing [reroute_rivers_node_changed]: no change in cluster_state
[2012-01-16 09:17:44,522][DEBUG][cluster.service ] [Major Mapleleaf] processing [shard-started ([twitter][2], node[EjOqep-KRKqkFhqBSBA7tg], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[Major Mapleleaf][HkLh96lnR4yPG1s_DJCxOA][inet[/10.0.1.5:9300]]]]]: done applying updated cluster_state
[2012-01-16 09:17:47,549][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-node_left([Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]])]: execute
[2012-01-16 09:17:47,556][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x4fb7a553, /10.0.1.5:63044 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,556][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x7e6baf24, /10.0.1.5:63043 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,559][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x0050078e, /10.0.1.5:63047 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,558][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x061ffbcb, /10.0.1.5:63048 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,560][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x1535d18b, /10.0.1.5:63046 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,561][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x21c71508, /10.0.1.5:63045 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,562][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x1dcbcf91, /10.0.1.5:63049 :> /10.0.1.5:9300]
[2012-01-16 09:17:47,579][DEBUG][transport.netty ] [Major Mapleleaf] Disconnected from [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:47,583][TRACE][discovery.zen.fd ] [Major Mapleleaf] [node ] [[Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]] transport disconnected (with verified connect)
[2012-01-16 09:17:47,590][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x5d11985e]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:17:47,593][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-node_failed([Jester][EjOqep-KRKqkFhqBSBA7tg][inet[/10.0.1.5:9301]]), reason transport disconnected (with verified connect)]: execute
[2012-01-16 09:17:56,888][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x51282707, /127.0.0.1:63075 => /127.0.0.1:9300]
[2012-01-16 09:17:59,890][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x1a2b2cf8, /10.0.1.5:63076 => /10.0.1.5:9300]
[2012-01-16 09:17:59,891][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x08955b34, /10.0.1.5:63077 => /10.0.1.5:9300]
[2012-01-16 09:17:59,892][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x69ddad02, /10.0.1.5:63078 => /10.0.1.5:9300]
[2012-01-16 09:17:59,893][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x5889949a, /10.0.1.5:63079 => /10.0.1.5:9300]
[2012-01-16 09:17:59,893][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x307b37df, /10.0.1.5:63080 => /10.0.1.5:9300]
[2012-01-16 09:17:59,893][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x69912a56, /10.0.1.5:63081 => /10.0.1.5:9300]
[2012-01-16 09:17:59,894][TRACE][transport.netty ] [Major Mapleleaf] channel opened: [id: 0x3972aa3f, /10.0.1.5:63082 => /10.0.1.5:9300]
[2012-01-16 09:17:59,896][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x51282707, /127.0.0.1:63075 :> /127.0.0.1:9300]
[2012-01-16 09:17:59,918][DEBUG][transport.netty ] [Major Mapleleaf] Connected to node [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:17:59,919][DEBUG][cluster.service ] [Major Mapleleaf] processing [zen-disco-receive(join from node[[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]])]: execute
[2012-01-16 09:18:04,110][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x307b37df, /10.0.1.5:63080 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,111][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x1a2b2cf8, /10.0.1.5:63076 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,111][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x3972aa3f, /10.0.1.5:63082 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,112][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x5889949a, /10.0.1.5:63079 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,113][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x69912a56, /10.0.1.5:63081 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,113][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x69ddad02, /10.0.1.5:63078 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,114][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x08955b34, /10.0.1.5:63077 :> /10.0.1.5:9300]
[2012-01-16 09:18:04,117][DEBUG][transport.netty ] [Major Mapleleaf] Disconnected from [[Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]]
[2012-01-16 09:18:12,102][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x74d01311]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:12,105][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x44f757b9]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:12,105][WARN ][cluster.service ] [Major Mapleleaf] failed to reconnect to node [Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]
org.elasticsearch.transport.ConnectTransportException: [Chameleon][inet[/10.0.1.5:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:560)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:503)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:482)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:124)
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:352)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:18:22,109][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x131f1d25]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:22,110][WARN ][cluster.service ] [Major Mapleleaf] failed to reconnect to node [Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]
org.elasticsearch.transport.ConnectTransportException: [Chameleon][inet[/10.0.1.5:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:560)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:503)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:482)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:124)
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:352)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:18:32,113][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x2d63c5bb]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:32,116][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x5fab9dac]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:32,116][WARN ][cluster.service ] [Major Mapleleaf] failed to reconnect to node [Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]
org.elasticsearch.transport.ConnectTransportException: [Chameleon][inet[/10.0.1.5:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:560)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:503)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:482)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:124)
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:352)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:18:42,119][TRACE][transport.netty ] [Major Mapleleaf] (Ignoring) Exception caught on netty layer [[id: 0x50fc5408]]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:42,122][WARN ][cluster.service ] [Major Mapleleaf] failed to reconnect to node [Chameleon][ADud4FIsSJCfmDbjKkvOUA][inet[/10.0.1.5:9301]]
org.elasticsearch.transport.ConnectTransportException: [Chameleon][inet[/10.0.1.5:9301]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:560)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:503)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:482)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:124)
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:352)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:401)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:370)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:292)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
... 3 more
[2012-01-16 09:18:42,122][WARN ][netty.channel.socket.nio.NioClientSocketPipelineSink] Unexpected exception in the selector loop.
java.nio.channels.CancelledKeyException
at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:69)
at sun.nio.ch.KQueueSelectorImpl.updateSelectedKeys(KQueueSelectorImpl.java:105)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:74)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:255)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-01-16 09:18:46,191][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: stopping ...
[2012-01-16 09:18:46,201][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0
[2012-01-16 09:18:46,201][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=0,shardType=store
[2012-01-16 09:18:46,202][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:18:46,206][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1
[2012-01-16 09:18:46,210][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=1,shardType=store
[2012-01-16 09:18:46,211][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][1] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:18:46,211][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2
[2012-01-16 09:18:46,212][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=2,shardType=store
[2012-01-16 09:18:46,212][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][2] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:18:46,213][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3
[2012-01-16 09:18:46,213][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4
[2012-01-16 09:18:46,214][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=3,shardType=store
[2012-01-16 09:18:46,215][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][3] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:18:46,214][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter,subService=shards,shard=4,shardType=store
[2012-01-16 09:18:46,218][DEBUG][index.shard.service ] [Major Mapleleaf] [twitter][4] state: [STARTED]->[CLOSED], reason [shutdown]
[2012-01-16 09:18:46,232][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=indices,index=twitter
[2012-01-16 09:18:46,241][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x44af17c7, /10.0.1.5:63033 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,242][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x28caea19, /10.0.1.5:63039 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,242][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x521ecfeb, /10.0.1.5:63035 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,242][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x479d4f72, /10.0.1.5:63038 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,243][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x1c2d5534, /10.0.1.5:63034 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,243][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x52352d87, /10.0.1.5:63036 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,242][TRACE][transport.netty ] [Major Mapleleaf] channel closed: [id: 0x6fa8bd74, /10.0.1.5:63037 :> /10.0.1.5:9300]
[2012-01-16 09:18:46,245][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=transport
[2012-01-16 09:18:46,246][TRACE][jmx ] [Major Mapleleaf] Unregistered org.elasticsearch:service=transport,transportType=netty
[2012-01-16 09:18:46,246][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: stopped
[2012-01-16 09:18:46,246][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: closing ...
[2012-01-16 09:18:46,256][TRACE][node ] [Major Mapleleaf] Close times for each service:
StopWatch 'node_close': running time = 5ms
-----------------------------------------
ms % Task name
-----------------------------------------
00000 000% http
00000 000% rivers
00000 000% client
00000 000% indices_cluster
00001 020% indices
00000 000% routing
00000 000% cluster
00002 040% discovery
00000 000% monitor
00000 000% gateway
00000 000% search
00000 000% rest
00000 000% transport
00001 020% node_cache
00000 000% script
00000 000% thread_pool
00001 020% thread_pool_force_shutdown
[2012-01-16 09:18:46,256][INFO ][node ] [Major Mapleleaf] {0.18.7}[12072]: closed