@benoit-intrw
Created August 29, 2013 06:56
NullPointerException on empty filter with 0.90.3
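The log below only captures node startup and the auto-creation of the `twitter` index; the request that triggered the NullPointerException is not part of this gist. As a purely hypothetical illustration of the kind of request the title describes (an empty `filter` object inside a `filtered` query, sent to the `_search` endpoint), the body might have looked like this — the index name is taken from the log, everything else is an assumption:

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {}
    }
  }
}
```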
[2013-08-29 08:43:32,061][INFO ][node ] [Iron Man] version[0.90.3], pid[8222], build[5c38d60/2013-08-06T13:18:31Z]
[2013-08-29 08:43:32,062][INFO ][node ] [Iron Man] initializing ...
[2013-08-29 08:43:32,062][DEBUG][node ] [Iron Man] using home [/home/blaurent/downloads/elasticsearch-0.90.3], config [/home/blaurent/downloads/elasticsearch-0.90.3/config], data [[/home/blaurent/downloads/elasticsearch-0.90.3/data]], logs [/home/blaurent/downloads/elasticsearch-0.90.3/logs], work [/home/blaurent/downloads/elasticsearch-0.90.3/work], plugins [/home/blaurent/downloads/elasticsearch-0.90.3/plugins]
[2013-08-29 08:43:32,067][INFO ][plugins ] [Iron Man] loaded [], sites []
[2013-08-29 08:43:32,086][DEBUG][common.compress.lzf ] using [UnsafeChunkDecoder] decoder
[2013-08-29 08:43:32,096][DEBUG][env ] [Iron Man] using node location [[/home/blaurent/downloads/elasticsearch-0.90.3/data/elasticsearch/nodes/0]], local_node_id [0]
[2013-08-29 08:43:32,954][DEBUG][threadpool ] [Iron Man] creating thread_pool [generic], type [cached], keep_alive [30s]
[2013-08-29 08:43:32,962][DEBUG][threadpool ] [Iron Man] creating thread_pool [index], type [fixed], size [2], queue_size [null], reject_policy [abort], queue_type [linked]
[2013-08-29 08:43:32,962][DEBUG][threadpool ] [Iron Man] creating thread_pool [bulk], type [fixed], size [2], queue_size [null], reject_policy [abort], queue_type [linked]
[2013-08-29 08:43:32,962][DEBUG][threadpool ] [Iron Man] creating thread_pool [get], type [fixed], size [2], queue_size [null], reject_policy [abort], queue_type [linked]
[2013-08-29 08:43:32,964][DEBUG][threadpool ] [Iron Man] creating thread_pool [search], type [fixed], size [6], queue_size [1k], reject_policy [abort], queue_type [linked]
[2013-08-29 08:43:32,965][DEBUG][threadpool ] [Iron Man] creating thread_pool [percolate], type [fixed], size [2], queue_size [null], reject_policy [abort], queue_type [linked]
[2013-08-29 08:43:32,965][DEBUG][threadpool ] [Iron Man] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
[2013-08-29 08:43:32,966][DEBUG][threadpool ] [Iron Man] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
[2013-08-29 08:43:32,966][DEBUG][threadpool ] [Iron Man] creating thread_pool [merge], type [scaling], min [1], size [1], keep_alive [5m]
[2013-08-29 08:43:32,966][DEBUG][threadpool ] [Iron Man] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
[2013-08-29 08:43:32,966][DEBUG][threadpool ] [Iron Man] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
[2013-08-29 08:43:32,967][DEBUG][threadpool ] [Iron Man] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
[2013-08-29 08:43:32,967][DEBUG][threadpool ] [Iron Man] creating thread_pool [optimize], type [fixed], size [1], queue_size [null], reject_policy [abort], queue_type [linked]
[2013-08-29 08:43:32,983][DEBUG][transport.netty ] [Iron Man] using worker_count[4], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/6/1/1], receive_predictor[512kb->512kb]
[2013-08-29 08:43:32,997][DEBUG][discovery.zen.ping.multicast] [Iron Man] using group [224.2.2.4], with port [54328], ttl [3], and address [null]
[2013-08-29 08:43:33,000][DEBUG][discovery.zen.ping.unicast] [Iron Man] using initial hosts [], with concurrent_connects [10]
[2013-08-29 08:43:33,001][DEBUG][discovery.zen ] [Iron Man] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
[2013-08-29 08:43:33,001][DEBUG][discovery.zen.elect ] [Iron Man] using minimum_master_nodes [-1]
[2013-08-29 08:43:33,002][DEBUG][discovery.zen.fd ] [Iron Man] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2013-08-29 08:43:33,005][DEBUG][discovery.zen.fd ] [Iron Man] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2013-08-29 08:43:33,044][DEBUG][monitor.jvm ] [Iron Man] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, ParNew=GcThreshold{name='ParNew', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, ConcurrentMarkSweep=GcThreshold{name='ConcurrentMarkSweep', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}]
[2013-08-29 08:43:33,553][DEBUG][monitor.os ] [Iron Man] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@3a15da7d] with refresh_interval [1s]
[2013-08-29 08:43:33,558][DEBUG][monitor.process ] [Iron Man] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@1bf1e666] with refresh_interval [1s]
[2013-08-29 08:43:33,563][DEBUG][monitor.jvm ] [Iron Man] Using refresh_interval [1s]
[2013-08-29 08:43:33,563][DEBUG][monitor.network ] [Iron Man] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@469695f] with refresh_interval [5s]
[2013-08-29 08:43:33,566][DEBUG][monitor.network ] [Iron Man] net_info
host [hobbes]
eth0 display_name [eth0]
address [/fe80:0:0:0:be30:5bff:feb0:1892%2] [/192.168.128.126]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
[2013-08-29 08:43:33,570][DEBUG][monitor.fs ] [Iron Man] Using probe [org.elasticsearch.monitor.fs.SigarFsProbe@69fc9f88] with refresh_interval [1s]
[2013-08-29 08:43:33,761][DEBUG][indices.store ] [Iron Man] using indices.store.throttle.type [MERGE], with index.store.throttle.max_bytes_per_sec [20mb]
[2013-08-29 08:43:33,766][DEBUG][cache.memory ] [Iron Man] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
[2013-08-29 08:43:33,772][DEBUG][script ] [Iron Man] using script cache with max_size [500], expire [null]
[2013-08-29 08:43:33,792][DEBUG][cluster.routing.allocation.decider] [Iron Man] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2013-08-29 08:43:33,792][DEBUG][cluster.routing.allocation.decider] [Iron Man] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2013-08-29 08:43:33,793][DEBUG][cluster.routing.allocation.decider] [Iron Man] using [cluster_concurrent_rebalance] with [2]
[2013-08-29 08:43:33,795][DEBUG][gateway.local ] [Iron Man] using initial_shards [quorum], list_timeout [30s]
[2013-08-29 08:43:33,902][DEBUG][indices.recovery ] [Iron Man] using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
[2013-08-29 08:43:33,935][DEBUG][http.netty ] [Iron Man] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[512kb->512kb]
[2013-08-29 08:43:33,940][DEBUG][indices.memory ] [Iron Man] using index_buffer_size [101.9mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2013-08-29 08:43:33,941][DEBUG][indices.cache.filter ] [Iron Man] using [node] weighted filter cache with size [20%], actual_size [203.9mb], expire [null], clean_interval [1m]
[2013-08-29 08:43:33,942][DEBUG][indices.fielddata.cache ] [Iron Man] using size [-1] [-1b], expire [null]
[2013-08-29 08:43:33,950][DEBUG][gateway.local.state.meta ] [Iron Man] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
[2013-08-29 08:43:33,951][DEBUG][gateway.local.state.meta ] [Iron Man] took 0s to load state
[2013-08-29 08:43:33,951][DEBUG][gateway.local.state.shards] [Iron Man] took 0s to load started shards state
[2013-08-29 08:43:33,953][DEBUG][bulk.udp ] [Iron Man] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
[2013-08-29 08:43:33,956][DEBUG][cluster.routing.allocation.decider] [Iron Man] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2013-08-29 08:43:33,956][DEBUG][cluster.routing.allocation.decider] [Iron Man] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2013-08-29 08:43:33,957][DEBUG][cluster.routing.allocation.decider] [Iron Man] using [cluster_concurrent_rebalance] with [2]
[2013-08-29 08:43:33,957][DEBUG][cluster.routing.allocation.decider] [Iron Man] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2013-08-29 08:43:33,957][DEBUG][cluster.routing.allocation.decider] [Iron Man] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2013-08-29 08:43:33,957][DEBUG][cluster.routing.allocation.decider] [Iron Man] using [cluster_concurrent_rebalance] with [2]
[2013-08-29 08:43:33,965][INFO ][node ] [Iron Man] initialized
[2013-08-29 08:43:33,965][INFO ][node ] [Iron Man] starting ...
[2013-08-29 08:43:33,994][DEBUG][netty.channel.socket.nio.SelectorUtil] Using select timeout of 500
[2013-08-29 08:43:33,994][DEBUG][netty.channel.socket.nio.SelectorUtil] Epoll-bug workaround enabled = false
[2013-08-29 08:43:34,045][DEBUG][transport.netty ] [Iron Man] Bound to address [/0:0:0:0:0:0:0:0:9300]
[2013-08-29 08:43:34,047][INFO ][transport ] [Iron Man] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.128.126:9300]}
[2013-08-29 08:43:37,067][DEBUG][discovery.zen ] [Iron Man] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2013-08-29 08:43:37,072][DEBUG][cluster.service ] [Iron Man] processing [zen-disco-join (elected_as_master)]: execute
[2013-08-29 08:43:37,072][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[2013-08-29 08:43:37,073][INFO ][cluster.service ] [Iron Man] new_master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]], reason: zen-disco-join (elected_as_master)
[2013-08-29 08:43:37,099][DEBUG][transport.netty ] [Iron Man] connected to node [[Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]]]
[2013-08-29 08:43:37,102][DEBUG][cluster.service ] [Iron Man] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[2013-08-29 08:43:37,102][INFO ][discovery ] [Iron Man] elasticsearch/UaADQshIT4OBNfFDrBXWsg
[2013-08-29 08:43:37,106][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:37,106][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:37,108][DEBUG][cluster.service ] [Iron Man] processing [local-gateway-elected-state]: execute
[2013-08-29 08:43:37,119][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [2], source [local-gateway-elected-state]
[2013-08-29 08:43:37,120][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:37,120][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:37,122][INFO ][http ] [Iron Man] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.128.126:9200]}
[2013-08-29 08:43:37,122][INFO ][node ] [Iron Man] started
[2013-08-29 08:43:37,162][INFO ][gateway ] [Iron Man] recovered [0] indices into cluster_state
[2013-08-29 08:43:37,162][DEBUG][cluster.service ] [Iron Man] processing [local-gateway-elected-state]: done applying updated cluster_state
[2013-08-29 08:43:44,872][DEBUG][cluster.service ] [Iron Man] processing [create-index [twitter], cause [auto(index api)]]: execute
[2013-08-29 08:43:44,872][DEBUG][indices ] [Iron Man] creating Index [twitter], shards [5]/[1]
[2013-08-29 08:43:45,038][DEBUG][index.mapper ] [Iron Man] [twitter] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/home/blaurent/downloads/elasticsearch-0.90.3/lib/elasticsearch-0.90.3.jar!/org/elasticsearch/index/mapper/default-mapping.json] and source[{
"_default_":{
}
}]
[2013-08-29 08:43:45,038][DEBUG][index.cache.query.parser.resident] [Iron Man] [twitter] using [resident] query cache with max_size [100], expire [null]
[2013-08-29 08:43:45,047][DEBUG][index.store.fs ] [Iron Man] [twitter] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2013-08-29 08:43:45,101][INFO ][cluster.metadata ] [Iron Man] [twitter] creating index, cause [auto(index api)], shards [5]/[1], mappings []
[2013-08-29 08:43:45,110][DEBUG][index.cache.filter.weighted] [Iron Man] [twitter] full cache clear, reason [close]
[2013-08-29 08:43:45,110][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [3], source [create-index [twitter], cause [auto(index api)]]
[2013-08-29 08:43:45,110][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,110][DEBUG][indices.cluster ] [Iron Man] [twitter] creating index
[2013-08-29 08:43:45,110][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,110][DEBUG][indices ] [Iron Man] creating Index [twitter], shards [5]/[1]
[2013-08-29 08:43:45,143][DEBUG][index.mapper ] [Iron Man] [twitter] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/home/blaurent/downloads/elasticsearch-0.90.3/lib/elasticsearch-0.90.3.jar!/org/elasticsearch/index/mapper/default-mapping.json] and source[{
"_default_":{
}
}]
[2013-08-29 08:43:45,143][DEBUG][index.cache.query.parser.resident] [Iron Man] [twitter] using [resident] query cache with max_size [100], expire [null]
[2013-08-29 08:43:45,144][DEBUG][index.store.fs ] [Iron Man] [twitter] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2013-08-29 08:43:45,145][DEBUG][indices.cluster ] [Iron Man] [twitter][0] creating shard
[2013-08-29 08:43:45,145][DEBUG][index.service ] [Iron Man] [twitter] creating shard_id [0]
[2013-08-29 08:43:45,217][DEBUG][index.deletionpolicy ] [Iron Man] [twitter][0] Using [keep_only_last] deletion policy
[2013-08-29 08:43:45,219][DEBUG][index.merge.policy ] [Iron Man] [twitter][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2013-08-29 08:43:45,219][DEBUG][index.merge.scheduler ] [Iron Man] [twitter][0] using [concurrent] merge scheduler with max_thread_count[1]
[2013-08-29 08:43:45,222][DEBUG][index.shard.service ] [Iron Man] [twitter][0] state: [CREATED]
[2013-08-29 08:43:45,223][DEBUG][index.translog ] [Iron Man] [twitter][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2013-08-29 08:43:45,226][DEBUG][index.shard.service ] [Iron Man] [twitter][0] state: [CREATED]->[RECOVERING], reason [from gateway]
[2013-08-29 08:43:45,227][DEBUG][indices.cluster ] [Iron Man] [twitter][1] creating shard
[2013-08-29 08:43:45,227][DEBUG][index.gateway ] [Iron Man] [twitter][0] starting recovery from local ...
[2013-08-29 08:43:45,227][DEBUG][index.service ] [Iron Man] [twitter] creating shard_id [1]
[2013-08-29 08:43:45,230][DEBUG][index.engine.robin ] [Iron Man] [twitter][0] starting engine
[2013-08-29 08:43:45,245][DEBUG][index.deletionpolicy ] [Iron Man] [twitter][1] Using [keep_only_last] deletion policy
[2013-08-29 08:43:45,245][DEBUG][index.merge.policy ] [Iron Man] [twitter][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2013-08-29 08:43:45,245][DEBUG][index.merge.scheduler ] [Iron Man] [twitter][1] using [concurrent] merge scheduler with max_thread_count[1]
[2013-08-29 08:43:45,246][DEBUG][index.shard.service ] [Iron Man] [twitter][1] state: [CREATED]
[2013-08-29 08:43:45,246][DEBUG][index.translog ] [Iron Man] [twitter][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2013-08-29 08:43:45,247][DEBUG][index.shard.service ] [Iron Man] [twitter][1] state: [CREATED]->[RECOVERING], reason [from gateway]
[2013-08-29 08:43:45,247][DEBUG][index.gateway ] [Iron Man] [twitter][1] starting recovery from local ...
[2013-08-29 08:43:45,247][DEBUG][index.engine.robin ] [Iron Man] [twitter][1] starting engine
[2013-08-29 08:43:45,305][DEBUG][cluster.service ] [Iron Man] processing [create-index [twitter], cause [auto(index api)]]: done applying updated cluster_state
[2013-08-29 08:43:45,389][DEBUG][index.shard.service ] [Iron Man] [twitter][1] scheduling refresher every 1s
[2013-08-29 08:43:45,389][DEBUG][index.shard.service ] [Iron Man] [twitter][1] scheduling optimizer / merger every 1s
[2013-08-29 08:43:45,389][DEBUG][index.shard.service ] [Iron Man] [twitter][1] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2013-08-29 08:43:45,390][DEBUG][index.gateway ] [Iron Man] [twitter][1] recovery completed from local, took [142ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
start : took [142ms], check_index [0s]
translog : number_of_operations [0], took [0s]
[2013-08-29 08:43:45,390][DEBUG][cluster.action.shard ] [Iron Man] sending shard started for [twitter][1], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,390][DEBUG][cluster.action.shard ] [Iron Man] received shard started for [twitter][1], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,391][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][1], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2013-08-29 08:43:45,391][DEBUG][cluster.action.shard ] [Iron Man] applying started shards [[twitter][1], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2013-08-29 08:43:45,392][DEBUG][index.shard.service ] [Iron Man] [twitter][0] scheduling refresher every 1s
[2013-08-29 08:43:45,392][DEBUG][index.shard.service ] [Iron Man] [twitter][0] scheduling optimizer / merger every 1s
[2013-08-29 08:43:45,392][DEBUG][index.shard.service ] [Iron Man] [twitter][0] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2013-08-29 08:43:45,392][DEBUG][index.gateway ] [Iron Man] [twitter][0] recovery completed from local, took [165ms]
index : files [0] with total_size [0b], took[2ms]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
start : took [162ms], check_index [0s]
translog : number_of_operations [0], took [0s]
[2013-08-29 08:43:45,392][DEBUG][cluster.action.shard ] [Iron Man] sending shard started for [twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,392][DEBUG][cluster.action.shard ] [Iron Man] received shard started for [twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,393][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [4], source [shard-started ([twitter][1], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2013-08-29 08:43:45,393][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,393][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,393][DEBUG][cluster.action.shard ] [Iron Man] sending shard started for [twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2013-08-29 08:43:45,393][DEBUG][cluster.action.shard ] [Iron Man] received shard started for [twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2013-08-29 08:43:45,393][DEBUG][indices.cluster ] [Iron Man] [twitter][2] creating shard
[2013-08-29 08:43:45,393][DEBUG][index.service ] [Iron Man] [twitter] creating shard_id [2]
[2013-08-29 08:43:45,404][DEBUG][index.deletionpolicy ] [Iron Man] [twitter][2] Using [keep_only_last] deletion policy
[2013-08-29 08:43:45,404][DEBUG][index.merge.policy ] [Iron Man] [twitter][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2013-08-29 08:43:45,404][DEBUG][index.merge.scheduler ] [Iron Man] [twitter][2] using [concurrent] merge scheduler with max_thread_count[1]
[2013-08-29 08:43:45,405][DEBUG][index.shard.service ] [Iron Man] [twitter][2] state: [CREATED]
[2013-08-29 08:43:45,405][DEBUG][index.translog ] [Iron Man] [twitter][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2013-08-29 08:43:45,406][DEBUG][index.shard.service ] [Iron Man] [twitter][2] state: [CREATED]->[RECOVERING], reason [from gateway]
[2013-08-29 08:43:45,406][DEBUG][index.gateway ] [Iron Man] [twitter][2] starting recovery from local ...
[2013-08-29 08:43:45,406][DEBUG][index.engine.robin ] [Iron Man] [twitter][2] starting engine
[2013-08-29 08:43:45,474][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][1], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2013-08-29 08:43:45,474][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2013-08-29 08:43:45,474][DEBUG][cluster.action.shard ] [Iron Man] applying started shards [[twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], [twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2013-08-29 08:43:45,476][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [5], source [shard-started ([twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2013-08-29 08:43:45,476][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,476][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,476][DEBUG][indices.cluster ] [Iron Man] [twitter][3] creating shard
[2013-08-29 08:43:45,477][DEBUG][index.service ] [Iron Man] [twitter] creating shard_id [3]
[2013-08-29 08:43:45,487][DEBUG][index.deletionpolicy ] [Iron Man] [twitter][3] Using [keep_only_last] deletion policy
[2013-08-29 08:43:45,487][DEBUG][index.merge.policy ] [Iron Man] [twitter][3] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2013-08-29 08:43:45,487][DEBUG][index.merge.scheduler ] [Iron Man] [twitter][3] using [concurrent] merge scheduler with max_thread_count[1]
[2013-08-29 08:43:45,488][DEBUG][index.shard.service ] [Iron Man] [twitter][3] state: [CREATED]
[2013-08-29 08:43:45,488][DEBUG][index.translog ] [Iron Man] [twitter][3] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2013-08-29 08:43:45,489][DEBUG][index.shard.service ] [Iron Man] [twitter][3] state: [CREATED]->[RECOVERING], reason [from gateway]
[2013-08-29 08:43:45,489][DEBUG][index.gateway ] [Iron Man] [twitter][3] starting recovery from local ...
[2013-08-29 08:43:45,489][DEBUG][index.engine.robin ] [Iron Man] [twitter][3] starting engine
[2013-08-29 08:43:45,493][DEBUG][index.shard.service ] [Iron Man] [twitter][2] scheduling refresher every 1s
[2013-08-29 08:43:45,493][DEBUG][index.shard.service ] [Iron Man] [twitter][2] scheduling optimizer / merger every 1s
[2013-08-29 08:43:45,493][DEBUG][index.shard.service ] [Iron Man] [twitter][2] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2013-08-29 08:43:45,493][DEBUG][index.gateway ] [Iron Man] [twitter][2] recovery completed from local, took [87ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
start : took [87ms], check_index [0s]
translog : number_of_operations [0], took [0s]
[2013-08-29 08:43:45,493][DEBUG][cluster.action.shard ] [Iron Man] sending shard started for [twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,493][DEBUG][cluster.action.shard ] [Iron Man] received shard started for [twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,533][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2013-08-29 08:43:45,533][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]] marked shard as initializing, but shard already started, mark shard as started]]: execute
[2013-08-29 08:43:45,533][DEBUG][cluster.action.shard ] [Iron Man] applying started shards [[twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]], reason [master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]] marked shard as initializing, but shard already started, mark shard as started]
[2013-08-29 08:43:45,534][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [6], source [shard-started ([twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]] marked shard as initializing, but shard already started, mark shard as started]]
[2013-08-29 08:43:45,534][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,534][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,535][DEBUG][indices.cluster ] [Iron Man] [twitter][4] creating shard
[2013-08-29 08:43:45,535][DEBUG][index.service ] [Iron Man] [twitter] creating shard_id [4]
[2013-08-29 08:43:45,544][DEBUG][index.deletionpolicy ] [Iron Man] [twitter][4] Using [keep_only_last] deletion policy
[2013-08-29 08:43:45,545][DEBUG][index.merge.policy ] [Iron Man] [twitter][4] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
[2013-08-29 08:43:45,545][DEBUG][index.merge.scheduler ] [Iron Man] [twitter][4] using [concurrent] merge scheduler with max_thread_count[1]
[2013-08-29 08:43:45,545][DEBUG][index.shard.service ] [Iron Man] [twitter][4] state: [CREATED]
[2013-08-29 08:43:45,546][DEBUG][index.translog ] [Iron Man] [twitter][4] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
[2013-08-29 08:43:45,546][DEBUG][index.shard.service ] [Iron Man] [twitter][4] state: [CREATED]->[RECOVERING], reason [from gateway]
[2013-08-29 08:43:45,547][DEBUG][index.gateway ] [Iron Man] [twitter][4] starting recovery from local ...
[2013-08-29 08:43:45,547][DEBUG][index.engine.robin ] [Iron Man] [twitter][4] starting engine
[2013-08-29 08:43:45,576][DEBUG][index.shard.service ] [Iron Man] [twitter][3] scheduling refresher every 1s
[2013-08-29 08:43:45,576][DEBUG][index.shard.service ] [Iron Man] [twitter][3] scheduling optimizer / merger every 1s
[2013-08-29 08:43:45,576][DEBUG][index.shard.service ] [Iron Man] [twitter][3] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2013-08-29 08:43:45,577][DEBUG][index.gateway ] [Iron Man] [twitter][3] recovery completed from local, took [87ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
start : took [87ms], check_index [0s]
translog : number_of_operations [0], took [0s]
[2013-08-29 08:43:45,577][DEBUG][cluster.action.shard ] [Iron Man] sending shard started for [twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,577][DEBUG][cluster.action.shard ] [Iron Man] received shard started for [twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,609][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][0], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [master [Iron Man][UaADQshIT4OBNfFDrBXWsg][inet[/192.168.128.126:9300]] marked shard as initializing, but shard already started, mark shard as started]]: done applying updated cluster_state
[2013-08-29 08:43:45,609][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2013-08-29 08:43:45,609][DEBUG][cluster.action.shard ] [Iron Man] applying started shards [[twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2013-08-29 08:43:45,610][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [7], source [shard-started ([twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2013-08-29 08:43:45,610][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,610][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,655][DEBUG][index.shard.service ] [Iron Man] [twitter][4] scheduling refresher every 1s
[2013-08-29 08:43:45,655][DEBUG][index.shard.service ] [Iron Man] [twitter][4] scheduling optimizer / merger every 1s
[2013-08-29 08:43:45,655][DEBUG][index.shard.service ] [Iron Man] [twitter][4] state: [RECOVERING]->[STARTED], reason [post recovery from gateway, no translog]
[2013-08-29 08:43:45,655][DEBUG][index.gateway ] [Iron Man] [twitter][4] recovery completed from local, took [109ms]
index : files [0] with total_size [0b], took[0s]
: recovered_files [0] with total_size [0b]
: reusing_files [0] with total_size [0b]
start : took [108ms], check_index [0s]
translog : number_of_operations [0], took [0s]
[2013-08-29 08:43:45,655][DEBUG][cluster.action.shard ] [Iron Man] sending shard started for [twitter][4], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,655][DEBUG][cluster.action.shard ] [Iron Man] received shard started for [twitter][4], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING], reason [after recovery from gateway]
[2013-08-29 08:43:45,676][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2013-08-29 08:43:45,676][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2013-08-29 08:43:45,676][DEBUG][cluster.action.shard ] [Iron Man] applying started shards [[twitter][4], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]], reason [after recovery from gateway]
[2013-08-29 08:43:45,677][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [8], source [shard-started ([twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]
[2013-08-29 08:43:45,677][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,677][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,710][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: done applying updated cluster_state
[2013-08-29 08:43:45,710][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][4], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: execute
[2013-08-29 08:43:45,710][DEBUG][cluster.service ] [Iron Man] processing [shard-started ([twitter][4], node[UaADQshIT4OBNfFDrBXWsg], [P], s[INITIALIZING]), reason [after recovery from gateway]]: no change in cluster_state
[2013-08-29 08:43:45,710][DEBUG][cluster.service ] [Iron Man] processing [update-mapping [twitter][user]]: execute
[2013-08-29 08:43:45,715][DEBUG][cluster.metadata ] [Iron Man] [twitter] update_mapping [user] (dynamic) with source [{"user":{"properties":{"name":{"type":"string"}}}}]
[2013-08-29 08:43:45,717][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [9], source [update-mapping [twitter][user]]
[2013-08-29 08:43:45,717][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:45,717][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:45,752][DEBUG][cluster.service ] [Iron Man] processing [update-mapping [twitter][user]]: done applying updated cluster_state
[2013-08-29 08:43:47,101][DEBUG][cluster.service ] [Iron Man] processing [routing-table-updater]: execute
[2013-08-29 08:43:47,102][DEBUG][cluster.service ] [Iron Man] processing [routing-table-updater]: no change in cluster_state
[2013-08-29 08:43:55,365][DEBUG][cluster.service ] [Iron Man] processing [update-mapping [twitter][tweet]]: execute
[2013-08-29 08:43:55,367][DEBUG][cluster.metadata ] [Iron Man] [twitter] update_mapping [tweet] (dynamic) with source [{"tweet":{"properties":{"message":{"type":"string"},"postDate":{"type":"date","format":"dateOptionalTime"},"user":{"type":"string"}}}}]
[2013-08-29 08:43:55,367][DEBUG][cluster.service ] [Iron Man] cluster state updated, version [10], source [update-mapping [twitter][tweet]]
[2013-08-29 08:43:55,368][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: execute
[2013-08-29 08:43:55,368][DEBUG][river.cluster ] [Iron Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2013-08-29 08:43:55,401][DEBUG][cluster.service ] [Iron Man] processing [update-mapping [twitter][tweet]]: done applying updated cluster_state
[2013-08-29 08:44:03,967][DEBUG][indices.memory ] [Iron Man] recalculating shard indexing buffer (reason=active/inactive[false] created/deleted[true]), total is [101.9mb] with [5] active shards, each shard set to indexing=[20.3mb], translog=[64kb]
[2013-08-29 08:44:03,967][DEBUG][index.engine.robin ] [Iron Man] [twitter][0] updating index_buffer_size from [64mb] to [20.3mb]
[2013-08-29 08:44:03,967][DEBUG][index.engine.robin ] [Iron Man] [twitter][1] updating index_buffer_size from [64mb] to [20.3mb]
[2013-08-29 08:44:03,967][DEBUG][index.engine.robin ] [Iron Man] [twitter][2] updating index_buffer_size from [64mb] to [20.3mb]
[2013-08-29 08:44:03,967][DEBUG][index.engine.robin ] [Iron Man] [twitter][3] updating index_buffer_size from [64mb] to [20.3mb]
[2013-08-29 08:44:03,968][DEBUG][index.engine.robin ] [Iron Man] [twitter][4] updating index_buffer_size from [64mb] to [20.3mb]
[2013-08-29 08:44:15,297][DEBUG][action.search.type ] [Iron Man] [twitter][2], node[UaADQshIT4OBNfFDrBXWsg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7d6b5518]
org.elasticsearch.search.query.QueryPhaseExecutionException: [twitter][2]: query[ConstantScore(*:*)],from[0],size[10]: Query Failed [Failed to execute main query]
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:138)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:243)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:141)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:212)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:199)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:185)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.NullPointerException
at org.elasticsearch.common.lucene.search.FilteredCollector.setNextReader(FilteredCollector.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:615)
at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:488)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:444)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:134)
... 9 more
[2013-08-29 08:44:15,297][DEBUG][action.search.type ] [Iron Man] [twitter][3], node[UaADQshIT4OBNfFDrBXWsg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7d6b5518]
org.elasticsearch.search.query.QueryPhaseExecutionException: [twitter][3]: query[ConstantScore(*:*)],from[0],size[10]: Query Failed [Failed to execute main query]
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:138)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:243)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:141)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:212)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:199)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:185)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.NullPointerException
at org.elasticsearch.common.lucene.search.FilteredCollector.setNextReader(FilteredCollector.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:615)
at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:488)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:444)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:134)
... 9 more
Confirmed with a fresh extraction of the 0.90.3 tar.gz.
# Steps to reproduce the problem:
$ tar zxf elasticsearch-0.90.3.tar.gz
$ cd elasticsearch-0.90.3/
$ bin/elasticsearch -f
# Index documents from the tutorial:
$ curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elastic Search, so far so good?"
}'
$ curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
# The faulty query:
$ curl http://localhost:9200/_search -d '{"filter": {}, "query": {"match_all": {}}}'
{"took":4,"timed_out":false,"_shards":{"total":5,"successful":3,"failed":2,"failures":[{"index":"twitter","shard":2,"status":500,"reason":"QueryPhaseExecutionException[[twitter][2]: query[ConstantScore(*:*)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: NullPointerException; "},{"index":"twitter","shard":3,"status":500,"reason":"QueryPhaseExecutionException[[twitter][3]: query[ConstantScore(*:*)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: NullPointerException; "}]},"hits":{"total":0,"max_score":null,"hits":[]}}
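Until this is fixed server-side, a client-side workaround is simply not to send the `filter` key when it is empty. A minimal sketch (hypothetical helper, not part of this gist) that builds the search body and drops an empty or missing filter:

```python
import json

def build_search_body(query, filter=None):
    """Build an Elasticsearch search body, omitting an empty filter.

    Sending {"filter": {}} triggers the NullPointerException seen
    above on 0.90.3, so the key is only included when non-empty.
    """
    body = {"query": query}
    if filter:  # skips both None and {}
        body["filter"] = filter
    return body

# Empty filter is dropped from the request body:
print(json.dumps(build_search_body({"match_all": {}}, {})))

# A real filter is passed through unchanged:
print(json.dumps(build_search_body(
    {"match_all": {}},
    {"term": {"user": "kimchy"}},
)))
```

With the empty filter removed, the equivalent query (`curl http://localhost:9200/_search -d '{"query": {"match_all": {}}}'`) should succeed on all shards.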