# apt-get update
# apt-get install elasticsearch
After the update, I had to reinstall the license and marvel-agent plugins:
# bin/plugin remove license
# bin/plugin remove marvel-agent
# bin/plugin install license
# bin/plugin install marvel-agent
Installation instructions from elastic.co:
# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
# sudo apt-get install apt-transport-https
# echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
# sudo apt-get update
# apt-get install elasticsearch
# apt-get install kibana
With 2GB of RAM on Digital Ocean, 5.0.1 refused to start (it couldn't allocate its default 2g heap), even though 2.4 was OK.
Added this in my /etc/init.d/elasticsearch script:
# Additional Java OPTS
ES_JAVA_OPTS="-Xms1g -Xmx1g"
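Note that the 5.x packages also read JVM flags from /etc/elasticsearch/jvm.options, which is the documented place for heap settings; editing it there instead of the init script should have the same effect. A minimal fragment:

```
## /etc/elasticsearch/jvm.options -- replace the default -Xms2g/-Xmx2g lines
-Xms1g
-Xmx1g
```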
Had to remove the old plugins: elasticsearch-migration is no longer needed, head no longer runs as a site plugin (site plugins were removed in 5.0), and I have to reinstall license and marvel-agent:
cd /usr/share/elasticsearch
bin/elasticsearch-plugin list (repeat this until it starts -- removing all plugins along the way)
bin/elasticsearch-plugin remove elasticsearch-migration
bin/elasticsearch-plugin remove head
bin/elasticsearch-plugin remove license
bin/elasticsearch-plugin remove marvel-agent
Next, had to remove relevant settings from elasticsearch.yml:
#shield.enabled: false (no longer needed)
#marvel.agent.interval: 30s
Now I can at least curl localhost:9200 and see that ES is up and running. Let's reinstall those plugins.
marvel-agent and license are not known plugins in ES 5.0; they've been rolled into X-Pack. Coming back to that: I installed x-pack with both Elasticsearch (bin/elasticsearch-plugin install x-pack) and Kibana (bin/kibana-plugin install x-pack), plus the appropriate license file. Everything is good now.
I did have two indices that had some kind of field_stats issue. I reindexed and they're fine now.
Was getting this error, but only when PUTing to an index (to create it), and only with a really large mapping:
*1 upstream sent too big header while reading response header from upstream,
Many internet sources recommended fastcgi settings, but they didn't do the trick. I'm hoping future googlers find this post. Had to add this to my /etc/nginx.conf file in the http {} block:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
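For context, here's roughly where those two lines sit; the server_name and the rest of the proxy setup are assumptions about my config, not part of the fix:

```nginx
http {
    # Larger buffers so a big response header from Elasticsearch
    # doesn't trigger "upstream sent too big header"
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;

    server {
        listen 80;
        server_name es.example.com;            # assumption: your proxy hostname

        location / {
            proxy_pass http://127.0.0.1:9200;  # Elasticsearch default port
        }
    }
}
```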
Mapping changes for 5.x:
* string, not_analyzed --> keyword
* most of my floats I changed to scaled_float, with a scaling_factor of 1000 (or so)
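As a sketch, a 5.x mapping using both new types (the index, type, and field names here are made up):

```json
PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "status": { "type": "keyword" },
        "price":  { "type": "scaled_float", "scaling_factor": 1000 }
      }
    }
  }
}
```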
Just had a few things that we were using that were deprecated in 2.x and removed in 5.0:
* Changed filtered queries to bool
* Changed 'query' to must
* Changed 'include' to 'includes'
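A before/after sketch of the filtered-to-bool change; the first request is the old 2.x form, the second is the 5.x equivalent (the match and term clauses are placeholders):

```json
GET my_index/_search
{
  "query": {
    "filtered": {
      "query":  { "match": { "title": "elasticsearch" } },
      "filter": { "term":  { "status": "published" } }
    }
  }
}

GET my_index/_search
{
  "query": {
    "bool": {
      "must":   { "match": { "title": "elasticsearch" } },
      "filter": { "term":  { "status": "published" } }
    }
  }
}
```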
The only real surprise was that Painless is now the default scripting language, so for existing Groovy scripts you have to specify:
* {"inline": "... original script", "lang": "groovy"}
Two (of my forty or so) indices had this issue where they would query and index fine, but a query to _field_stats would cause an ES exception, which in turn caused Kibana to not start.
I just reindexed as it was easier than trying to figure out what the problem was, but it was a bit surprising.
Error:
$ curl localhost:9200/my_index/_field_stats?fields=*
{"error":{"root_cause":[{"type":"array_index_out_of_bounds_exception","reason":null}],"type":"array_index_out_of_bounds_exception","reason":null},"status":500}
They were fine after reindexing. I wanted to reindex anyway to take advantage of keyword and scaled_float, so it wasn't a big deal.
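The reindex itself is easy in 5.0 now that the _reindex API is built in; a minimal sketch (index names are placeholders, and the destination index should be created with the new mapping first):

```json
POST _reindex
{
  "source": { "index": "my_index" },
  "dest":   { "index": "my_index_v2" }
}
```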
After the upgrade, Timelion didn't want to start. This fixed it right up:
curl -XPUT http://IP:9200/.kibana/_mapping/timelion-sheet -d '{"timelion-sheet":{"properties":{"title":{"type":"string"},"hits":{"type":"long"},"description":{"type":"string"},"timelion_sheet":{"type":"string"},"timelion_interval":{"type":"string"},"timelion_other_interval":{"type":"string"},"timelion_chart_height":{"type":"integer"},"timelion_columns":{"type":"integer"},"timelion_rows":{"type":"integer"},"version":{"type":"long"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"string"}}}}}}'