Notes on upgrading to Elasticsearch 5.0.1 from 2.4

First get the latest ES (2.4):

# apt-get update
# apt-get install elasticsearch

After the update, I had to reinstall the license and marvel plugins:
# bin/plugin remove license
# bin/plugin remove marvel-agent
# bin/plugin install license
# bin/plugin install marvel-agent

Set up repos

Instructions from

# wget -qO - | sudo apt-key add -
# sudo apt-get install apt-transport-https
# echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
# sudo apt-get update
# apt-get install elasticsearch
# apt-get install kibana

Decrease heap size

With 2GB of RAM on Digital Ocean, 5.0.1 refused to start (due to memory allocation issues), even though 2.4 was OK.

Added this in my /etc/init.d/elasticsearch script:

# Additional Java OPTS
ES_JAVA_OPTS="-Xms1g -Xmx1g"
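
If you installed from the deb package, note that 5.x also reads heap settings from /etc/elasticsearch/jvm.options, so the same limit can be set there instead (a sketch; only change these two lines):

```
# /etc/elasticsearch/jvm.options
-Xms1g
-Xmx1g
```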

Plugin Woes

Had to remove the plugins: elasticsearch-migration is no longer needed, head is now part of core, and I had to reinstall license and marvel-agent:

cd /usr/share/elasticsearch
bin/elasticsearch-plugin list   # repeat this until ES starts, removing incompatible plugins along the way
bin/elasticsearch-plugin remove elasticsearch-migration
bin/elasticsearch-plugin remove head
bin/elasticsearch-plugin remove license
bin/elasticsearch-plugin remove marvel-agent
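
The removals above can be collapsed into a loop. A sketch, assuming these are the four plugins that `elasticsearch-plugin list` reports (adjust the list to whatever yours shows):

```
cd /usr/share/elasticsearch
for plugin in elasticsearch-migration head license marvel-agent; do
    bin/elasticsearch-plugin remove "$plugin"
done
```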

Next, had to remove relevant settings from elasticsearch.yml:

#shield.enabled: false (no longer needed)
#marvel.agent.interval: 30s

Now I can at least curl localhost:9200 and see that ES is up and running. Let's reinstall those plugins.

marvel-agent and license are not known plugins in ES 5.0... I came back to that later: I installed x-pack for both ES and Kibana, plus the appropriate license file. Everything is good now.

I did have two indices that had some kind of field_stats issue. I reindexed and they're fine now.

Nginx Proxy Issues

Was getting this error, but only when PUTing to an index (to create it), and only with a really large mapping:

*1 upstream sent too big header while reading response header from upstream,

Many internet sources recommended fastcgi settings, but they didn't do the trick. I'm hoping future googlers find this post. I had to add this to my /etc/nginx.conf file in the http {} block:

proxy_buffer_size 128k;
proxy_buffers 4 256k;
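
For context, a minimal sketch of where those directives live. The server/location details below are assumptions for illustration; only the two proxy_buffer lines come from the actual fix:

```
http {
    # Large ES responses overflow the default proxy buffers,
    # causing "upstream sent too big header".
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;

    server {
        listen 80;
        location / {
            proxy_pass http://localhost:9200;  # assumed ES upstream
        }
    }
}
```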

Upgrading Mappings

  * string, not_analyzed --> keyword
  * most of my floats I changed to scaled_float, scaling_factor of 1000 (or so)
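
As a sketch of both changes with made-up field names: a 2.x field mapped as {"type": "string", "index": "not_analyzed"} becomes keyword, and a plain float becomes a scaled_float:

```
{
  "properties": {
    "status": {"type": "keyword"},
    "price":  {"type": "scaled_float", "scaling_factor": 1000}
  }
}
```

scaled_float stores the value multiplied by the scaling factor as a long, so a factor of 1000 keeps three decimal places.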

Upgrading Queries

Just had a few things that we were using that were deprecated in 2.x:

  * Changed filtered queries to bool queries
  * Changed 'query' to 'must'
  * Changed 'include' to 'includes'

The only real surprise was that Painless is now the default scripting language, so to keep using Groovy scripts you have to specify:
  * {"inline": "... original script", "lang": "groovy"}
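
A hypothetical example of the filtered-to-bool rewrite (field names are made up for illustration):

```
# 2.x filtered query:
{"query": {"filtered": {
  "query":  {"match": {"title": "foo"}},
  "filter": {"term":  {"status": "active"}}
}}}

# 5.x bool equivalent:
{"query": {"bool": {
  "must":   {"match": {"title": "foo"}},
  "filter": {"term":  {"status": "active"}}
}}}
```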

Index Out of Bounds Error

Two (of my forty or so) indices had this issue: they would query and index fine, but a query to _field_stats would cause an ES exception. This in turn caused Kibana to not start.

I just reindexed as it was easier than trying to figure out what the problem was, but it was a bit surprising.


$ curl localhost:9200/my_index/_field_stats?fields=*

They were fine after the reindex. I wanted to reindex anyway to take advantage of keywords and scaled_floats, so it wasn't a big deal.
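
The reindex itself can be done with the built-in _reindex API (index names here are placeholders):

```
curl -XPOST localhost:9200/_reindex -d '{
  "source": {"index": "my_index"},
  "dest":   {"index": "my_index_v2"}
}'
```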

Upgrading TimeLion

After upgrade, TimeLion didn't want to start. This fixed it right up:

curl -XPUT http://IP:9200/.kibana/_mapping/timelion-sheet -d '{
  "timelion-sheet": {
    "properties": {
      "title": {"type": "string"},
      "hits": {"type": "long"},
      "description": {"type": "string"},
      "timelion_sheet": {"type": "string"},
      "timelion_interval": {"type": "string"},
      "timelion_other_interval": {"type": "string"},
      "timelion_chart_height": {"type": "integer"},
      "timelion_columns": {"type": "integer"},
      "timelion_rows": {"type": "integer"},
      "version": {"type": "long"},
      "kibanaSavedObjectMeta": {
        "properties": {
          "searchSourceJSON": {"type": "string"}
        }
      }
    }
  }
}'