@grachevko
Created November 7, 2018 14:24
Often, the Docker daemon is started with too low a limit for lockable memory. That's the limit you are seeing in your logs:
Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
The containers inherit this limit, which then becomes a problem for Elasticsearch when it tries to lock more memory than allowed.
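You can confirm that containers inherit the daemon's limit by checking ulimit -l inside a throwaway container. A minimal check, assuming you can pull the stock alpine image:

# ulimit -l reports the memlock soft limit in KiB;
# under the default 65536-byte limit this should print 64.
docker run --rm alpine sh -c 'ulimit -l'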
If you look at your Docker daemon's limits, you will probably see this:
# grep locked /proc/$(ps --no-headers -o pid -C dockerd | tr -d ' ')/limits
Max locked memory 65536 65536 bytes
Whereas we would much prefer to see:
# grep locked /proc/$(ps --no-headers -o pid -C dockerd | tr -d ' ')/limits
Max locked memory unlimited unlimited bytes
How to change the limit depends on your system, but on my fairly standard Ubuntu system (using systemd) I was able to do this:
echo -e "[Service]\nLimitMEMLOCK=infinity" | SYSTEMD_EDITOR=tee systemctl edit docker.service
systemctl daemon-reload
systemctl restart docker
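After the restart, the same grep should report unlimited, and freshly started containers should inherit it. A quick verification, again assuming the alpine image is available:

# The daemon's limits should now read "unlimited".
grep locked /proc/$(ps --no-headers -o pid -C dockerd | tr -d ' ')/limits
# A fresh container should inherit it and print "unlimited".
docker run --rm alpine sh -c 'ulimit -l'

If you would rather not raise the daemon-wide default, Docker can also lift the limit for a single container with the --ulimit flag (or the equivalent ulimits key in a Compose file):

# Raise memlock for this container only: -1 means unlimited (soft:hard).
docker run --rm --ulimit memlock=-1:-1 alpine sh -c 'ulimit -l'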
https://github.com/elastic/elasticsearch-docker/issues/152#issuecomment-372903395