curl --request GET \
--url http://localhost:9200/_search \
--header 'Content-Type: application/json' \
--header 'cache-control: no-cache' \
--data '{
  "query": {
    "bool": {
      "filter": {
        "range": {
          "@timestamp": {
            "lt": "2019-01-01"
          }
        }
      }
    }
  }
}'
GET /_cat/indices?v&s=store.size:desc
DELETE /*2018*
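The same two steps can be run from the shell; this sketch assumes the cluster listens on localhost:9200, as in the query above:

```shell
# List indices sorted by store size (largest first)
curl -XGET 'http://localhost:9200/_cat/indices?v&s=store.size:desc'

# Delete every index whose name contains "2018"
curl -XDELETE 'http://localhost:9200/*2018*'
```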
Elasticsearch sets this flag to true on all indices when disk space is low (https://www.elastic.co/guide/en/elasticsearch/reference/6.7/disk-allocator.html). Reset it with:
PUT _settings
{
"index.blocks.read_only_allow_delete": null
}
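The same reset as a shell command (again assuming localhost:9200):

```shell
# Clear the read-only-allow-delete block on all indices
curl -XPUT 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{ "index.blocks.read_only_allow_delete": null }'
```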
- Number of replicas: ideally equal to the number of replica nodes
- Number of cluster nodes = primary node + replica nodes
- Number of shards = 1.5 to 3 times the number of nodes
- Example: in an ES cluster with 3 nodes, each index should have at most 9 (3 * 3) shards
- refs:
- Note that with a single-node strategy you don't need any replicas, unless you want the data duplicated (for backup purposes)
- For indices that are rotated daily (one index per day), a single shard is enough up to roughly 2 billion documents
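As a sketch, creating a daily index that follows these rules on a single-node cluster (the index name is hypothetical):

```
PUT logs-app-myapp-2019.01.01
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```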
How to save RAM:
- Close indices after X (e.g. 15) days, achievable with Curator

How to save disk space:
- Delete indices after X + Y (e.g. 30) days, achievable with Curator
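Both rules can be expressed in a single Curator action file. This is a sketch assuming daily indices named with a `logs-` prefix and a `YYYY.MM.DD` date suffix; adjust the prefix, timestring, and day counts to your setup:

```yaml
# actions.yml - run with: curator --config curator.yml actions.yml
actions:
  1:
    action: close
    description: "Close indices older than 15 days"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logs-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 15
  2:
    action: delete_indices
    description: "Delete indices older than 30 days"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logs-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```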
Note that the highest-priority matching template will be the chosen one.
POST _index_template/default-single-node
{
"priority": 1000,
"template": {
"settings": {
"index": {
"number_of_shards": "1",
"number_of_replicas": "0",
"refresh_interval": "10s"
}
}
},
"index_patterns": [
"*"
],
"composed_of": []
}
Legacy _template API (for ES versions before composable templates):
POST _template/default-cluster
{
"index_patterns" : ["*"],
"order" : 0,
"settings" : {
"refresh_interval" : "10s",
"number_of_shards" : "1",
"number_of_replicas" : "0"
}
}
- Index Lifecycle Management (ILM) is a built-in alternative to Curator (an external tool) for managing old indices. Requires ES >= 6.8.
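A minimal ILM policy implementing the delete-after-30-days rule above (the policy name is hypothetical):

```
PUT _ilm/policy/cleanup-after-30d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```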
Another great tutorial: https://bonsai.io/blog/ideal-elasticsearch-cluster.html
Example from here:
logs-system-{date}
logs-iis-{date}
logs-prometheus-{date}
logs-app-{applicationName}-{date}