@pitipon
Forked from k-xzc/1-workshop-elasticsearch
Created June 22, 2019 08:57
0. Elasticsearch installation (CentOS)
0.5 Run Elasticsearch with Docker
1. elasticsearch.yml
2. REST API: cluster information, _cat information
3. Create index, insert data, search data
4. Update row, delete row, delete index
5. Basic query conditions
6. Export/import data with Node.js
### Prerequisite: Java >= 1.8
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat <<EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
sudo yum install elasticsearch
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
curl -X GET "localhost:9200/"
### Prerequisite: Docker installed
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.1
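On a Linux host, the official Elasticsearch image also needs `vm.max_map_count` of at least 262144 or the container exits during bootstrap checks. A quick pre-flight check (sketch):

```shell
# Docker-based Elasticsearch needs vm.max_map_count >= 262144 on the Linux host;
# read the current value (fall back to 0 if sysctl is unavailable) and suggest the fix
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
echo "vm.max_map_count=$current"
if [ "$current" -lt 262144 ]; then
  echo "raise it with: sudo sysctl -w vm.max_map_count=262144"
fi
```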
### /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: codemania
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
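The template above is mostly comments; a quick way to see which settings are actually active is to filter out comment and blank lines. The sample file below mirrors the values set in this workshop:

```shell
# sample file mirroring the active settings above; grep strips comments and blank lines
cat > /tmp/elasticsearch.yml <<'EOF'
cluster.name: codemania
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
EOF
grep -Ev '^[[:space:]]*(#|$)' /tmp/elasticsearch.yml
```

The same grep works on the real /etc/elasticsearch/elasticsearch.yml.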
### cluster information
curl localhost:9200
### cat command
curl localhost:9200/_cat
curl localhost:9200/_cat/nodes
curl localhost:9200/_cat/health
curl localhost:9200/_cat/indices
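The _cat endpoints also accept a `?v` flag that adds column headers, which makes the output much easier to read. A sketch, assuming the node from the steps above is listening on :9200 (with a fallback message if it is not):

```shell
# '?v' adds column headers to any _cat endpoint; print a message if no node is up
out=$(curl -s 'localhost:9200/_cat/health?v' || echo 'node not reachable')
echo "$out"
```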
### CREATE index, insert one row
curl -X POST --header 'Content-Type: application/json' \
--data '{"name":"Kant","age":27,"timestamp":"2019-06-22T14:00:00Z"}' \
localhost:9200/info/info
### see the result
curl localhost:9200/info/_search
### insert another row
curl -X POST --header 'Content-Type: application/json' \
--data '{"name":"Earn","age":25,"timestamp":"2019-06-22T14:00:00Z"}' \
localhost:9200/info/info
### see the result
curl localhost:9200/info/_search
### Update the row you just inserted
### replace __row_id__ with the _id of the row you want to update
curl -X PUT --header 'Content-Type: application/json' \
--data '{"name":"Kant","age":27.5,"timestamp":"2019-06-22T15:00:00Z"}' \
localhost:9200/info/info/__row_id__
### see the result
curl localhost:9200/info/_search
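Note that the PUT above replaces the whole document. To change only some fields, Elasticsearch has an _update endpoint that takes a partial document under a "doc" key. A sketch (`__row_id__` is still a placeholder for a real _id):

```shell
# partial update: only the fields under "doc" change, the rest of the row is kept
cat > /tmp/partial.json <<'EOF'
{"doc":{"age":28}}
EOF
# send it with (running node and real row id assumed):
# curl -X POST --header 'Content-Type: application/json' \
#   --data @/tmp/partial.json localhost:9200/info/info/__row_id__/_update
```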
### Delete one row data
curl -X DELETE localhost:9200/info/info/__row_id__
### see the result
curl localhost:9200/info/_search
### Delete the index
curl -X DELETE localhost:9200/info/
### see the result
curl localhost:9200/info/_search
### CREATE index, and insert some rows
curl -X POST --header 'Content-Type: application/json' \
--data '{"name":"Kant","age":27,"timestamp":"2019-06-22T14:00:00Z"}' \
localhost:9200/info/info
curl -X POST --header 'Content-Type: application/json' \
--data '{"name":"Earn","age":25,"timestamp":"2019-06-22T14:00:00Z"}' \
localhost:9200/info/info
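Inserting rows one curl at a time gets slow for anything beyond a demo; the _bulk endpoint takes newline-delimited JSON, alternating an action line and a document line. A sketch using the same two rows:

```shell
# _bulk payload: action line, then document line, repeated; the file must end with a newline
cat > /tmp/bulk.ndjson <<'EOF'
{"index":{"_index":"info","_type":"info"}}
{"name":"Kant","age":27,"timestamp":"2019-06-22T14:00:00Z"}
{"index":{"_index":"info","_type":"info"}}
{"name":"Earn","age":25,"timestamp":"2019-06-22T14:00:00Z"}
EOF
# send it with (running node assumed):
# curl -s --header 'Content-Type: application/x-ndjson' \
#   --data-binary @/tmp/bulk.ndjson localhost:9200/_bulk
```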
### select index mapping
curl localhost:9200/info
### Select with no condition
curl localhost:9200/info/_search
### Select limit 1 row
curl 'localhost:9200/info/_search?size=1'
### Select with condition
curl 'localhost:9200/info/_search?q=name:Kant'
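The `?q=` form covers simple lookups; more complex conditions go in a JSON request body using the query DSL. A sketch of a bool query combining a match and a range on the rows above:

```shell
# bool query: name must match "Kant" AND age must be >= 26
cat > /tmp/query.json <<'EOF'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "name": "Kant" } },
        { "range": { "age": { "gte": 26 } } }
      ]
    }
  }
}
EOF
# curl -s --header 'Content-Type: application/json' \
#   --data @/tmp/query.json localhost:9200/info/_search
```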
### Prerequisite: Node.js >= 8
### install elasticdump
npm install -g elasticdump
### Export to json
elasticdump \
--input=http://kantz.space:9200/bx \
--output=/opt/bx.json \
--type=data
### Import from json
elasticdump \
--input=/opt/bx.json \
--output=http://localhost:9200/ \
--type=data
### Export and Import in one command
elasticdump \
--input=http://kantz.space:9200/bx \
--output=http://localhost:9200/bx2 \
--type=mapping
elasticdump \
--input=http://kantz.space:9200/bx \
--output=http://localhost:9200/bx2 \
--type=data
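The mapping must be copied before the data, otherwise the target index gets auto-detected field types. A small loop keeps the order right (sketch; `--limit` is elasticdump's batch size, 100 by default):

```shell
# copy mapping first, then data, so bx2 keeps bx's field types
SRC=http://kantz.space:9200/bx
DST=http://localhost:9200/bx2
for t in mapping data; do
  echo "elasticdump --input=$SRC --output=$DST --type=$t --limit=500"
done
# drop the echo to actually run the commands
```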