I hereby claim:
- I am markwalkom on github.
- I am markwalkom (https://keybase.io/markwalkom) on keybase.
- I have a public key whose fingerprint is 3624 D73D 1018 8785 6475 F84D 5CA5 78CB 1845 5C92
To claim this, I am signing this object:
server {
  listen *:80;
  server_name kibana.domain.com;

  access_log /var/log/nginx/kibana_access.log;
  error_log /var/log/nginx/kibana_error.log;

  location /kibana {
    root /var/www;
    index index.html;
  }
}
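Once this is dropped into your nginx config (the include path depends on your layout, e.g. `/etc/nginx/conf.d/`), validate and reload:

```
# Check the config parses cleanly before applying it
sudo nginx -t
# Reload without dropping in-flight connections
sudo service nginx reload
```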
This assumes a 6 node cluster - one master aggregator (172.16.1.1), one aggregator (172.16.1.2) and four leaves (172.16.1.3-6) - and that you have a download URL from MemSQL.
## On all nodes
apt-get install g++ mysql-client-core-5.5
wget ${MEMSQL_DOWNLOAD_URL}
dpkg -i ${MEMSQL_DOWNLOAD} # the .deb fetched above

# Check that things are running and you can connect
mysql -u root -h 127.0.0.1 -P 3306 --prompt="memsql> "

# Install collectd
apt-get install libtool
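With the package on every node, the cluster is wired together from the master aggregator. A minimal sketch, assuming MemSQL's `ADD AGGREGATOR`/`ADD LEAF` SQL commands, a passwordless `root` user, and the default port 3306; the IPs follow the layout above:

```
# Run against the master aggregator (172.16.1.1)
mysql -u root -h 172.16.1.1 -P 3306 -e "ADD AGGREGATOR root@'172.16.1.2':3306;"
mysql -u root -h 172.16.1.1 -P 3306 -e "ADD LEAF root@'172.16.1.3':3306;"
mysql -u root -h 172.16.1.1 -P 3306 -e "ADD LEAF root@'172.16.1.4':3306;"
mysql -u root -h 172.16.1.1 -P 3306 -e "ADD LEAF root@'172.16.1.5':3306;"
mysql -u root -h 172.16.1.1 -P 3306 -e "ADD LEAF root@'172.16.1.6':3306;"
# Confirm the topology
mysql -u root -h 172.16.1.1 -P 3306 -e "SHOW AGGREGATORS; SHOW LEAVES;"
```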
## For a node | |
This tells ES not to move shards around when a node (or nodes) drops out of the cluster:
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'`
`sudo service elasticsearch restart`
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'`
When you are done and the node (or nodes) are back, they will reinitialise their local shards and your cluster should be back to green as fast as the local node can work.
## For a cluster |
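The same allocation trick extends to a full-cluster restart: disable allocation once, restart every node, and only re-enable it after all nodes have rejoined. A sketch under that assumption, reusing the same calls as the per-node steps:

```
# Disable allocation across the cluster
curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
# Restart elasticsearch on every node
# (e.g. `sudo service elasticsearch restart` on each)
# Once all nodes are back, re-enable allocation
curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'
# Watch recovery until the cluster reports green
curl eshost.example.com:9200/_cluster/health?pretty
```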
Name | Description
---|---
action.allow_id_generation | -
action.auto_create_index | -
action.bulk.compress | -
action.destructive_requires_name | http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/_parameters.html#_parameters
action.disable_shutdown | http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-shutdown.html#_disable_shutdown
action.get.realtime | http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-get.html#realtime
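Many of these are dynamic and can be changed at runtime through the cluster settings API rather than `elasticsearch.yml`. For example, enabling `action.destructive_requires_name` (so destructive calls like delete-index must name indices explicitly) looks like the call below; the hostname is a placeholder, and whether a given setting is dynamic depends on your ES version:

```
curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"persistent":{"action.destructive_requires_name":true}}'
```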
input {
  elasticsearch {
    hosts => [ "HOSTNAME_HERE" ]
    port => "9200"
    index => "INDEXNAME_HERE"
    size => 1000
    scroll => "5m"
    docinfo => true
    scan => true
  }
}
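To run this config (or the variant below), point Logstash at the file; `reindex.conf` is a hypothetical name for wherever you saved it:

```
logstash-2.2.0/bin/logstash -f reindex.conf
```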
input {
  elasticsearch {
    hosts => [ "HOSTNAME_HERE" ]
    port => "9200"
    index => "INDEXNAME_HERE"
    size => 500
    scroll => "5m"
  }
}
output { |
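  # The output body is elided in the original. For a reindex the usual
  # destination is another elasticsearch output; the hosts and index
  # values below are placeholders in the same style as the input above.
  elasticsearch {
    hosts => [ "HOSTNAME_HERE" ]
    index => "NEWINDEXNAME_HERE"
  }
}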
This is an example of using ELK to parse and view collectd data.
Caveat: I haven't fully tested this mapping yet. It doesn't take into account any fields that other collectd plugins may add, only the ones I have specified below.
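The usual way to feed collectd into Logstash is to point collectd's `network` plugin at a UDP input running the collectd codec. A minimal sketch, assuming collectd's default port 25826:

```
input {
  udp {
    port => 25826        # collectd network plugin default
    buffer_size => 1452  # matches collectd's default packet size
    codec => collectd { }
  }
}
```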
F2B_DATE %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[ ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})
F2B_ACTION (\w+)\.(?:\w+)(\s+)?\:
F2B_JAIL \[(?<jail>\w+\-?\w+?)\]
F2B_LEVEL (?<level>\w+)\s+
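These fail2ban patterns are meant to be dropped into a patterns directory and referenced from a grok filter. A sketch; the `patterns_dir` path and the composite match string are assumptions, not part of the original:

```
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]  # wherever the F2B_* file lives
    match => { "message" => "%{F2B_DATE:timestamp} %{F2B_ACTION} %{F2B_LEVEL} %{F2B_JAIL} %{GREEDYDATA:msg}" }
  }
}
```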
$ logstash-2.2.0/bin/plugin list
logstash-codec-avro
logstash-codec-cef
logstash-codec-cloudfront
logstash-codec-cloudtrail
logstash-codec-collectd
logstash-codec-compress_spooler
logstash-codec-dots
logstash-codec-edn
logstash-codec-edn_lines