Mark Walkom (markwalkom)

@markwalkom
markwalkom / kibana.conf
Last active December 31, 2015 19:29
Proxied Kibana via nginx. This saves your local desktop from connecting directly to the ES cluster.
server {
  listen *:80;
  server_name kibana.domain.com;
  access_log /var/log/nginx/kibana_access.log;
  error_log /var/log/nginx/kibana_error.log;
  location /kibana {
    root /var/www;
    index index.html;
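  }

  # (preview truncated above) hypothetical completion, not from the original gist:
  # proxy ES API calls through nginx so the desktop never hits the cluster directly;
  # the upstream hostname below is a placeholder
  location /es/ {
    proxy_pass http://eshost.example.com:9200/;
    proxy_read_timeout 90;
  }
}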
@markwalkom
markwalkom / MemSQL install
Last active August 29, 2015 13:55
MemSQL setup on Ubuntu
This assumes a six-node cluster - one master aggregator (172.16.1.1), one aggregator (172.16.1.2) and four leaves (172.16.1.3-6) - and that you have a download URL from MemSQL.
## On all nodes
apt-get install g++ mysql-client-core-5.5
wget ${MEMSQL_DOWNLOAD_URL}
dpkg -i ${MEMSQL_DOWNLOAD}
# Check that things are running and you can connect
mysql -u root -h 127.0.0.1 -P 3306 --prompt="memsql> "
# Install collectd
apt-get install libtool
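## On the master aggregator only - hypothetical sketch, not part of the original gist:
## register the other nodes from the memsql> prompt (double-check the ADD AGGREGATOR /
## ADD LEAF syntax against the docs for your MemSQL release)
ADD AGGREGATOR root@'172.16.1.2':3306;
ADD LEAF root@'172.16.1.3':3306;
ADD LEAF root@'172.16.1.4':3306;
ADD LEAF root@'172.16.1.5':3306;
ADD LEAF root@'172.16.1.6':3306;
SHOW AGGREGATORS;
SHOW LEAVES;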
@markwalkom
markwalkom / ES restart + recovery
Last active August 29, 2015 13:56
Elasticsearch - restarting and recovering
## For a node
This tells ES not to move shards around if a node (or nodes) drops out of the cluster:
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable": "none"}}'`
`sudo service elasticsearch restart`
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable": "all"}}'`
When you are done and the node(s) are back, they will reinitialise their local shards and your cluster should be back to green as fast as the local node can work.
## For a cluster
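(The preview is truncated here. As a hedged sketch - not the original gist content - the cluster-wide version is the same allocation toggle wrapped around a full restart, reusing the endpoint from above.)
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable": "none"}}'`
Restart or stop every node, wait for them all to rejoin, then re-enable allocation:
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable": "all"}}'`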
@markwalkom
markwalkom / keybase.md
Created August 17, 2014 08:08
keybase.md

Keybase proof

I hereby claim:

  • I am markwalkom on github.
  • I am markwalkom (https://keybase.io/markwalkom) on keybase.
  • I have a public key whose fingerprint is 3624 D73D 1018 8785 6475 F84D 5CA5 78CB 1845 5C92

To claim this, I am signing this object:

@markwalkom
markwalkom / logstash.conf
Last active April 29, 2022 10:23
Reindexing Elasticsearch with Logstash 2.0
input {
  elasticsearch {
    hosts => [ "HOSTNAME_HERE" ]
    port => "9200"
    index => "INDEXNAME_HERE"
    size => 1000
    scroll => "5m"
    docinfo => true
    scan => true
  }
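}
# (preview truncated above) a minimal sketch of a matching output for the reindex,
# not the original gist content - it relies on the [@metadata] fields populated by
# docinfo => true; the target hostname is a placeholder
output {
  elasticsearch {
    hosts => [ "TARGET_HOSTNAME_HERE" ]
    index => "%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}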
@markwalkom
markwalkom / gist:f47a30e37cd402f2dc5d
Last active August 29, 2015 14:21
Export from ES to a json file
input {
  elasticsearch {
    hosts => [ "HOSTNAME_HERE" ]
    port => "9200"
    index => "INDEXNAME_HERE"
    size => 500
    scroll => "5m"
  }
}
output {
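  # (preview truncated above) a hedged sketch of the rest, not the original gist
  # content - the path below is a placeholder; the file output writes one JSON
  # document per line by default (json_lines codec)
  file {
    path => "/tmp/INDEXNAME_HERE.json"
  }
}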
@markwalkom
markwalkom / README.md
Last active July 25, 2016 14:46
CollectD to ELK

This is an example of using ELK to parse and view collectd data.

Caveat: I haven't fully tested this mapping yet, and it doesn't take into account any fields that may be added by other collectd plugins, just the ones I have specified below.
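As a minimal sketch of the ingest side (not part of the original gist; the port and buffer size are just collectd's network plugin defaults), a Logstash input using the collectd codec looks like this:

input {
  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd { }
  }
}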

@markwalkom
markwalkom / gist:cd8b4a9f82c442079284
Created December 28, 2015 21:48
fail2ban patterns
F2B_DATE %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[ ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})
F2B_ACTION (\w+)\.(?:\w+)(\s+)?\:
F2B_JAIL \[(?<jail>\w+\-?\w+?)\]
F2B_LEVEL (?<level>\w+)\s+
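These patterns are meant to sit in a grok patterns directory and be combined in a filter. A hedged sketch (not from the original gist), assuming a stock fail2ban log layout of DATE daemon.component: LEVEL [jail] message, with a hypothetical patterns path:

filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{F2B_DATE:timestamp} %{F2B_ACTION} %{F2B_LEVEL}%{F2B_JAIL} %{GREEDYDATA:f2b_message}" }
  }
}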
$ logstash-2.2.0/bin/plugin list
logstash-codec-avro
logstash-codec-cef
logstash-codec-cloudfront
logstash-codec-cloudtrail
logstash-codec-collectd
logstash-codec-compress_spooler
logstash-codec-dots
logstash-codec-edn
logstash-codec-edn_lines