Mark Walkom (markwalkom)
@markwalkom
markwalkom / MemSQL install
Last active August 29, 2015 13:55
MemSQL setup on Ubuntu
This assumes a 6 node cluster - one master aggregator (172.16.1.1), one aggregator (172.16.1.2) and 4 leaves (172.16.1.3-6) - and that you have a download URL from MemSQL.
## On all nodes
apt-get install g++ mysql-client-core-5.5
wget ${MEMSQL_DOWNLOAD_URL}
dpkg -i ${MEMSQL_DOWNLOAD}   # the .deb package fetched above
# Check that things are running and you can connect
mysql -u root -h 127.0.0.1 -P 3306 --prompt="memsql> "
# Install collectd
apt-get install libtool
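# A hedged sketch (not from the original gist): the cluster is then wired together
# from the master aggregator. ADD AGGREGATOR / ADD LEAF / SHOW LEAVES reflect the
# MemSQL of that era; exact syntax may differ between versions, check your release's docs.
## On the master aggregator (172.16.1.1) only
mysql -u root -h 127.0.0.1 -P 3306 -e "ADD AGGREGATOR root@'172.16.1.2':3306"
mysql -u root -h 127.0.0.1 -P 3306 -e "ADD LEAF root@'172.16.1.3':3306"
mysql -u root -h 127.0.0.1 -P 3306 -e "ADD LEAF root@'172.16.1.4':3306"
mysql -u root -h 127.0.0.1 -P 3306 -e "ADD LEAF root@'172.16.1.5':3306"
mysql -u root -h 127.0.0.1 -P 3306 -e "ADD LEAF root@'172.16.1.6':3306"
mysql -u root -h 127.0.0.1 -P 3306 -e "SHOW LEAVES"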
@markwalkom
markwalkom / ES restart + recovery
Last active August 29, 2015 13:56
Elasticsearch - restarting and recovering
## For a node
This tells ES not to move shards around if one or more nodes drop out of the cluster:
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'`
`sudo service elasticsearch restart`
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'`
When you are done and the nodes are back, they will reinitialise their local shards and your cluster should be back to green as fast as the local nodes can work.
## For a cluster
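As a sketch (not from the original gist), the cluster-wide version uses the same settings API: disable allocation once, restart every node, re-enable allocation, then watch health until the cluster is green:
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'`
`sudo service elasticsearch restart` (on each node in turn)
`curl -XPUT eshost.example.com:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'`
`curl eshost.example.com:9200/_cat/health?v`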
@markwalkom
markwalkom / keybase.md
Created August 17, 2014 08:08
keybase.md

Keybase proof

I hereby claim:

  • I am markwalkom on github.
  • I am markwalkom (https://keybase.io/markwalkom) on keybase.
  • I have a public key whose fingerprint is 3624 D73D 1018 8785 6475 F84D 5CA5 78CB 1845 5C92

To claim this, I am signing this object:

@markwalkom
markwalkom / gist:f47a30e37cd402f2dc5d
Last active August 29, 2015 14:21
Export from ES to a json file
input {
  elasticsearch {
    hosts => [ "HOSTNAME_HERE" ]
    port => "9200"
    index => "INDEXNAME_HERE"
    size => 500
    scroll => "5m"
  }
}
output {
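  # A sketch of the output side (not from the original gist), assuming a local file
  # destination and the json_lines codec; the path is illustrative.
  file {
    path => "/tmp/INDEXNAME_HERE.json"
    codec => "json_lines"
  }
}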
@markwalkom
markwalkom / gist:cd8b4a9f82c442079284
Created December 28, 2015 21:48
fail2ban patterns
F2B_DATE %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[ ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})
F2B_ACTION (\w+)\.(?:\w+)(\s+)?\:
F2B_JAIL \[(?<jail>\w+\-?\w+?)\]
F2B_LEVEL (?<level>\w+)\s+
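A sketch of wiring these patterns into a Logstash grok filter, assuming they are saved under a patterns_dir; the directory, field names, and match string are illustrative and depend on your fail2ban log format:
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{F2B_DATE:timestamp} %{F2B_ACTION} %{F2B_LEVEL}%{F2B_JAIL} %{GREEDYDATA:fail2ban_msg}" }
  }
}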
@markwalkom
markwalkom / kibana.conf
Last active December 31, 2015 19:29
Proxied Kibana via nginx. This saves your local desktop from connecting directly to the ES cluster.
server {
  listen *:80;
  server_name kibana.domain.com;
  access_log /var/log/nginx/kibana_access.log;
  error_log /var/log/nginx/kibana_error.log;
  location /kibana {
    root /var/www;
    index index.html;
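    # A sketch of how the rest of this config might look (not from the original gist),
    # assuming Kibana 3-style static files and ES proxied on a separate path;
    # the /es/ path and upstream host are illustrative.
  }
  location /es/ {
    proxy_pass http://eshost.example.com:9200/;
  }
}
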
$ logstash-2.2.0/bin/plugin list
logstash-codec-avro
logstash-codec-cef
logstash-codec-cloudfront
logstash-codec-cloudtrail
logstash-codec-collectd
logstash-codec-compress_spooler
logstash-codec-dots
logstash-codec-edn
logstash-codec-edn_lines
@markwalkom
markwalkom / missing-fields-query.json
Created May 18, 2016 05:13
Via Kibana, only show documents that have a missing field
# Via https://smelloworld.wordpress.com/2016/05/17/missing-fields-search-in-elasticsearch/
{"query":{"filtered":{"query":{"match_all":{}},"filter":{"missing":{"field":"FIELDNAME"}}}}}
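# On newer Elasticsearch versions (5.x+), where the filtered query and missing filter no longer exist, the equivalent (a sketch) is a bool query with must_not + exists:
{"query":{"bool":{"must_not":{"exists":{"field":"FIELDNAME"}}}}}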
@markwalkom
markwalkom / jqtips.md
Last active May 21, 2016 09:27
jq tips

Elasticsearch

Sum number of docs in a cluster

cat nodes_stats.json|jq '.nodes[].indices.docs.count'|awk '{s+=$0} END {print s}'

Sum total store size

cat nodes_stats.json|jq '.nodes[].indices.store.size_in_bytes'|awk '{s+=$0} END {print s}'
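
Both sums can also be done in jq alone (no awk), by collecting the values into an array and using add:

jq '[.nodes[].indices.docs.count] | add' nodes_stats.json
jq '[.nodes[].indices.store.size_in_bytes] | add' nodes_stats.json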

Working with the swapi data

Get a list of planets + key for translate lookup

cat people.json | jq -r '.[]|"\"\(.pk)\"" + ": " + "\"\(.fields.name)\""'