joshuar / iptables-connection-sharing.sh
Last active December 27, 2020 16:04
Quick script to enable connection sharing (i.e. NAT) on an interface in Linux. Based on http://xmodulo.com/2014/06/internet-connection-sharing-iptables-linux.html
#!/bin/bash

while getopts "i:t:" opt; do
    case $opt in
        i)
            if ! ip link show "$OPTARG" > /dev/null 2>&1; then
                echo "Argument to -${opt} should be a network device."
                exit 1
            else
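
The rest of the script is cut off in this preview. For reference, a minimal sketch of the kind of connection-sharing rules the linked xmodulo article describes (the interface names are placeholders, not taken from the gist):

#!/bin/bash
# Sketch only: enable IPv4 forwarding and masquerade LAN traffic out the WAN side.
WAN_IF=eth0   # interface with the upstream internet connection (assumption)
LAN_IF=eth1   # interface whose clients should be NATed (assumption)

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o "$WAN_IF" -j MASQUERADE
iptables -A FORWARD -i "$WAN_IF" -o "$LAN_IF" -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i "$LAN_IF" -o "$WAN_IF" -j ACCEPT
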
joshuar / Three-Node-LustreFS-Cluster-Quickstart.md
Last active July 22, 2020 09:52
Quick three-node Lustre set-up on CentOS 6

Logstash custom patterns for Lustre

LUSTRE_OBJECT %{WORD}(-%{WORD}){1,3}
LUSTRE_LNET %{IP}@%{WORD}
LUSTRE_SOURCECODE (%{USERNAME}.c:%{INT})
LUSTRE_ERRCODE rc (=)? (%{INT:error_code}|%{INT}/%{INT})
LUSTRE_LOGPREFIX1 (Lustre|LustreError|LNetError): (%{WORD}-%{WORD}: )?%{LUSTRE_OBJECT:lustre_object}:
LUSTRE_LOGPREFIX2 (Lustre|LustreError|LNet|LNetError):%{SPACE}?%{WORD}:%{WORD}:\(%{LUSTRE_SOURCECODE:lustre_source}:%{USERNAME:lustre_function}\(\)\)
LUSTRE_LOGPREFIX3 (Lustre|LustreError|LNet|LNetError):
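
A hedged sketch of how a custom patterns file like this is typically wired into Logstash: save the definitions above into a file under a patterns directory and point a grok filter at it. The patterns_dir path and the target field are assumptions, not from the gist.

filter {
  grok {
    # Directory holding the custom LUSTRE_* pattern definitions (assumed path)
    patterns_dir => ["/etc/logstash/patterns"]
    # Try the most specific prefix first, then fall back to the simpler ones
    match => {
      "message" => [
        "%{LUSTRE_LOGPREFIX2} %{GREEDYDATA:lustre_message}",
        "%{LUSTRE_LOGPREFIX1} %{GREEDYDATA:lustre_message}",
        "%{LUSTRE_LOGPREFIX3} %{GREEDYDATA:lustre_message}"
      ]
    }
  }
}
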
joshuar / rpm-dispatch-conf.sh
Created November 20, 2014 03:55
Email a diff of .rpmnew and .rpmsave file changes to an admin for review
#!/usr/bin/env bash

mail_to=root
ignorefile=/etc/rpm-dispatch-conf.ignore
newfiles=$(find / -noleaf -ignore_readdir_race -xdev \( -name \*.rpmsave -or -name \*.rpmnew \) 2>/dev/null)

for f in $newfiles; do
    newfile=$f
    oldfile=$f
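
The preview stops here. A minimal sketch of how the diff-and-mail step implied by the description might continue, assuming the counterpart file is found by stripping the .rpmnew/.rpmsave suffix (the ignore-file check and the mail subject are assumptions):

    # Sketch only: pair each file with the currently installed config.
    case $f in
        *.rpmnew)  oldfile=${f%.rpmnew} ;;
        *.rpmsave) newfile=${f%.rpmsave} ;;
    esac
    # Skip anything the admin has chosen to ignore.
    grep -qxF "$f" "$ignorefile" 2>/dev/null && continue
    if ! cmp -s "$oldfile" "$newfile"; then
        diff -u "$oldfile" "$newfile" | mail -s "rpm-dispatch-conf: review $f" "$mail_to"
    fi
done
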
joshuar / Backblaze-HDD-Data-Elasticsearch.md
Last active August 29, 2015 14:15
Backblaze Hard Drive Test Data in Elasticsearch

Instructions

These are just some quick notes on importing the Backblaze Hard Drive Test Data into Elasticsearch. Of the archives Backblaze provides, you only need to download the 2013 and 2014 data sets and unpack them to a temporary location.

After you've unpacked the data, you'll need to convert the CSV files to JSON. I use the csvjson tool from csvkit for this. In the directory containing the CSV files, run this bash loop:

for csv in *.csv; do name=$(basename "$csv" .csv); csvjson "${name}.csv" > "${name}.json"; done
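
The indexing step itself isn't shown in this preview. As a sketch, one way to load the converted files is to turn each JSON array into bulk actions and POST it to Elasticsearch; the endpoint, index and type names below are assumptions, and jq is an extra dependency not mentioned in the notes:

#!/usr/bin/env bash
# Sketch only: convert each csvjson array into _bulk format and index it.
ES="http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX="backblaze"            # assumed index name
TYPE="drive_stat"            # assumed document type (Elasticsearch 1.x era)

for json in *.json; do
    # Emit an {"index":{}} action line followed by each document, one per line.
    jq -c '.[] | {"index": {}}, .' "$json" \
        | curl -s -XPOST "$ES/$INDEX/$TYPE/_bulk" --data-binary @- > /dev/null
done
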
"trigger": {
"schedule": {
"interval": "10m"
}
},
"input": {
"search": {
"request": {
"search_type": "count",
"indices": [
joshuar / install-es.sh
Created September 22, 2015 05:33
Quick Elasticsearch install script
#!/usr/bin/env bash
ES_VERSION="1.7.2"
ES_URL="https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-$ES_VERSION.tar.gz"
curl -s -L -o - $ES_URL | tar -xz -C /opt \
&& ln -s /opt/elasticsearch-$ES_VERSION /opt/elasticsearch \
&& mkdir /opt/elasticsearch/{data,logs,plugins}
chown -R vagrant:vagrant /opt/elasticsearch-${ES_VERSION}
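
Once the script has run, Elasticsearch can be started from the version-independent symlink, for example:

/opt/elasticsearch/bin/elasticsearch -d

Keeping the versioned directory behind a symlink means a newer release can be unpacked alongside it and switched over by repointing /opt/elasticsearch, without changing paths anywhere else.
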
joshuar / nginx.conf
Created October 30, 2015 03:53
Logging Elasticsearch HTTP API Requests with Nginx
worker_processes 1;
error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    log_format es '$remote_addr - $remote_user [$time_local] '
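
The config is truncated in this preview. A sketch of the kind of server block that typically completes the http section, proxying requests to Elasticsearch and logging them with the custom "es" format; the listen port, log path and upstream address are assumptions, not from the gist:

    server {
        listen 8080;
        location / {
            # Log every Elasticsearch HTTP API request using the "es" format
            access_log /var/log/nginx/es-access.log es;
            proxy_pass http://localhost:9200;
        }
    }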

Reindexing with Logstash can be done with the following configuration:

input {
  # We read from the "old" index
  elasticsearch {
    hosts => ["http://<host>:<port>"]
    index => "<old_index>"
    size => 500
    scroll => "5m"
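
The preview cuts off inside the input section. For completeness, a hedged sketch of how the rest of such a reindexing pipeline usually looks; the "<new_index>" placeholder is mine, following the style of the placeholders above:

  }
}

output {
  elasticsearch {
    hosts => ["http://<host>:<port>"]
    # Write every document read from the old index into the new one.
    index => "<new_index>"
  }
}
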
joshuar / suffix-search.json
Created May 31, 2017 23:20
Performing a suffix search
PUT test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "ReverseIt": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [
            "reverse"