Logstash custom patterns for Lustre

LUSTRE_OBJECT %{WORD}(-%{WORD}){1,3}
LUSTRE_LNET %{IP}@%{WORD}
LUSTRE_SOURCECODE (%{USERNAME}.c:%{INT})
LUSTRE_ERRCODE rc (=)? (%{INT:error_code}|%{INT}/%{INT})
LUSTRE_LOGPREFIX1 (Lustre|LustreError|LNetError): (%{WORD}-%{WORD}: )?%{LUSTRE_OBJECT:lustre_object}:
LUSTRE_LOGPREFIX2 (Lustre|LustreError|LNet|LNetError):%{SPACE}?%{WORD}:%{WORD}:\(%{LUSTRE_SOURCECODE:lustre_source}:%{USERNAME:lustre_function}\(\)\)
LUSTRE_LOGPREFIX3 (Lustre|LustreError|LNet|LNetError):
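
These patterns can be sanity-checked outside Logstash by hand-expanding them. A minimal sketch for LUSTRE_LNET, assuming the standard grok %{IP} and %{WORD} definitions; the log line is a made-up sample, and the hand-expanded regex covers only IPv4 (grok's %{IP} also matches IPv6):

```shell
#!/usr/bin/env bash
# Sanity-check the LUSTRE_LNET pattern outside Logstash. The grok
# pattern %{IP}@%{WORD} is hand-expanded to a plain IPv4 regex here;
# grok's %{IP} also matches IPv6, which this sketch omits.
line='LustreError: 137-5: lustre-OST0001: 192.168.1.10@tcp reconnecting'  # made-up sample
lnet_re='([0-9]{1,3}\.){3}[0-9]{1,3}@[A-Za-z0-9_]+'
grep -oE "$lnet_re" <<<"$line"   # -> 192.168.1.10@tcp
```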
joshuar / rpm-dispatch-conf.sh
Created November 20, 2014 03:55
Email a diff of .rpmnew and .rpmsave file changes to an admin to consider
#!/usr/bin/env bash
mail_to=root
ignorefile=/etc/rpm-dispatch-conf.ignore
newfiles=$(find / -noleaf -ignore_readdir_race -xdev \( -name '*.rpmsave' -o -name '*.rpmnew' \) 2>/dev/null)
for f in $newfiles; do
newfile=$f
oldfile=$f
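
The gist preview cuts the loop off here. A hypothetical continuation under the description's stated goal (strip the suffix to recover the live config file, then mail any diff); this is a sketch, not the gist's actual body:

```shell
#!/usr/bin/env bash
# Hypothetical continuation of the truncated loop above (not the gist's
# actual body): strip the .rpmnew/.rpmsave suffix to recover the live
# config file, then mail any diff to the admin.
mail_to=root
for f in /etc/example.conf.rpmnew; do   # stand-in for $newfiles
  orig=${f%.rpmnew}
  orig=${orig%.rpmsave}
  if [ -e "$orig" ] && ! diff -u "$orig" "$f" > /dev/null; then
    diff -u "$orig" "$f" | mail -s "config change pending: $f" "$mail_to"
  fi
done
```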
joshuar / Backblaze-HDD-Data-Elasticsearch.md
Last active August 29, 2015 14:15
Backblaze Hard Drive Test Data in Elasticsearch

Instructions

These are just some quick notes on importing the Backblaze Hard Drive Test Data into Elasticsearch. Of the archives Backblaze provides, you only need to download the 2013 and 2014 data sets and unpack them to a temporary location.

After you've unpacked the data, you'll need to convert the CSV files to JSON. I use the csvjson tool from csvkit for this. In the directory containing the CSV files, run this bash loop:

for csv in *.csv; do name=$(basename "$csv" .csv); csvjson "${name}.csv" > "${name}.json"; done
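
If csvkit isn't available, the same conversion can be sketched with Python's standard library alone. This is a hypothetical fallback, not part of the original notes, and it produces the same simple array-of-objects JSON that csvjson emits by default:

```shell
#!/usr/bin/env bash
# Fallback CSV -> JSON conversion using only the Python standard
# library, for hosts without csvkit. Mirrors `csvjson file.csv`.
# (Hypothetical alternative, not from the original notes.)
for csv in *.csv; do
  name=$(basename "$csv" .csv)
  python3 -c 'import csv, json, sys; print(json.dumps(list(csv.DictReader(open(sys.argv[1], newline="")))))' "$csv" > "${name}.json"
done
```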
"trigger": {
  "schedule": {
    "interval": "10m"
  }
},
"input": {
  "search": {
    "request": {
      "search_type": "count",
      "indices": [
joshuar / install-es.sh
Created September 22, 2015 05:33
Quick Elasticsearch install script
#!/usr/bin/env bash
ES_VERSION="1.7.2"
ES_URL="https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-$ES_VERSION.tar.gz"
curl -s -L -o - "$ES_URL" | tar -xz -C /opt \
&& ln -s "/opt/elasticsearch-$ES_VERSION" /opt/elasticsearch \
&& mkdir -p /opt/elasticsearch/{data,logs,plugins}
chown -R vagrant:vagrant "/opt/elasticsearch-${ES_VERSION}"
joshuar / suffix-search.json
Created May 31, 2017 23:20
Performing a suffix search
PUT test
{
"settings": {
"analysis": {
"analyzer": {
"ReverseIt": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"reverse"
]
}
}
}
}
}
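
The trick the reverse filter relies on: a suffix search against the original string becomes a prefix search against the reversed string. A quick shell illustration of the idea using plain `rev` (this is the concept, not Elasticsearch itself):

```shell
#!/usr/bin/env bash
# Suffix search as a prefix search on reversed strings -- the idea the
# "reverse" token filter implements inside the ReverseIt analyzer.
suffix='.txt'
for f in report.txt notes.md data.txt; do
  case "$(rev <<<"$f")" in
    "$(rev <<<"$suffix")"*) echo "$f ends with $suffix" ;;
  esac
done
```

This prints the two `.txt` names and skips `notes.md`; Elasticsearch does the same comparison against reversed index tokens.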
joshuar / queries.md
Created October 16, 2017 16:58
Quick Document Counts in Elasticsearch

Count the number of documents indexed in a certain interval (e.g., the last 15 minutes)

GET /logstash-<DATE>/_search?filter_path=hits.total
{
  "query": {
    "bool": {
      "filter": [
            {
              "range": {
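
The range filter above is cut off by the gist preview; a filled-in version under the assumption that it targets an `@timestamp` field (standard for Logstash indices, but a guess here). The snippet builds and validates the body locally, since actually executing it requires a live cluster:

```shell
#!/usr/bin/env bash
# Count documents indexed in the last 15 minutes. The @timestamp field
# is an assumption (standard for Logstash indices). Uncomment the curl
# to run it against a real cluster.
body='{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-15m" } } }
      ]
    }
  }
}'
# Validate the JSON locally before sending it anywhere.
echo "$body" | python3 -m json.tool > /dev/null && echo "body OK"
# curl -s -H 'Content-Type: application/json' \
#      "http://localhost:9200/logstash-<DATE>/_search?filter_path=hits.total" -d "$body"
```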
joshuar / Dockerfile
Created November 15, 2017 22:16
Dockerfile for installing JDK development tooling (jmap, etc.) on top of the official Elastic Elasticsearch Docker image
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.4
USER root
RUN sed -i '/^exclude/d' /etc/yum.conf && yum update -y && yum install -y java-1.8.0-openjdk-devel
USER elasticsearch

Reindexing an index with Logstash can be done with the following configuration:

input {
  # We read from the "old" index
  elasticsearch {
    hosts  => ["http://<host>:<port>"]
    index  => "<old_index>"
    size   => 500
    scroll => "5m"
  }
}
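
A full reindex pipeline also needs a matching output section, which the gist preview does not show. A sketch written out as a heredoc; the output section is an assumption about how the gist continues, and the <host>/<index> placeholders are kept from the original:

```shell
#!/usr/bin/env bash
# Write a complete hypothetical reindex pipeline to reindex.conf.
# Placeholders like <host> and <old_index> are kept from the original
# gist; the output section is an assumption, not the gist's own text.
cat > reindex.conf <<'EOF'
input {
  elasticsearch {
    hosts  => ["http://<host>:<port>"]
    index  => "<old_index>"
    size   => 500
    scroll => "5m"
  }
}
output {
  elasticsearch {
    hosts => ["http://<host>:<port>"]
    index => "<new_index>"
  }
}
EOF
grep -c 'elasticsearch' reindex.conf   # both plugins present
```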
joshuar / Three-Node-LustreFS-Cluster-Quickstart.md
Last active July 22, 2020 09:52
Quick three-node Lustre set-up on CentOS 6