
esperlu / mysql2sqlite.sh
Created April 27, 2011 05:46
MySQL to Sqlite converter
#!/bin/sh
# Converts a mysqldump file into a Sqlite 3 compatible file. It also extracts the MySQL `KEY xxxxx` lines from the
# CREATE block and creates them as separate commands _after_ all the INSERTs.
# Awk is chosen because it's fast and portable. You can use gawk, the original awk, or even the lightning-fast mawk.
# The mysqldump file is traversed only once.
# Usage: $ ./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
# Example: $ ./mysql2sqlite --no-data -u root -pMySecretPassWord myDbase | sqlite3 database.sqlite
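# The core idea, in a heavily simplified sketch (my illustration of the approach described
# above, not the actual gist, which also rewrites types, quoting, and the trailing commas
# left behind when a KEY line is removed). The two-space indent matches mysqldump's output.
awk '
/^CREATE TABLE/ { gsub(/`/, ""); tbl = $3; in_create = 1 }
in_create && /^  KEY / {
    gsub(/[`(),]/, " ")
    # single-column case only; the real script parses the full column list
    idx[++n] = "CREATE INDEX " tbl "_" $2 " ON " tbl " (" $3 ");"
    next
}
in_create && /^\)/ { in_create = 0 }
{ print }
END { for (i = 1; i <= n; i++) print idx[i] }
' dump.sql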
nukemberg / knife.sh
Created June 28, 2011 07:51 — forked from ches/knife
bash completion for Chef's knife command (updated for 0.10)
# vim: ft=sh:ts=4:sw=4:autoindent:expandtab:
# Author: Avishai Ish-Shalom <avishai@fewbytes.com>
# We need to specify GNU sed for OS X, BSDs, etc.
if [[ "$(uname -s)" == "Darwin" ]]; then
SED=gsed
else
SED=sed
fi
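# One guard worth adding on macOS (my addition, not part of the original completion
# script): gsed only exists after something like "brew install gnu-sed", so fail loudly
# if it is missing instead of letting every completion silently break.
if ! command -v "$SED" >/dev/null 2>&1; then
    echo "knife completion: $SED not found in PATH" >&2
    return 1    # completion scripts are sourced, so return rather than exit
fi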
jpatters / HeidiDecode.js
Last active May 17, 2024 12:41
Decodes a password from HeidiSQL. HeidiSQL passwords can be found in the registry. Use File -> Export Settings to dump all settings. Great for when you forget a password.
function heidiDecode(hex) {
  var str = '';
  // The final character of the stored value is a single decimal digit: the shift.
  var shift = parseInt(hex.substr(-1), 10);
  hex = hex.substr(0, hex.length - 1);
  // Each remaining pair of hex digits is one character code, offset by the shift.
  for (var i = 0; i < hex.length; i += 2)
    str += String.fromCharCode(parseInt(hex.substr(i, 2), 16) - shift);
  return str;
}
document.write(heidiDecode('755A5A585C3D8141786B3C385E3A393'));
aras-p / preprocessor_fun.h
Last active June 21, 2024 12:20
Things to commit just before leaving your job
// Just before switching jobs:
// Add one of these.
// Preferably into the same commit where you do a large merge.
//
// This started as a tweet with a joke of "C++ pro-tip: #define private public",
// and then it quickly escalated into more and more evil suggestions.
// I've tried to capture interesting suggestions here.
//
// Contributors: @r2d2rigo, @joeldevahl, @msinilo, @_Humus_,
// @YuriyODonnell, @rygorous, @cmuratori, @mike_acton, @grumpygiant,
davidsnyder / gist:7668060
Last active May 2, 2016 14:16
Kafka Troubleshooting
You can use the consumer script to verify that data is being received by the HTTP listener (http_producer) on port 80 of the Kafka broker node.
sudo /usr/local/share/kafka/bin/kafka-console-consumer.sh --zookeeper [zk-private-ip]:2181 --topic [topic] --from-beginning
If you curl records at the Kafka HTTP listener, you should see them come out at the Kafka broker node:
(On the Kafka listener node)
curl -XGET localhost:80 -d '{"name":"Bill","position":"Sales"}'
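If you don't have the HTTP listener handy, the stock console producer can push test records straight to the broker instead (standard Kafka tooling; not part of the original note, and the broker address here is an assumption):
sudo /usr/local/share/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic [topic]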
Integralist / GitHub curl.sh
Last active June 18, 2024 23:31 — forked from madrobby/gist:9476733
Download a single file from a private GitHub repo. You'll need an access token as described in this GitHub Help article: https://help.github.com/articles/creating-an-access-token-for-command-line-use
curl --header 'Authorization: token INSERTACCESSTOKENHERE' \
--header 'Accept: application/vnd.github.v3.raw' \
--remote-name \
--location https://api.github.com/repos/owner/repo/contents/path
# Example...
TOKEN="INSERTACCESSTOKENHERE"
OWNER="BBC-News"
REPO="responsive-news"
jkreps / benchmark-commands.txt
Last active June 17, 2024 03:54
Kafka Benchmark Commands
Producer
Setup
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test-rep-one --partitions 6 --replication-factor 1
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test --partitions 6 --replication-factor 3
Single thread, no replication
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test7 50000000 100 -1 acks=1 bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196
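For readers decoding the positional arguments (my gloss of the 0.8.1-era ProducerPerformance tool this command targets):
# test7     topic to produce to
# 50000000  number of records to send
# 100       record size in bytes
# -1        target throughput cap (-1 = unthrottled)
# the remaining key=value pairs are ordinary producer configs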
afolarin / docker-log-gist.md
Last active March 16, 2023 13:02
docker-logs
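The listing preserves only the title; for context, the commands such a note typically collects look like this (standard Docker CLI, my assumption about the gist's content):
docker logs -f --tail=100 <container>   # follow output, starting from the last 100 lines
docker logs --since=1h <container>      # only entries from the last hour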
voxxit / USING-VAULT.md
Last active July 7, 2022 03:02
Consul + Vault + MySQL = <3
git clone https://gist.github.com/dd6f95398c1bdc9f1038.git vault
cd vault
docker-compose up -d
export VAULT_ADDR=http://192.168.99.100:8200

Initializing a vault:

vault init
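
vault init prints a set of unseal keys and an initial root token. With the pre-0.9 CLI this gist targets, the usual next steps are unsealing and checking status (my sketch, not part of the original; keys abbreviated):

vault unseal <unseal-key-1>
vault unseal <unseal-key-2>
vault unseal <unseal-key-3>
vault status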
shreyaskarnik / Instructions.md
Last active March 24, 2023 15:35
Route Docker Logs to ELK Stack
  • Docker 1.8.0 shipped a new log driver for GELF via UDP, which means logs from Docker containers can be shipped directly to the ELK stack for further analysis.
  • This tutorial will illustrate how to use the GELF log-driver with Docker engine.
  • Step 1: Setup ELK Stack:
    • docker run -d --name es elasticsearch
    • docker run -d --name logstash --link es:elasticsearch -v /tmp/logstash.conf:/config-dir/logstash.conf logstash logstash -f /config-dir/logstash.conf
    • Note the config for Logstash can be found at this link
    • docker run --link es:elasticsearch -d kibana
  • Once the ELK stack is up, let's fire up our nginx container, which ships its logs to the ELK stack.
  • LOGSTASH_ADDRESS=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' logstash)
  • docker run -d --net=host --log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_ADDRESS}:12201 nginx
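  • To verify the pipeline end to end (my addition, assuming the stack above and the default Elasticsearch port):
    • ES_ADDRESS=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' es)
    • curl -s localhost >/dev/null to generate one nginx access-log line
    • curl "$ES_ADDRESS:9200/_search?q=nginx&pretty" should then return the shipped entry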