Konstantin Grabar (santaux)
[ODBC Data Sources]
AdTracker = "ClickHouse AdTracker"
[ClickHouse]
Driver = /usr/local/lib/clickhouse-odbc.so
Description = ClickHouse driver
SERVER = 127.0.0.1
PORT = 8123
FRAMED = 0
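A quick way to verify the DSN resolves is unixODBC's isql tool; a minimal sketch, assuming unixODBC is installed and the config above lives in /etc/odbc.ini or ~/.odbc.ini:

# Connect through the AdTracker DSN and run a trivial query against ClickHouse
$ echo "SELECT 1" | isql -v AdTracker -b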
santaux / phoenix showdown rackspace onmetal io.md
Created July 24, 2016 17:12 — forked from omnibs/phoenix showdown rackspace onmetal io.md
Phoenix Showdown Comparative Benchmarks @ Rackspace

Comparative Benchmark Numbers @ Rackspace

I've taken the benchmarks from Matthew Rothenberg's phoenix-showdown, updated Phoenix to 0.13.1, and run the tests on the most powerful machines available at Rackspace.

Results

| Framework | Throughput (req/s) | Latency (ms) | Consistency (σ ms) |
| --------- | ------------------ | ------------ | ------------------ |
santaux 30174 0.0 0.0 4400 612 pts/0 S+ 12:09 0:00 sh -c { { /usr/bin/sudo -n /bin/tar --ignore-failed-read -cPf - '/mnt/cloud/slutor/uploads' 2>&4 ; echo "0|$?:" >&3 ; } | { /bin/gzip 2>&4 ; echo "1|$?:" >&3 ; } | { /bin/cat > '/home/santaux/Backup/.tmp/slutor_backup/archives/uploads.tar.gz' 2>&4 ; echo "2|$?:" >&3 ; } } 3>&1 1>&2 4>&2
santaux 30177 0.0 0.0 4400 316 pts/0 S+ 12:09 0:00 sh -c { { /usr/bin/sudo -n /bin/tar --ignore-failed-read -cPf - '/mnt/cloud/slutor/uploads' 2>&4 ; echo "0|$?:" >&3 ; } | { /bin/gzip 2>&4 ; echo "1|$?:" >&3 ; } | { /bin/cat > '/home/santaux/Backup/.tmp/slutor_backup/archives/uploads.tar.gz' 2>&4 ; echo "2|$?:" >&3 ; } } 3>&1 1>&2 4>&2
santaux 30178 0.0 0.0 4400 304 pts/0 S+ 12:09 0:00 sh -c { { /usr/bin/sudo -n /bin/tar --ignore-failed-read -cPf - '/mnt/cloud/slutor/uploads' 2>&4 ; echo "0|$?:" >&3 ; } | { /bin/gzip 2>&4 ; echo "1|$?:" >&3 ; } | { /bin/cat > '/home/santaux/Backup/.tmp/slutor_backup/archives/uploads.tar.gz' 2>&4 ; echo "2|$?:" >&3 ; } } 3>&1 1>&2 4>&2
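These ps entries show the Backup gem's tar | gzip | cat pipeline. Each stage sends its stderr to fd 4 and reports its own exit status as a marker on fd 3, because a plain pipeline only surfaces the last command's status. A minimal sketch of the same pattern, with hypothetical paths:

# fd 3 is duplicated from the original stdout (status markers go there);
# fd 1 and fd 4 both end up on stderr (command output and errors)
{ { tar -cPf - /tmp/example 2>&4 ; echo "0|$?:" >&3 ; } |
  { gzip 2>&4 ; echo "1|$?:" >&3 ; } |
  { cat > /tmp/example.tar.gz 2>&4 ; echo "2|$?:" >&3 ; } ; } 3>&1 1>&2 4>&2
# The caller reads "0|0:", "1|0:", "2|0:" from stdout and can therefore
# detect a failure in any stage, not just the final one.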
# This file is auto-generated from the current state of the database. Instead
# of editing this file, please use the migrations feature of Active Record to
# incrementally modify your database, and then regenerate this schema definition.
#
# Note that this schema.rb definition is the authoritative source for your
# database schema. If you need to create the application database on another
# system, you should be using db:schema:load, not running all the migrations
# from scratch. The latter is a flawed and unsustainable approach (the more migrations
# you'll amass, the slower it'll run and the greater likelihood for issues).
#
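In practice, the advice in that header boils down to one command on a fresh machine; a hedged sketch for a Rails app of this era:

# Build the database directly from schema.rb instead of replaying migrations
$ RAILS_ENV=production bundle exec rake db:create db:schema:load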
ubuntu@ip-10-143-29-124:~$ cat /etc/nginx/nginx.conf
worker_processes 1;
user ubuntu ubuntu;
pid /run/nginx.pid;
error_log /var/log/nginx/nginx.error.log;
events {
    worker_connections 1024;
    accept_mutex off;
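After editing a config like this, nginx can check the syntax before the running master is asked to reload; a minimal sketch, assuming the stock Ubuntu service scripts:

$ sudo nginx -t                 # parse the config and report errors
$ sudo service nginx reload     # re-read config without dropping connections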
set daemon 5
set logfile /var/log/monit.log
set idfile /var/lib/monit/id
set statefile /var/lib/monit/state
set eventqueue
    basedir /var/lib/monit/events
    slots 100
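The eventqueue block buffers undeliverable alerts on disk (up to 100 here, under /var/lib/monit/events). Monit can validate a control file like this before the daemon picks it up:

$ sudo monit -t        # syntax-check the control file
$ sudo monit reload    # re-read it in the running daemon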
# Begin Whenever generated tasks for: kliprr
0 0,12 * * * /bin/bash -l -c 'cd :path && bundle exec backup perform --trigger sample_backup'
0 0 * * * /bin/bash -l -c 'cd /srv/kliprr/releases/20121207111311 && RAILS_ENV=production bundle exec rake schedule:daily_emails --trace'
0,10,20,30,40,50 * * * * /bin/bash -l -c 'cd /srv/kliprr/releases/20121207111311 && RAILS_ENV=production bundle exec rake schedule:immediate_emails --trace'
0 0 * * * /bin/bash -l -c 'cd /srv/kliprr/releases/20121207111311 && RAILS_ENV=production bundle exec rake schedule:sync_counters --trace'
0 * * * * /bin/bash -l -c 'cd /srv/kliprr/releases/20121207111311 && RAILS_ENV=production bundle exec rake schedule:cloudinary_cleanup --trace'
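This block is owned by the Whenever gem (hence the "Begin Whenever generated tasks for: kliprr" marker; the literal :path in the first entry suggests the placeholder was never substituted when the block was written). A hedged sketch of how such a block is regenerated from config/schedule.rb:

# Rewrite only the crontab section tagged with the "kliprr" identifier
$ cd /srv/kliprr/current && bundle exec whenever --update-crontab kliprr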
ubuntu@ip-10-143-29-124:/srv/kliprr/current$ cat config/unicorn.rb
#worker_processes 4 # amount of unicorn workers to spin up
#timeout 30 # restarts workers that hang for 30 seconds
deploy_to = "/srv/kliprr"
rails_root = "#{deploy_to}/current"
pid_file = "#{deploy_to}/shared/pids/unicorn.pid"
socket_file = "#{deploy_to}/shared/unicorn.sock"
log_file = "#{rails_root}/log/unicorn.log"
err_log = "#{rails_root}/log/unicorn_error.log"
old_pid = pid_file + '.oldbin'
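The pid_file/old_pid pair is what makes zero-downtime restarts work: on SIGUSR2 unicorn re-executes itself and renames the old master's pid file to unicorn.pid.oldbin, which a before_fork hook typically uses to kill the old master once the new one is up. A hedged sketch of triggering that handover:

# Re-exec the master; the old one lingers as unicorn.pid.oldbin until killed
$ kill -USR2 $(cat /srv/kliprr/shared/pids/unicorn.pid)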
MacBook-Pro-Konstantin:~ santaux$ traceroute ec2-54-251-79-7.ap-southeast-1.compute.amazonaws.com
traceroute to ec2-54-251-79-7.ap-southeast-1.compute.amazonaws.com (54.251.79.7), 64 hops max, 52 byte packets
1 10.204.84.1 (10.204.84.1) 1.245 ms 1.432 ms 1.540 ms
2 192.168.250.81 (192.168.250.81) 0.531 ms 0.448 ms 0.436 ms
3 gw21.zet (192.168.254.253) 0.328 ms 0.417 ms 0.392 ms
4 cdac0-2.interzet.ru (188.134.127.17) 0.746 ms 0.712 ms 0.718 ms
5 188.134.126.237 (188.134.126.237) 2.455 ms 1.575 ms 2.213 ms
6 188.134.126.114 (188.134.126.114) 2.642 ms 1.653 ms 2.072 ms
7 te0-0-0-10.ccr21.sto01.atlas.cogentco.com (149.6.168.37) 12.109 ms 12.356 ms 12.379 ms
8 te0-0-0-4.ccr21.sto03.atlas.cogentco.com (154.54.58.186) 12.706 ms