Andrew Kumanyaev zzet

@zzet
zzet / semantic-commit-messages.md
Created January 8, 2020 18:14 — forked from joshbuchea/semantic-commit-messages.md
Semantic Commit Messages

See how a minor change to your commit message style can make you a better programmer.

Format: `<type>(<scope>): <subject>`

`<scope>` is optional

Example
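
A few illustrative messages in this style (the specific types, scopes, and subjects below are common-convention assumptions, not content quoted from the gist):

  feat(lang): add Polish language
  fix(auth): handle expired session tokens
  docs: correct spelling of CHANGELOG
  refactor(api): extract response serializer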

  SELECT idx.relname as table,
         idx.indexrelname as index,
         pg_relation_size( idx.indexrelname::text ) as bytes,
         cls.relpages as pages,
         cls.reltuples as tuples,
         idx.idx_scan as scanned,
         idx.idx_tup_read as read,
         idx.idx_tup_fetch as fetched
  FROM pg_stat_user_indexes idx,
       pg_class cls,
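
The query above is cut off in its FROM clause; a complete, runnable version of the same per-index statistics report might look like the following. The join condition, the oid-based pg_relation_size call, and the ordering are assumptions, not the gist's original ending:

  -- Per-index size and usage statistics (join and ORDER BY are assumed).
  SELECT idx.relname                      as table,
         idx.indexrelname                 as index,
         pg_relation_size(idx.indexrelid) as bytes,
         cls.relpages                     as pages,
         cls.reltuples                    as tuples,
         idx.idx_scan                     as scanned,
         idx.idx_tup_read                 as read,
         idx.idx_tup_fetch                as fetched
  FROM pg_stat_user_indexes idx
  JOIN pg_class cls ON cls.oid = idx.indexrelid
  ORDER BY pg_relation_size(idx.indexrelid) DESC;

Joining pg_class on indexrelid reports the pages and tuples of the index itself; join on idx.relid instead if you want the parent table's figures.
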
@zzet
zzet / gist:7900439
Created December 10, 2013 21:22 — forked from dnagir/gist:1573414
  • This creates a ramdisk of roughly 570 MB. Adjust the size as needed: the number at the end of the command is the number of disk blocks, and each block is 512 bytes (1165430 × 512 bytes ≈ 569 MB, matching the output below).
  • Restarting Postgres is not necessary; you can create the ramdisk and the tablespace while Postgres is running (see the illustrative tablespace commands after the output below).
  • You will lose all data in the ramdisk tablespace when you shut your machine down.

  $ diskutil erasevolume HFS+ "postgres_ramdisk" `hdiutil attach -nomount ram://1165430`
  Started erase on disk1
  Unmounting disk
  Erasing
  Initialized /dev/rdisk1 as a 569 MB HFS Plus volume
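
Once the volume is mounted at /Volumes/postgres_ramdisk, the tablespace mentioned above can be set up with something like the following. The subdirectory, tablespace, and database names are illustrative, and the location directory must exist and be owned by the postgres OS user:

  -- Run in psql as a superuser; names and paths here are assumptions.
  CREATE TABLESPACE ramdisk LOCATION '/Volumes/postgres_ramdisk/pg';
  -- Put a disposable test database entirely on the RAM disk.
  CREATE DATABASE myapp_test TABLESPACE ramdisk;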

Zero downtime deploys with unicorn + nginx + runit + rvm + chef

Below are the actual files we use in one of our latest production applications at Agora Games to achieve zero-downtime deploys with Unicorn. You've probably already read the GitHub blog post on Unicorn and would like to try zero-downtime deploys for your own application; I hope these files and notes help. I'm happy to update the files or the notes in response to comments and questions. YMMV, of course.

Other application notes:

  • Our application uses MongoDB, so we don't have database migrations to worry about as we would with MySQL or PostgreSQL. That doesn't mean the database is free of deploy concerns, though; indexes being built in MongoDB, for example, can still cause trouble.
  • We use Capistrano for deployment.

Salient points for each file: