Michael Mahemoff (mahemoff)

mahemoff / bench.rb
Last active Sep 6, 2019
in versus include
#!/usr/bin/env ruby
require 'benchmark'
require 'active_support/all'
puts 'in', Benchmark.measure { 90000.in?(1..99000) }
puts 'include', Benchmark.measure { (1..99000).include?(90000) }
mahemoff / parallel_array.rb
Created Jun 29, 2019
Extending Array with Parallel gem
require 'parallel'  # gem install parallel

class Array
  %w(each map each_with_index map_with_index flat_map any? all?).each { |meth|
    define_method("#{meth}_in_parallel") { |&block|
      # forward all yielded args so *_with_index variants work too
      Parallel.send(meth, self) { |*args| block.call(*args) }
    }
  }
end
mahemoff /
Created Feb 2, 2019
mysql incremental backup
function slice() {
  upper=$(expr $lower + 100000)
  echo "Backing up $lower"
  mysqldump db_name table_name --opt --no-create-info --where "id > $lower and id <= $upper" | gzip -c | ssh user@host "cat > /home/backup/dump$lower.gz"
  lower=$upper  # advance the window for the next call
}
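The gist doesn't show how slice() is driven; a minimal sketch of the chunking loop, with slice stubbed to print its range so the logic is visible (the 300000 upper bound is an illustrative assumption, as is the stub itself):

```shell
# slice() stubbed to print its range; the real version runs the
# mysqldump | gzip | ssh pipeline above.
slice() {
  upper=$(expr $lower + 100000)
  echo "chunk $lower..$upper"
  lower=$upper
}

lower=0
while [ "$lower" -lt 300000 ]; do
  slice
done
```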
mahemoff /
Created Jan 9, 2019
Fixing Docker error on Ubuntu 16.04

As described elsewhere, but refined ...

Create or edit /etc/systemd/system/containerd.service.d/override.conf to contain just this:

[Service]
ExecStartPre=

To make it happen:

  • systemctl daemon-reload
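The override contents can be generated like this; on the host you would redirect the output into /etc/systemd/system/containerd.service.d/override.conf as root, then run daemon-reload (the follow-up restart of containerd is my assumption, since the gist is truncated here):

```shell
# Print the override file contents; redirect into
# /etc/systemd/system/containerd.service.d/override.conf (as root),
# then: systemctl daemon-reload && systemctl restart containerd
printf '[Service]\nExecStartPre=\n'
```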
mahemoff / Ansible Disk Check
Last active Sep 6, 2019
Show disk space and warn about disk full in Ansible
* Shows a message while asserting, like:
ok: [host] => {
"msg": "disk usage 4.2GB of total 20.0GB (21.0%) (should be within limit 90.0%)"
}
* Note this only looks at the first mount point on the current node
* Fails if the disk is near-full
* The last step pushes to a push-based monitoring service, which will alert us if nothing arrives after some time
* You need to set up a variable `disk_limit`, the maximum acceptable usage ratio; e.g. set it to 0.8 to keep disks within 80% of capacity
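The "fails if near-full" step isn't shown in this preview; a hedged sketch of what such an assert task might look like (the task name and message wording are my assumptions, not from the gist):

```yaml
- name: assert disk within limit
  assert:
    that:
      - (mount.size_total - mount.size_available) / mount.size_total < disk_limit | float
    fail_msg: "disk usage over limit {{ disk_limit }}"
  vars:
    mount: "{{ ansible_mounts | first }}"
```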
mahemoff / gist:07c0c3427aa3eb3c0abc07f76fc68279
Created Sep 15, 2018
Show disk space and warn about disk full in Ansible
- name: show disk space
  debug:
    msg: "{{ ((mount.size_total - mount.size_available) / 1000000000) | round(1, 'common') }}GB of {{ (mount.size_total / 1000000000) | round(1, 'common') }}GB ({{ (100 * (mount.size_total - mount.size_available) / mount.size_total) | round(1, 'common') }}%)"
  vars:
    mount: "{{ ansible_mounts | first }}"
  tags: disk
mahemoff /
Last active Aug 6, 2018
Verifying Google OAuth auth code on the back-end

This is the "missing Ruby example" for the ID flow example at

It's easy enough to get an auth code like "4/BlahBlahBlah...", but I couldn't find any working examples of how to exchange it for an access token and the encoded ID token.

To use this, you need to access Google's API console, and under "Credentials" create a client ID and secret, which should go in your environment. (Most examples use a "secrets.json" file, but I don't want to keep a separate config file for every platform, so it's better to put them in something like Rails' secrets.yml or Figaro.)

The auth_code is obtained from your web or native client using Google's front-end libraries. The client posts it to your own back-end, which does the exchange and verifies+stores the result. Note the redirect URI must be configured in Google's "credentials" console, otherwise the call will fail (even though it serves no purpose in this context; it's only needed for a non-JavaScript
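A minimal sketch of the exchange itself: the token endpoint and parameter names follow Google's OAuth 2.0 documentation, but the ENV variable names are my assumption, and in practice you would go on to verify the returned id_token:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build the token-exchange params. Endpoint and param names per
# Google's OAuth 2.0 docs; the ENV variable names are assumptions.
def token_params(auth_code)
  {
    'code'          => auth_code,
    'client_id'     => ENV['GOOGLE_CLIENT_ID'],
    'client_secret' => ENV['GOOGLE_CLIENT_SECRET'],
    'redirect_uri'  => ENV['GOOGLE_REDIRECT_URI'],
    'grant_type'    => 'authorization_code'
  }
end

# POST the auth code; on success the response JSON includes
# access_token and id_token (a JWT to verify and decode).
def exchange_auth_code(auth_code)
  res = Net::HTTP.post_form(URI('https://oauth2.googleapis.com/token'),
                            token_params(auth_code))
  JSON.parse(res.body)
end
```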

mahemoff / linodecost.rb
Created Jun 20, 2018
Report monthly Linode cost
#!/usr/bin/env ruby
require 'byebug'
require 'linode'
require 'awesome_print'

# setup API access - key is in file
open('/etc/linode.conf').read =~ /api_key: (.+)/
key = $1
lin = Linode.new(api_key: key).linode
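The preview cuts off before the cost calculation; a hypothetical sketch of the aggregation step, where the plan prices and the record shape are invented for illustration and are not from the Linode API:

```ruby
# Illustrative only: map each Linode's plan RAM (MB) to an assumed
# monthly price and total it up.
PLAN_PRICES = { 2048 => 10.0, 4096 => 20.0 }  # RAM => $/month (assumed)
linodes = [{ ram: 2048 }, { ram: 4096 }, { ram: 2048 }]
total = linodes.sum { |l| PLAN_PRICES[l[:ram]] }
puts format('Monthly cost: $%.2f', total)
```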

MySQL backup commands and sizes

Adventures in storing and backing up a typical database using innobackupex and gpg. Using gzipped tar due to its ubiquity, even though it's possibly 10-20% worse on performance/storage.


PREPARED (i.e. ready to move to the MySQL folder)

  • 2800MB Uncompressed and prepared (roughly the size of the live database)
  • 21000MB Uncompressed, before compression
  • 700MB Encrypted+compressed (ideal for external storage)
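The gzipped-tar step behind these numbers can be sketched as follows, using a scratch directory in place of the real innobackupex output; the gpg stage is left as a comment since it needs a recipient key:

```shell
# Create dummy "prepared" data, then tar+gzip it. The real pipeline
# would stream the tarball through: gpg --encrypt -r <key-id>
src=$(mktemp -d)
echo 'dummy ibdata' > "$src/ibdata1"
tar czf "$src.tar.gz" -C "$src" .
du -h "$src.tar.gz"
```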