Michael Mahemoff (mahemoff)

mahemoff / backup.sh
Created Feb 2, 2019
mysql incremental backup
#!/bin/bash
function slice() {
  lower=$1
  upper=$(expr $lower + 100000)
  echo "Backing up $lower"
  mysqldump db_name table_name --opt --no-create-info --where "id > $lower and id <= $upper" | gzip -c | ssh user@host "cat > /home/backup/dump$lower.gz"
}
lower=0
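The preview cuts off after lower=0, before the loop that actually walks the table. A minimal driver, assuming you know the table's highest id (MAX_ID below is a placeholder, not from the gist), might look like:

MAX_ID=1000000   # placeholder: highest id in table_name, e.g. from SELECT MAX(id)
while [ "$lower" -lt "$MAX_ID" ]; do
  slice $lower                      # dump rows in (lower, lower+100000]
  lower=$(expr $lower + 100000)     # advance to the next slice
done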
mahemoff / docker-1604.error.md
Created Jan 9, 2019
Fixing Docker error on Ubuntu 16.04

As here, but refined ...

Create or edit /etc/systemd/system/containerd.service.d/override.conf to contain just this:

[Service]
ExecStartPre=

To make it happen:

  • systemctl daemon-reload
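Putting the steps together as a shell session (the preview ends at daemon-reload; the final restart line is an assumption about what follows):

sudo mkdir -p /etc/systemd/system/containerd.service.d
printf '[Service]\nExecStartPre=\n' | sudo tee /etc/systemd/system/containerd.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart containerd docker   # assumption: restart the affected services after the reload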
mahemoff / Ansible Disk Check
Last active Sep 16, 2018
Show disk space and warn about disk full in Ansible
* Shows a message while asserting, like:
ok: [host] => {
"msg": "disk usage 4.2GB of total 20.0GB (21.0%) (should be within limit 90.0%)"
}
* Note this only looks at the first mount point on the current node
* Fails if the disk is near-full
* The last step pushes to a push-based monitoring service, which will alert us if the push doesn't arrive after some time
* You need to set up a variable `disk_limit`, the maximum acceptable usage ratio; e.g. set it to 0.8 to keep disks within 80% of capacity
mahemoff / gist:07c0c3427aa3eb3c0abc07f76fc68279
Created Sep 15, 2018
Show disk space and warn about disk full in Ansible
- name: show disk space
  debug: msg="{{ ((mount.size_total - mount.size_available) / 1000000000) | round(1, 'common') }}GB of {{ (mount.size_total / 1000000000) | round(1, 'common') }}GB ({{ (100 * ((mount.size_total - mount.size_available) / mount.size_total)) | round(1, 'common') }}%)"
  vars:
    mount: "{{ ansible_mounts | first }}"
  tags: disk
- name: e
mahemoff / README.md
Last active Aug 6, 2018
Verifying Google OAuth auth code on the back-end

This is the "missing Ruby example" for the ID flow example at https://developers.google.com/identity/sign-in/web/server-side-flow.

It's easy enough to get an auth code like "4/BlahBlahBlah...", but I couldn't find any working examples of how to exchange it for an access token and the encoded ID token.

To use this, you need to access Google's API console, and under "credentials" establish a client ID and secret, which should go in your environment. (Most examples use a "secrets.json" file, but I don't want to keep a separate config file for every platform, so it's better to put them in something like Rails' secrets.yml or Figaro.)

The auth_code is obtained from your web or native client using Google's front-end libraries. The client posts it to your own back-end, which does the exchange and verifies+stores the result. Note the redirect URI must be configured in Google's "credentials" console, otherwise the call will fail (even though it serves no purpose in this context; it's only needed for a non-JavaScript
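The Ruby code itself isn't visible in this preview, but the exchange the back-end performs is a standard OAuth 2.0 token request. A rough curl equivalent (endpoint per Google's OAuth 2.0 docs; the env var names and redirect URI are placeholders, not from the gist):

curl -s https://oauth2.googleapis.com/token \
  -d code="4/BlahBlahBlah..." \
  -d client_id="$GOOGLE_CLIENT_ID" \
  -d client_secret="$GOOGLE_CLIENT_SECRET" \
  -d redirect_uri="https://example.com/oauth2callback" \
  -d grant_type=authorization_code
# The JSON response contains access_token and id_token (a JWT) to verify and store.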

mahemoff / linodecost.rb
Created Jun 20, 2018
Report monthly Linode cost
#!/usr/bin/env ruby
require 'byebug'
require 'linode'
require 'awesome_print'
# setup API access - key is in file
open('/etc/linode.conf').read =~ /api_key: (.+)/
key=$1
lin = Linode.new(api_key: key).linode
mahemoff / mysql-backup.md

MySQL backup commands and sizes

Adventures in storing and backing up a typical database using innobackupex and gpg. Using gzipped tar due to its ubiquity, even though it's possibly 10-20% worse on performance/storage.
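The commands themselves aren't shown in this preview; a rough sketch of the pipeline described above (paths, the backup directory name and the passphrase handling are assumptions, not the gist's exact commands):

# 1. Take a raw backup and prepare it so it's ready to copy into the MySQL data dir
innobackupex --user=root --password="$MYSQL_PASS" /backup/raw
innobackupex --apply-log /backup/raw/TIMESTAMP_DIR   # TIMESTAMP_DIR: directory created by the previous step

# 2. Compress with gzipped tar
tar czf /backup/db.tar.gz -C /backup/raw TIMESTAMP_DIR

# 3. Encrypt for external storage (prompts for a passphrase)
gpg --symmetric --cipher-algo AES256 -o /backup/db.tar.gz.gpg /backup/db.tar.gz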

Sizes:

PREPARED (i.e. ready to move to the MySQL folder)

  • 2800MB Uncompressed and prepared (roughly the size of live database)
  • 21000MB Uncompressed before compression
  • 700MB Crypted+Compressed (ideal for external storage)
mahemoff / bench.rb
Last active Mar 11, 2018
Benchmarking performance of persistent HTTP requests
#!/usr/bin/env ruby
require 'uri'
require 'net/http'
require 'benchmark'
# dummy fetch first URL to baseline setup (ensures DNS and any routing
# optimisations done)
def prime_fetching(urls)
  ignored = Net::HTTP.get_response(URI(urls.first))
end
mahemoff / MySQL monitoring
Last active Jan 28, 2018
MySQL one-liner for monitoring long queries on the console
+---------+------+------------+-------------------+---------+------+----------+----------------------------------------------------------------------------------------------------------------------------+
| ID      | USER | HOST       | DB                | COMMAND | TIME | STATE    | QUERY                                                                                                                      |
+---------+------+------------+-------------------+---------+------+----------+----------------------------------------------------------------------------------------------------------------------------+
| 2786271 | appy | borg:89700 | global_app_center | Query   | 12   | updating | SELECT `posts` FROM `blog` WHERE `authors`.`id` IN ( 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2.. |
+---------+------+------------+-------------------+---------+------+----------+----------------------------------------------------------------------------------------------------------------------------+
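The one-liner itself isn't shown in this preview, only its output. A comparable console command for watching long-running queries (the 5-second threshold, 2-second refresh and column trimming are my own choices, not necessarily the gist's) would be something like:

watch -n 2 "mysql -e \"SELECT id, user, db, time, state, LEFT(info, 120) AS query FROM information_schema.processlist WHERE command = 'Query' AND time > 5 ORDER BY time DESC\""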
mahemoff / gist:f828acf69bd00d8db06b085221c92b3e
Created Oct 12, 2017
AWS backup folder from command-line with compression and encryption
### Install Python and the aws CLI
[pip install awscli](https://docs.aws.amazon.com/cli/latest/userguide/installing.html)
You may need to add it to your path, e.g. export PATH="$PATH:/home/player/.local/bin"
### Setup AWS S3 bucket
* In S3, create a new backup bucket. You may wish to set it up with versioning and lifecycle management rules so that you can just keep pushing to the same object and old versions will be deleted and/or moved to Glacier. It's also recommended to set up tags and logging if the cost is likely to be significant and should therefore be tracked.
* In IAM, create a programmatic user and make sure it has an access key and secret access key.
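The remaining steps aren't visible in this preview; a plausible version of the backup itself, matching the compression and encryption described in the title (bucket name, folder path and temp file are placeholders, not the gist's values):

aws configure                                              # enter the access key and secret access key created above
tar czf /tmp/folder.tar.gz /path/to/folder                 # compress the folder
gpg --symmetric --cipher-algo AES256 /tmp/folder.tar.gz    # prompts for a passphrase, writes folder.tar.gz.gpg
aws s3 cp /tmp/folder.tar.gz.gpg s3://my-backup-bucket/folder.tar.gz.gpg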