Jeff Dickey jdickey

jdickey / .env.development
Created Feb 27, 2018
Sample .env.development and docker-compose.yml file for Hanami in development mode using Docker
# Define ENV variables for development environment
DATABASE_URL='postgres://postgres:@db:5432/yourapp_development'
SERVE_STATIC_ASSETS="true"
WEB_SESSIONS_SECRET="d31b874186058eb00e17d9cde98e3745408b10666112a401a05a0d7ab392d30c"
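The gist's description also mentions a docker-compose.yml, which the preview does not show. A minimal sketch of what such a file might look like for Hanami in development with Postgres; the service names, image tag, and the Hanami default port 2300 are assumptions here, not the gist's actual file (note the db service name matches the db host in DATABASE_URL above):

```yaml
# Hypothetical docker-compose.yml sketch; all names and values are assumptions.
version: '3'
services:
  db:
    image: postgres:10
    environment:
      POSTGRES_DB: yourapp_development
  web:
    build: .
    env_file: .env.development
    command: bundle exec hanami server --host 0.0.0.0
    ports:
      - "2300:2300"
    depends_on:
      - db
```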
jdickey / semi-automatic-droplet-setup.sh
Last active Feb 20, 2018
Semi-automatic setup of Docker-running Droplet from within Droplet itself.
# Semi-automatic setup of Docker-running Droplet from within Droplet itself
#
# Last updated 2017-01-19 at 14:50 (SGT; GMT+8) by @jdickey
#
# ##### Section 1 of 10: Variables used within this script. #####
#
# **NOTE** that several of these **must** be changed, namely
# * DOCKER_PASSWD
# * DOCKER_USER
# * GITHUB_USER
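The preview cuts the variable list off here. The shape of Section 1 can be sketched as below; the placeholder values, and any variable names beyond the three the original lists, are assumptions:

```shell
# Sketch of the script's Section 1; values are placeholders and, as the
# original stresses, MUST be changed before use.
DOCKER_USER='your-docker-hub-username'     # must change
DOCKER_PASSWD='your-docker-hub-password'   # must change
GITHUB_USER='your-github-username'         # must change

echo "Configured Docker Hub user: ${DOCKER_USER}"
```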
jdickey / ---Troubleshooting Unexpected Ansible Breakage.md
Last active Jan 19, 2018
Writeup for folks trying to help me solve the latest in the series of DigitalOcean+Ansible+Docker confounding experiences

What We're Trying to Do

Referencing this earlier Gist, of course.

The following are from a series of Ansible Playbooks, associated files, and wrapper shell functions which I have been using for some months across multiple projects, all hosted on DigitalOcean. Until approximately 0130 Singapore time on Wednesday 17 January 2018 (GMT+8; 1730 GMT or 0930 PST on 16 January), these scripts had been working quite well. The basic workflow is straightforward:

  1. In the new_droplet.yml Playbook,
    1. Create a new Droplet, with specified values for name, image, region, size, and other values including ssh_key_ids, which is set to the (single) DigitalOcean SSH key ID for the DO user owning the Droplet to be created;
    2. Tag the newly-created Droplet so that it is uniquely identifiable using the Ansible digital_ocean.py dynamic-inventory script;
  2. In the provision_droplet.yml Pla
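The two new_droplet.yml steps above might be sketched as Ansible tasks like the following; the module parameters, variable names, and values are assumptions based on the description, not the actual Playbook:

```yaml
# Hypothetical sketch of new_droplet.yml; all names and values are illustrative.
- hosts: localhost
  connection: local
  tasks:
    - name: Create the Droplet with the specified image, region, and size
      digital_ocean:
        state: present
        command: droplet
        name: "{{ droplet_name }}"
        image_id: docker-16-04
        region_id: sgp1
        size_id: 1gb
        ssh_key_ids: "{{ do_ssh_key_id }}"   # the owning DO user's single SSH key ID
        unique_name: yes
      register: created

    - name: Tag the new Droplet so the digital_ocean.py dynamic inventory can find it
      digital_ocean_tag:
        name: "{{ droplet_name }}-tag"
        resource_id: "{{ created.droplet.id }}"
        state: present
```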
jdickey / Ansible Provisioning of Docker Droplet.md

I've been trying to use Ansible to create and set up a Droplet based on your docker-16-04 image. Creating the droplet using the Ansible digital_ocean module works as expected, but I can't access the Droplet afterwards, as I get an error from ssh.

I can't believe that I'm the first or only customer to try to set things up like this. I've been knocking my head against any information source I can find for two weeks now. Perhaps you can advise?

I've attached my Playbooks and command output as listed below. The dynamic-inventory script referenced as digital_ocean.py is that supplied in the Ansible repo.

  1. Command Output 1 - Create Droplet.log - output from running the new_droplet Playbook (see Item 3);
  2. Command Output 2 - Attempted Provisioning of Droplet.md - output from running the playbook Playbook (see Item 4);
  3. new_droplet.yml - Ansible Playbook tasked with creating and tagging a new Droplet
  4. playbook.ym
jdickey / GnuPG Signature Transition 2017-06-30.txt
Created Jun 30, 2017
GNU Privacy Guard Signature Transition WEF 2017-06-30
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Friday 30 June 2017 12:30 SGT (GMT+8)
This is to serve notice that my GPG (GNU Privacy Guard) key has changed with effect from today, Friday 30 June 2017, following my (revised) standard protocol for key expiration. My new key will remain valid until 31 December 2017. I recommend that other users of GNU Privacy Guard enact such policies and keep revocation certificates for their current keys in offline storage, such as on a thumb drive that is labelled and physically secured separately from their systems.
My old key will continue to work for use in sending me secured messages until it expires later today, Friday 30 June 2017 (and for verifying previously-signed messages thereafter); however, you should be using the new key. To anyone who has signed my old key, I’d greatly appreciate your signing the new one (after satisfying yourself of its provenance and legitimacy, of course).
This message is signed by
jdickey / introduction.md
Created Jun 18, 2017 — forked from wagnerjgoncalves/introduction.md
Notes from Growing Rails Applications in Practice

Growing Rails Applications in Practice

  • How to use discipline, consistency and code organization to make your code grow more gently.

  • As you cycle through patterns, your application becomes a patchwork of different coding techniques.

    It is hard to tell whether all those new techniques actually help, or whether you are just adding layers of indirection.

  • Large applications are large; what we can do is organize a codebase in a way that "scales logarithmically".

jdickey / exploring-test-breakage.md
Last active Mar 17, 2017
Attempting to discover reason for test breakage in some/most random-order sequences.

This is essentially a duplicate of/companion to this comment on jdickey/rack-service_api_versioning#14.


We have a temporary branch with some new tests for verifying that an invalid SBU causes an HTTP 400 status code to be generated. Commenting out the tests for that, with the code in place, continues to yield a green bar. Enabling the tests causes failures — in apparently unrelated tests; the invalid-SBU-generates-400 tests always pass.

As a reminder, the only known way to run MiniTest tests in a reproducible order requires a somewhat cumbersome syntax to set the randomisation seed; e.g.,

ruby -e 'require "./test/rack/service_api_versioning/accept_content_type_selector_test.rb"' -- -v --seed=16442
jdickey / HOLY SMOKES, RbNaCl is fast.md
Created Dec 28, 2016
Benchmarking RbNaCl signature and encryption/decryption vs GPGME

It's not really fair to compare the speed of the Ruby binding to the Networking and Cryptography (NaCl) library with that of the Ruby interface to GnuPG Made Easy (GPGME); their execution profiles and environments are radically different. RbNaCl is a unified library, whereas GPGME is an interface to a set of libraries that interact with at least one server process (gpg-agent). Benchmark timings bear this out: generating signatures or encrypting/decrypting data with RbNaCl benchmarks at thousands of iterations per second on our development system, while GPGME clocks in at a half-dozen iterations per second for any operation other than verifying a signature (which is relatively speedy at ~50 iterations/second).

One needs to keep in mind that the signing keys and encrypted data for RbNaCl exist natively in binary form only, requiring conversion using [Base64](http://ruby

jdickey / Help with Rake best practices.md

I have been carrying this Rakefile around between projects; it uses another file which reopens the FlogTask class and adds an attr_accessor to it, which the task definition in the Rakefile then uses.

This works, but as you can readily tell, it's less than ideal. In particular, the Rakefile defines multiple tasks which are the same for each project; the only change is usually the Flog threshold value.

What I want to be able to do is:

  1. Subclass a tool's standard Rake task (e.g., FlogTask). The subclass' #initialize method would then set up our standard settings as is presently done in the Rakefile;
  2. Define namespaced Rake tasks (in, e.g., lib/tasks/custom_flog.rake) that use those subclasses rather than the tool's standard Rake task; reducing boilerplate and possibility of copy/paste errors;
  3. Have those tasks be available in
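The list above is cut off in the preview, but the shape being asked for (a project-wide subclass that bakes in defaults, used from a namespaced task file) might look like this sketch. Since neither the Rakefile nor FlogTask's actual API is shown, a stand-in Rake::TaskLib subclass plays the role of the tool's task class, and every name here is illustrative:

```ruby
require "rake"
require "rake/tasklib"

# Stand-in for a tool's shipped Rake task class (e.g. FlogTask); the real
# FlogTask's constructor may differ.
class ToolTask < Rake::TaskLib
  attr_accessor :name, :threshold

  def initialize(name, threshold)
    @name      = name
    @threshold = threshold
    define
  end

  def define
    namespace :quality do
      task(@name) { puts "#{@name}: threshold #{@threshold}" }
    end
  end
end

# Step 1: the subclass's #initialize carries the project-wide defaults,
# so each project's Rakefile no longer repeats them.
class ProjectFlogTask < ToolTask
  def initialize(name = :flog, threshold = 200)
    super
  end
end

# Step 2: a namespaced task file (e.g. lib/tasks/custom_flog.rake) then
# needs only this one line:
ProjectFlogTask.new

Rake::Task["quality:flog"].invoke  # prints "flog: threshold 200"
```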