Tim McCormack (timmc)

timmc / gist:df5fbb6e069fb8c1c4e181a29930ace3
Last active Aug 9, 2019
Building a sqlite DB for the Pwned Passwords data

Last executed 2019-06-25 with the v4 dump:

  1. Make sure you have 60 GB free disk space and some extra to spare. Alternatively, take a walk on the wild side and delete source files as soon as you've used them.
  2. Download the SHA-1 (ordered by hash) torrent from https://haveibeenpwned.com/Passwords
  3. Unpack and strip off the counts:
    7z x -so pwned-passwords-sha1-ordered-by-hash-v4.7z pwned-passwords-sha1-ordered-by-hash-v4.txt | sed 's/:.*//' > hashes.lst
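
After that, the sqlite import might look something like the following sketch (the database name, table name, and schema here are illustrative, not necessarily what the gist uses):

    sqlite3 pwned.db <<'EOF'
    CREATE TABLE hashes (hash TEXT PRIMARY KEY) WITHOUT ROWID;
    .mode csv
    .import hashes.lst hashes
    EOF

Because hashes.lst is already sorted, rows arrive in primary-key order, which keeps the b-tree build relatively cheap.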
    
timmc / jwt-sign-ES256.sh
#!/bin/bash
# Create and sign a JWT token with ES256 given the path to an ECDSA
# private key and a JSON payload.
# $0 path/to/keypair.der '{"JSON": "payload"}'
# Example keypair creation:
# openssl ecparam -name prime256v1 -genkey -noout -outform DER > example-keypair.der
# A few tips for generating the payload:
# - Pipe raw strings through `jq --raw-input .` to encode them as
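
For context, a hedged sketch of the base64url step such a script needs (the function and variable names below are illustrative, not taken from the gist):

    b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
    header=$(printf '%s' '{"alg":"ES256","typ":"JWT"}' | b64url)
    payload=$(printf '%s' "$2" | b64url)
    signing_input="$header.$payload"

The ES256 signature over "$signing_input" is produced with the EC private key; because openssl emits it DER-encoded, it must be converted to the raw 64-byte R||S form before being base64url-encoded and appended as the third dot-separated segment.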
timmc / output-2-10.tsv
Created Jun 5, 2019
Uptime output for in/out hysteresis healthcheck
0.00 1.00000
0.01 0.99937
0.02 0.99507
0.03 0.98962
0.04 0.98327
0.05 0.96881
0.06 0.95235
0.07 0.94017
0.08 0.91365
0.09 0.88673
timmc / elb-hysteresis-2-10.svg
Last active Jun 5, 2019
Simulating ELB-style hysteresis for a host that exhibits random failures for both the healthcheck and regular requests
timmc / gist:cbf503895f08dc39f3bc471aaa5e068a
Have the following been addressed in the branch, if appropriate?
- Tests (unit, API, integration)
- Docs (both in source and in docs directory, and in public docs if separate)
- Changelog
- Compatibility with previous versions (calls, shared files or DBs, data formats -- backward and forward compatibility)
- Rollback friendly?
- Feature switches?
timmc / port-forwarding-jmx.md

Port-forwarding JMX

Proof of concept:

  • Terminal 1:
    • SSH to remote host
    • Start a Java process with JMX registry port 50004, RMI callback port 50005, and RMI hostname pinned to localhost:
      java -Dcom.sun.management.jmxremote.port=50004 -Dcom.sun.management.jmxremote.rmi.port=50005 -Djava.rmi.server.hostname=localhost -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -cp /some/jar/file main.class
  • Terminal 2:
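
A hedged sketch of what the local-forwarding side typically looks like, reusing the ports from the command above (the user and host names are placeholders, not from the gist):

    ssh -L 50004:localhost:50004 -L 50005:localhost:50005 user@remote-host
    # then, from the local machine, point a JMX client at the forwarded registry port:
    jconsole localhost:50004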
timmc / weighted-shuffle-sampling.clj
Created Mar 25, 2019
A sampling-based version of weighted-shuffle (better to use the exponential random solution)
;; This is asymptotically slower (n^2) than the exponential random sort
;; one (n log n) shown in https://gist.github.com/timmc/1211c1ac8ae96c2b42c94124005b5414
;; but it is preserved here for possible later interest
(defn weighted-random-sample
  "Given a coll of weights, pick one according to a weighted-random
  selection, and return its index. Weights must be non-negative."
  [weights]
  (when (empty? weights)
    (throw (IllegalArgumentException. "Cannot sample from empty list")))
timmc / weighted-shuffle-fancy.clj
(defn weighted-shuffle
  "Perform a weighted shuffle on a collection. weight-fn is called at
  most once for every element in the collection."
  [weight-fn coll]
  (->> coll
       (shuffle) ;; tie-break any zero weights
       (map (fn [el]
              ;; Bound the weight to positive values
              (let [weight (Math/max Double/MIN_VALUE (double (weight-fn el)))
                    ;; Weighted Random Sampling (2005; Efraimidis, Spirakis)
timmc / spclsfc.md
Last active Mar 19, 2019
Success-prioritized, concurrency-limited stochastic fallback cascade

Algorithm:

  • State
    • For each node, store:
      • Rolling window of 6 historical stat buckets
      • One additional "sticky" bucket
    • Write stats to newest bucket
    • Age out the oldest bucket every 5 seconds, and add a new one
      • If the oldest bucket had data, copy it over the "sticky" bucket
    • Recorded stats: Number of finished requests, number that
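
A minimal bash sketch of the rolling-bucket state described above (illustrative only; it tracks a single counter, and the names are not from the gist):

    buckets=(0 0 0 0 0 0)   # rolling window of 6 stat buckets, newest last
    sticky=0                # the additional "sticky" bucket
    record_finished() { buckets[5]=$(( buckets[5] + 1 )); }   # write stats to the newest bucket
    age_buckets() {         # intended to run every 5 seconds
      # If the oldest bucket had data, copy it over the "sticky" bucket.
      if (( buckets[0] > 0 )); then sticky=${buckets[0]}; fi
      buckets=("${buckets[@]:1}" 0)   # age out the oldest bucket, add a new one
    }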
timmc / urldecode.sh
#!/bin/bash
# Decode URL-encoded (percent-encoded) input from stdin, one line at a time.
while IFS= read -r line; do
  url_encoded="${line//+/ }"
  printf '%b\n' "${url_encoded//%/\\x}"
done
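
For example, assuming the script is saved as urldecode.sh and made executable:

    echo 'foo%20bar+baz%21' | ./urldecode.sh
    # prints: foo bar baz!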