Chris Trahey (ctrahey)

First, I want to advocate for a 'decoupled' solution: instead of trying to print directly when you get a new order, have a task that prints all orders which have not yet been printed (possibly with some sanity checks, like "received in the last 24 hours and not yet printed").
The reason I suggest this is that it isolates the two parts of the code from each other's errors. For example, if the printing code fails or takes a while to finish, does it delay order processing? This kind of decoupling is a key feature of designing robust systems.
To achieve this, you will first need some code that can take an order and get it printed. How you do this depends greatly on your specifics, so I'll leave the details up to you (possibly with a different, more specific SO question to help). The main requirements for this component are that it takes in an order and reports back whether it succeeded.
Then your database should have a way to track whether an order was printed. From experience
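As a minimal sketch of such a polling task — assuming a Postgres "orders" table with id, created_at, and printed columns, plus a hypothetical print_order command that exits non-zero on failure — it could look like:

#!/usr/bin/env bash
# Polling task sketch: print every order not yet marked as printed.
# The orders table layout and the print_order command are assumptions;
# adapt both to your actual schema and printing mechanism.
psql -qtAc "SELECT id FROM orders
            WHERE printed = false
              AND created_at > now() - interval '24 hours';" |
while read -r order_id; do
  if print_order "$order_id"; then
    # Mark as printed only after the print actually succeeded.
    psql -qtAc "UPDATE orders SET printed = true WHERE id = ${order_id};"
  else
    echo "order ${order_id} failed to print; will retry next run" >&2
  fi
done

Run it from cron (or a systemd timer) every minute or so; because the query re-selects anything still unprinted, a failed run is simply retried on the next pass.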
ctrahey / rook-build-output.txt
Created April 29, 2021 03:40
Build output from https://github.com/rohan47/rook/tree/multus_cluster_nw. Docker version info is at the bottom.
15-MBP:rohan47rook christophertrahey$ build/run make -j4
==== building the cross container (this could take minutes the first time)
==== Creating docker volume cross-volume and syncing sources
==== for first time. This could take a few seconds.
=== installing helm
=== ensuring modules are tidied
=== helm package rook-ceph
==> Linting /home/rook/go/src/github.com/rook/rook/_output/rook-ceph
1 chart(s) linted, 0 chart(s) failed
ctrahey / README.md
Last active April 2, 2021 19:33
Upgrading a Kubernetes Cluster

Upgrade Kubernetes Cluster

A simple script that enumerates the nodes, cordons/drains each one, upgrades kubelet and kubectl, then uncordons it (sketched below). It roughly follows the official docs and assumes the control plane has already been upgraded.

Steps I ran on the Control Plane first:

KUBE_VERSION=1.20.5
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=${KUBE_VERSION}-00 && \
apt-mark hold kubeadm
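
The per-node loop itself could look roughly like this — a sketch, assuming Debian/Ubuntu nodes reachable over passwordless SSH by their Kubernetes node names (note that kubectl drain cordons the node implicitly):

#!/usr/bin/env bash
# Sketch: enumerate nodes, drain, upgrade kubelet/kubectl over SSH, uncordon.
# Skip or handle the control-plane node separately if needed.
KUBE_VERSION=1.20.5

for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  ssh "$node" "apt-mark unhold kubelet kubectl && \
    apt-get update && \
    apt-get install -y kubelet=${KUBE_VERSION}-00 kubectl=${KUBE_VERSION}-00 && \
    apt-mark hold kubelet kubectl && \
    systemctl restart kubelet"
  kubectl uncordon "$node"
done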
ctrahey / README.md
Last active March 22, 2021 18:44
Cleanup Ceph Disks from cluster.yaml

Cleanup Ceph Disks

This script is designed to work with yq, taking its arguments directly from a CephCluster CRD (cluster.yaml) and zapping all matching disks on the hosts (see the sketch after the caveats below).

Caveats/Assumptions:

Danger: This is (currently) a wildly destructive script.

  1. Requires named hosts with specific per-node config
  2. Assumes devicePathFilter is used (rather than deviceFilter or explicit device names)
  3. Assumes the devicePathFilter regex treats /dev/disk/by-path as its base
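
A dry-run sketch of that flow, assuming yq v4, the CephCluster fields spec.storage.nodes[].name and .devicePathFilter, and SSH access to each host — the actual zap is deliberately left commented out:

#!/usr/bin/env bash
# DRY RUN sketch: echoes the disks that WOULD be zapped; uncomment the
# sgdisk line at your own risk. yq v4 and the field paths are assumptions.
yq eval '.spec.storage.nodes[] | .name + " " + .devicePathFilter' cluster.yaml |
while read -r node filter; do
  ssh "$node" "ls /dev/disk/by-path" | grep -E "$filter" |
  while read -r path; do
    echo "would zap /dev/disk/by-path/${path} on ${node}"
    # ssh "$node" "sgdisk --zap-all /dev/disk/by-path/${path}"
  done
done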
ctrahey / logs.txt
Created January 28, 2021 03:35
log output for Linkerd install issue
15-MBP:~ christophertrahey$ k get pods -n linkerd
NAME                                      READY   STATUS        RESTARTS   AGE
alpine2                                   1/1     Terminating   0          28m
linkerd-controller-864657567f-pp5ml       1/2     Running       0          33m
linkerd-destination-7f7fc6bf8f-xpwz9      1/2     Running       0          33m
linkerd-grafana-54d58fcc87-lct2q          1/2     Running       0          33m
linkerd-identity-77cd7df8b6-g77df         2/2     Running       0          33m
linkerd-prometheus-578869b9f4-tgqrz       1/2     Running       0          33m
linkerd-proxy-injector-576c6c96dc-mqssd   1/2     Running       0          33m
linkerd-sp-validator-67cbdf694f-n5cpw     1/2     Running       0          33m

Keybase proof

I hereby claim:

  • I am ctrahey on github.
  • I am ctrahey (https://keybase.io/ctrahey) on keybase.
  • I have a public key ASC4UnxxPoPkNQOsdr7E1GRyXkCHqT9HN-iqcZmEbDc5VAo

To claim this, I am signing this object:
