Erik Nelson eriknelson

eriknelson / gist:2650e2f8f5e3312d0932956ea238a985
Last active December 4, 2020 19:15
Sprint retro pseudo code and questions
discoverCurrentSprint()
    fetchOpenProjects()
    sort on <num> from project name "MTC Sprint <num>", return highest

[3rd Thurs @ 1pm EST]
startNewSprint()
    currentSprint = discoverCurrentSprint()
    // Set up a new sprint board
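The retro pseudocode above might look like the following Python sketch. The project names are passed in directly here; the real fetchOpenProjects() would call the tracker's API, and board creation is out of scope, so everything beyond the name parsing is a hypothetical stand-in:

```python
import re

# Boards are assumed to be named exactly "MTC Sprint <num>".
SPRINT_RE = re.compile(r"MTC Sprint (\d+)")

def discover_current_sprint(project_names):
    """Pick the open project with the highest sprint number, or None."""
    numbered = [
        (int(m.group(1)), name)
        for name in project_names
        if (m := SPRINT_RE.fullmatch(name))
    ]
    return max(numbered)[1] if numbered else None

def start_new_sprint(project_names):
    """Return the name the next sprint board would get.

    Runs on the sprint cadence ([3rd Thurs @ 1pm EST] in the notes);
    actually creating the board is not sketched here.
    """
    current = discover_current_sprint(project_names)
    next_num = int(SPRINT_RE.fullmatch(current).group(1)) + 1 if current else 1
    return f"MTC Sprint {next_num}"
```

Sorting on the parsed integer (rather than the raw name) avoids the lexicographic trap where "MTC Sprint 9" would sort after "MTC Sprint 10".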
TASK [k8s_nginx_ingress : Create Hello World webapp ingress] **********************************************************************************************************************************
fatal: [panamera.kube.nsk.io]: FAILED! => {
"changed": false,
"error": 500,
"reason": "Internal Server Error",
"status": 500
}
MSG:

Repo: mig-controller

089e7677 Filter velero pod list on currently-running (non-terminating) pods
09044323 Fix discovery debug tree filtering on PVBs/PVRs (#723)
d8f27843 Use ubi8/go-toolset:1.14.7 to avoid ratelimit build failures (#798)
d6ebd79b Bug 1894822: Fix flipped src/dest pods (#797)
9d07f267 Fix PodVolumeBackup & PodVolumeRestore not updating in the discovery inventory correctly. fixes #724
Warning FailedScheduling 57s (x4 over 65s) default-scheduler 0/6 nodes are available: 1 Insufficient cpu, 2 node(s) had volume node affinity conflict, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
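The FailedScheduling event above lists three independent reasons the six nodes were rejected: one node short on CPU, two with a volume node affinity conflict, and three masters tainted with node-role.kubernetes.io/master. Only the taint portion can be fixed from the pod spec; if the pod genuinely belongs on a master, a toleration along these lines would clear it (a sketch only; the pod name and image are hypothetical, and the CPU and volume-affinity rejections need separate fixes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical
spec:
  containers:
  - name: app              # hypothetical
    image: registry.example.com/app:latest
  tolerations:
  # Matches the {node-role.kubernetes.io/master: } taint from the event;
  # omitting "effect" tolerates the taint regardless of its effect.
  - key: node-role.kubernetes.io/master
    operator: Exists
```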

Getting started with MTC hacking

Getting clusters set up

The first thing you'll need to start hacking on MTC is a couple of clusters. The main use case, and the original impetus for the project, is migrating users from OCP 3.x to 4.x, since it was determined there would be no in-place upgrade path between them. However, nothing precludes MTC from serving alternative topologies and use cases, such as 3.x -> 3.x or 4.x -> 4.x migrations. Most devs run a 3.11 cluster and a 4.x cluster, where x is the latest stable OCP 4 minor release.

* b7cb95e (HEAD -> master, tag: 1.1.1, upstream/master, origin/master, origin/HEAD) refactor all templates to have variables in double quotes (#126)
* acd2b75 specify the host of route explicitly (#124)
* 7ddf7da (tag: 1.1.0) report errors for failed pvc patches instead of exiting the loop (#123)
* aa9988e make resource names static and unique (#122)
* da9dbd1 Test and document incremental staging (#120)
* 25ad2a0 Making stunnel port configureable (#117)
* dec25ee Improvements in Stage 2 around volume sizes on the destination cluster (#110)
* c3347d1 add LICENSE (#119)
* 42a301f (tag: 1.0.0) 1.0.0 rc (#118)
* 51c3382 Merge pull request #113 from fbladilo/rsync_flags
#!/usr/bin/env bash
# vim: set ft=sh:
# Alert when yum logged activity in the last day. `tail -n +2` drops the
# journal's "-- Logs begin at ..." header so only real entries are counted.
update_lc=$(journalctl --since "yesterday" -t yum | tail -n +2 | wc -l)
if [[ $update_lc -gt 0 ]]; then
  datestamp=$(date +%s)
  scratch_file="{{ scratch_path }}/update-alert-${datestamp}.txt"
  journalctl --since "yesterday" -t yum &> "$scratch_file"
fi
eriknelson / ssh-stunnel.yaml
Created June 29, 2020 15:19 — forked from Miciah/ssh-stunnel.yaml
Example of configuring OpenSSH on OpenShift using stunnel and a passthrough route.
# This is an example for configuring a Kubernetes deployment to provide SSH
# access to an OpenShift cluster. The deployment runs OpenSSH and stunnel.
# SSH clients connect through an OpenShift passthrough route using stunnel.
#
# Example usage:
#
# Create a host key-pair for sshd:
#
# /bin/ssh-keygen -q -t rsa -f ssh_host_rsa_key -C '' -N ''
#
What about Evergreen and Librem 5 USA?
______________________________________
As far as Evergreen and Librem 5 USA shipping dates go, while there are
parts of that process that are running in parallel to Dogwood, there are
other parts (such as molds and FCC/CE testing on the final mass-produced
PCB) which must wait until after the final Dogwood phones have arrived
and have been thoroughly evaluated. Before we commit to a revised
shipping date for Evergreen and Librem 5 USA, we’d like a few more weeks
to complete the evaluation of the final Dogwood phones.
Warning Failed 6m36s (x6 over 7m17s) kubelet, <node>.internal Error: set memory limit 10485760 too low; should be at least 12582912
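The kubelet error above means the container's memory limit of 10485760 bytes (10Mi) is below the container runtime's 12582912-byte (12Mi) floor, so the container can never start. A container spec clearing that floor might look like this (a sketch only; the pod name, image, and chosen limit are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical
spec:
  containers:
  - name: app              # hypothetical
    image: registry.example.com/app:latest
    resources:
      requests:
        memory: 16Mi
      limits:
        memory: 16Mi       # anything >= 12Mi satisfies the runtime minimum
```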