sc68cal / policy.json
Created July 20, 2021 15:30
Route53 policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListResourceRecordSets",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/EXAMPLE_ZONE_ID"
      ]
    }
  ]
}

Keybase proof

I hereby claim:

  • I am sc68cal on github.
  • I am sc68cal (https://keybase.io/sc68cal) on keybase.
  • I have a public key ASDSL9frhA5XGx8yiPyJZQx6cR_ygks-kxNSGj660B9Jsgo

To claim this, I am signing this object:

scolli572@HQSML-155235 ~ » minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.10.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.10.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.10.0'
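The checks above pick up the hypervisor and OpenShift version from existing configuration; assuming the v3.10-era minishift CLI, the same choices could also be passed explicitly, as a sketch (not from the original session):

minishift start --vm-driver virtualbox --openshift-version v3.10.0
# Put the bundled 'oc' client on PATH once the cluster is up:
eval $(minishift oc-env)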
scolli572@HQSML-155235 ~/src » git clone git@github.com:openshift/origin.git 130 ↵
Cloning into 'origin'...
Load key "/Users/scolli572/.ssh/id_rsa": invalid format
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
scolli572@HQSML-155235 ~/src » cat ~/.ssh/id_rsa
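The "invalid format" error typically means OpenSSH could not parse the private key file, whether because it is corrupted or stored in an unexpected key format. A quick way to check, as a sketch:

# Prints the matching public key if the private key parses;
# fails with the same "invalid format" error otherwise.
ssh-keygen -y -f ~/.ssh/id_rsa
# Changing the passphrase rewrites the file, which can repair
# a key whose encoding is merely unusual rather than corrupt:
ssh-keygen -p -f ~/.ssh/id_rsa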
scollins@Seans-MacBook-Pro ~/src/kubespray ±etcd_count » vagrant destroy -f k8s-03
==> k8s-03: Forcing shutdown of VM...
==> k8s-03: Destroying VM and associated drives...
scollins@Seans-MacBook-Pro ~/src/kubespray ±etcd_count » vagrant ssh k8s-01
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
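Having destroyed k8s-03 above, the node can be rebuilt from the same Vagrantfile; a minimal sketch, assuming kubespray's stock Vagrant setup:

vagrant up k8s-03     # recreate the destroyed VM
vagrant status        # confirm the state of all machines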
From b536eb89d895cb80489ac970923e389a0d73636e Mon Sep 17 00:00:00 2001
From: "Sean M. Collins" <sean@coreitpro.com>
Date: Mon, 11 Dec 2017 10:47:15 -0500
Subject: [PATCH] Evict wtp-k8s-3 from the cluster

Hardware issues with node 3 mean we'll promote node 4 to a master.

---
k8s-envs/vpcbeta-production/hosts | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
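A patch in this mbox form is normally applied with git am; the filename below is the one git format-patch would conventionally generate, not taken from the source:

git am 0001-Evict-wtp-k8s-3-from-the-cluster.patch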
TASK [lxc_hosts : Cached image preparation script] *****************************
task path: /etc/ansible/roles/lxc_hosts/tasks/lxc_cache_preparation.yml:51
Monday 17 April 2017 14:59:21 -0500 (0:00:00.107) 0:03:07.037 **********
ok: [stack.projwrigley.com] => {"changed": false, "checksum": "861f9b837a59976af2f1893e8dcdc5773ade9a01", "dest": "/var/lib/lxc/LXC_NAME/rootfs/usr/local/bin/cache-prep-commands.sh", "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/lxc/LXC_NAME/rootfs/usr/local/bin/cache-prep-commands.sh", "size": 1253, "state": "file", "uid": 0}
ok: [network.projwrigley.com] => {"changed": false, "checksum": "861f9b837a59976af2f1893e8dcdc5773ade9a01", "dest": "/var/lib/lxc/LXC_NAME/rootfs/usr/local/bin/cache-prep-commands.sh", "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/lxc/LXC_NAME/rootfs/usr/local/bin/cache-prep-commands.sh", "size": 1253, "state": "file", "uid": 0}
ok: [alt-files1.projwrigley.com] => {"changed": false, "checksum": "861f9b837a59976af2f1893e8dcdc5773ade9a01", "dest": "/var/lib/lxc/LXC_NAME/rootfs/usr/local/bin/cache-prep-commands.sh", "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/lxc/LXC_NAME/rootfs/usr/local/bin/cache-prep-commands.sh", "size": 1253, "state": "file", "uid": 0}
TASK [setup] *******************************************************************
Friday 14 April 2017 10:43:39 -0500 (0:00:00.076) 0:00:00.076 **********
fatal: [stack.projwrigley.com_nova_console_container-b04871dc]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Warning: Permanently added '10.171.203.26' (ECDSA) to the list of known hosts.\r\n------------------------------------------------------------------------------\n* WARNING *\n* You are accessing a secured system and your actions will be logged along *\n* with identifying information. Disconnect immediately if you are not an *\n* authorized user of this system. *\n------------------------------------------------------------------------------\nError: container stack.projwrigley.com_nova_console_container-b04871dc is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE"}
fatal: [alt-files1.projwrigley.com_swift_proxy_container-ff67dbc6]: FAILED! => {"changed": false, "failed": true,
ubuntu@example-k8s-master-1:~$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
dnsmasq is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/dnsmasq
kubedns is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
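As the last line suggests, cluster-info dump collects far more state than the summary above; a sketch, with the output directory a placeholder:

kubectl cluster-info dump --output-directory=/tmp/cluster-state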
scollins@Sean-Collins-MBPr15 ~/src/k8s/kargo ±master⚡ » ansible-playbook -b -v -K -i inventory/inventory.yml cluster.yml
Using /Users/scollins/src/k8s/kargo/ansible.cfg as config file
SUDO password:
PLAY [localhost] ***************************************************************
TASK [bastion-ssh-config : set_fact] *******************************************
ok: [localhost] => {"ansible_facts": {"has_bastion": false}, "changed": false}
TASK [bastion-ssh-config : set_fact] *******************************************
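For reference, the flags in the invocation above: -b escalates via become, -K prompts for the privilege-escalation password (the SUDO password prompt), -v increases verbosity, and -i selects the inventory. A single node could be targeted the same way; a sketch, not from the original run:

ansible-playbook -b -v -K -i inventory/inventory.yml cluster.yml --limit k8s-01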