[mgugino@mguginop50 gcp-testing]$ cat mcps.yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineControlPlaneSet
metadata:
  name: default
spec: {}
status:
  controlPlaneMachines:
  - name: "x1"
    replacementInProgress: false
{
  "ignition": {
    "config": {},
    "security": {
      "tls": {}
    },
    "timeouts": {},
    "version": "2.2.0"
  },
  "networkd": {},
// put this in reconciler_test.go
func TestExists2(t *testing.T) {
    scope, err := newLocalMachineScope()
    if err != nil {
        t.Fatalf("err: %v", err)
    }
    r := newReconciler(scope)
    exists, err := r.instanceExists()
    if err != nil {
        t.Fatalf("err: %v", err)
    }
    t.Logf("exists: %v", exists)
}
---
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: cluster-example
spec:
  controllers:
    clustermanager:
      image: k8s.io/cluster-api/clustermanager:latest
    controlplane:
michaelgugino / actuator.go (created May 7, 2019): actuator-library
// pkg/actuator/common.go
package actuator

func (a *Actuator) Reconcile(...) {
    ...
    a.Create(...)
}

// pkg/actuator/myprovider.go
// my custom actuator code lives here.
[mgugino@host installer]$ mv terraform.tfstate terraform.tfstate.bak
[mgugino@host installer]$ ./bin/openshift-install create cluster
INFO Consuming "Kubeconfig Admin" from target directory
INFO Consuming "Terraform Variables" from target directory
INFO Creating cluster...
ERROR
ERROR Error: Error applying plan:
ERROR
ERROR 2 errors occurred:
ERROR * module.iam.aws_iam_instance_profile.worker: 1 error occurred:

Scenario 1

Mismatched FQDN vs. hostname.

3.9 install.

Ensure the output of 'hostname' != 'hostname -f' on each host.

EG:
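A minimal check along these lines (a sketch; assumes bash and that 'hostname -f' returns the FQDN):

#!/bin/bash
# Print both names and flag whether this host has the mismatch the scenario calls for.
short="$(hostname)"
fqdn="$(hostname -f)"
if [ "$short" != "$fqdn" ]; then
    echo "mismatch present: hostname=$short fqdn=$fqdn"
else
    echo "no mismatch: hostname and FQDN are both $short"
fi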

michaelgugino / cluster-config.yaml (last active October 2, 2018): Compare input and output
apiVersion: v1
data:
  install-config: |
    admin:
      email: myemail@myhost.com
      password: somesupergoodpasswordgoeshere
      sshKey: |
        ssh-rsa <public key>
    baseDomain: tt2.testing
    clusterID: c3b78518-d102-4b64-b84f-c9e2fd34480a
Bazel:
I couldn't get the pod provided to actually work, so I built my own:
https://gist.github.com/michaelgugino/257519b9e2c3804c8b291a13726e6a9e
In hindsight, the pod provided might actually work, given the following:
Before attempting to run the docker/podman command, make sure you mkdir .cache/
and chmod 777 it; otherwise the pod won't be able to write to that directory and
things break.
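The prep boils down to something like this (a sketch; assumes the pod mounts the current working directory and writes its cache under .cache/):

# Create the cache directory the containerized build expects and make it
# world-writable so the pod's user can write to it, then run docker/podman as usual.
mkdir -p .cache
chmod 777 .cache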
#!/bin/bash
# Clean up leftovers from a libvirt-based install: ignition configs and disk
# images, then the bootstrap/master0 domains and the 'tectonic' network.
rm -f /var/lib/libvirt/images/*.ign
rm -f /var/lib/libvirt/images/coreos_base
rm -f /var/lib/libvirt/images/master0
rm -f /var/lib/libvirt/images/bootstrap
virsh undefine bootstrap
virsh undefine master0
virsh net-destroy tectonic
virsh net-undefine tectonic