
Andrew Butcher abutcher

  • Red Hat
  • Raleigh, NC
➜ hive (master) make
gofmt -w -s pkg contrib
go vet -mod=vendor ./pkg/... ./cmd/... ./contrib/...
cd v1alpha1apiserver && go vet -mod=vendor ./cmd/... ./pkg/...
/home/abutcher/go/src/github.com/openshift/hive/v1alpha1apiserver
go install -mod=vendor k8s.io/code-generator/cmd/deepcopy-gen
go install -mod=vendor k8s.io/code-generator/cmd/conversion-gen
go install -mod=vendor k8s.io/code-generator/cmd/defaulter-gen
go install -mod=vendor k8s.io/code-generator/cmd/client-gen
go install -mod=vendor sigs.k8s.io/controller-tools/cmd/controller-gen
1a0d293a - (HEAD -> syncset-errors) vendor tmp (17 hours ago) <Andrew Butcher>
diff --git go.sum go.sum
index 104f1381..f1c9433b 100644
--- go.sum
+++ go.sum
@@ -71,7 +71,6 @@ github.com/Azure/go-autorest/autorest/date v0.2.0 h1:yW+Zlqf26583pE43KhfnhFcdmSW
github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g=
github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
-github.com/Azure/go-autorest/autorest/mocks v0.3.0 h1:qJumjCaCudz+OcqE9/XtEPfvtOjOmKaui4EOpFI6zZc=
➜ hive (syncset-errors) ✗ make test
go install -mod=vendor k8s.io/code-generator/cmd/deepcopy-gen
go install -mod=vendor k8s.io/code-generator/cmd/conversion-gen
go install -mod=vendor k8s.io/code-generator/cmd/defaulter-gen
go install -mod=vendor k8s.io/code-generator/cmd/client-gen
go install -mod=vendor sigs.k8s.io/controller-tools/cmd/controller-gen
go install -mod=vendor github.com/jteeuwen/go-bindata/go-bindata
go install -mod=vendor github.com/golang/mock/mockgen
go install -mod=vendor golang.org/x/lint/golint
go install -mod=vendor github.com/golangci/golangci-lint/cmd/golangci-lint
abutcher / gist:ad9271655e4fe63a2740
Last active March 25, 2020 13:48
Regenerate ose dns entries
#!/usr/bin/env oo-ruby
require "/var/www/openshift/broker/config/environment"

Rails.configuration.analytics[:enabled] = false
Mongoid.raise_not_found_error = false

class Regenerate
  def self.run
    entries = []
    Application.all.each do |app|
DEBU[0337] loading SSH key secret clusterDeployment=abutcher controller=remotemachineset namespace=myproject
INFO[0337] reconcile complete clusterDeployment=abutcher controller=unreachable elapsed=191.90019ms namespace=myproject
DEBU[0337] remote cluster version status clusterDeployment=abutcher clusterversion.status="{{4.2.4 quay.io/openshift-release-dev/ocp-release@sha256:cebce35c054f1fb066a4dc0a518064945087ac1f3637fe23d2ee2b0c433d6ba8 false} [{Completed 2019-11-11 11:48:10 -0500 EST 2019-11-11 12:06:16 -0500 EST 4.2.4 quay.io/openshift-release-dev/ocp-release@sha256:cebce35c054f1fb066a4dc0a518064945087ac1f3637fe23d2ee2b0c433d6ba8 false}] 1 gd3peR_jS2k= [{Available True 2019-11-11 12:06:16 -0500 EST Done applying 4.2.4} {Failing False 2019-11-11 11:54:17 -0500 EST } {Progressing False 2019-11-11 12:06:16 -0500 EST Cluster version is 4.2.4} {RetrievedUpdates False 2019-11-11 11:48:10 -0500 EST RemoteFailed Unable to retrieve available up
$ make deploy
go generate ./pkg/... ./cmd/...
hack/update-bindata.sh
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go crd
[debug] name="BaseDomain"
[debug] type="string"
[debug] JSONPath=".spec.baseDomain"
[debug] name="Installed"
[debug] type="boolean"
abutcher / pacemaker2native.md
Last active February 25, 2019 09:34
Upgrading from Pacemaker HA to Native HA

Using Playbooks to Upgrade from Pacemaker to Native HA

These steps assume that the cluster has already been upgraded to 3.1, either manually or via the upgrade playbook.

WARNING: The playbooks will re-run cluster configuration steps, so any settings not stored in your inventory file will be overwritten. Back up any configuration files that have been modified since installation.

1. Modify the Ansible Inventory

The original Ansible inventory configuration for Pacemaker contains a VIP and a cluster hostname that resolves to that VIP. Native HA instead requires a cluster hostname that resolves to the load balancer in use.
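As a rough sketch of that inventory change (hostnames and addresses below are illustrative, not taken from a real inventory), the VIP-backed cluster hostname is replaced with one that resolves to the load balancer:

```ini
# Hypothetical [OSEv3:vars] fragment; example.com hosts are placeholders.

# Before (Pacemaker HA): cluster hostname resolves to the VIP.
# openshift_master_cluster_method=pacemaker
# openshift_master_cluster_vip=192.0.2.10
# openshift_master_cluster_hostname=cluster.example.com

# After (Native HA): cluster hostname resolves to the load balancer.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb.example.com
openshift_master_cluster_public_hostname=lb.example.com
```

After editing the inventory, re-running the config playbook applies the native HA configuration; remember that any settings not present in the inventory will be reset, per the warning above.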

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  creationTimestamp: 2019-02-13T19:17:07Z
  finalizers:
  - machine.cluster.k8s.io
  generateName: abutcher-hive-worker-us-east-1a-
  generation: 1
  labels:
    sigs.k8s.io/cluster-api-cluster: abutcher-hive
spec:
  replicas: 1
  selector:
    matchLabels:
      sigs.k8s.io/cluster-api-cluster: abutcher
      sigs.k8s.io/cluster-api-machineset: abutcher-infra-us-east-1a
  template:
    metadata:
      creationTimestamp: null
      labels:
DEBU[0000] debug logging enabled
INFO[0000] Registering Components.
INFO[0000] Starting the Cmd.
INFO[0000] reconciling machine sets for cluster deployment clusterDeployment=abutcher namespace=myproject
INFO[0000] reconciling cluster deployment clusterDeployment=abutcher namespace=myproject
ERRO[0000] unable to load admin kubeconfig clusterDeployment=abutcher namespace=myproject
DEBU[0000] service account already exists clusterDeployment=abutcher name=cluster-installer namespace=myproject
DEBU[0000] role already exists clusterDeployment=abutcher name=cluster-installer namespace=myproject
DEBU[0000] rolebinding already exists clusterDeployment=abutcher name=cluster-installer namespace=myproject
DEBU[0000] found delete after annotation: 8h clusterDeployment=abutcher namespace=myproject