@mhrivnak
Last active June 10, 2021 19:26
Add Workers with Agents

Experiment

I’ve been experimenting with running a hub cluster that includes:

  • Hive installed from community-operators
  • Assisted Installer installed from community-operators
  • Hypershift installed per its instructions

A Hypershift control plane was created, with the goal of adding a worker Node to it using assisted-service. The experiment used a boot-it-yourself approach: the discovery ISO was started manually in a libvirt VM.

Prerequisites

Namespace

oc new-project clusters

Pull Secret

Create a Secret containing your pull secret. It should look like this:

apiVersion: v1
kind: Secret
metadata:
  name: pullsecret
  namespace: clusters
stringData:
  .dockerconfigjson: '<your pull secret here>'
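
Alternatively, if the pull secret is saved in a local file (the path pull-secret.json below is an assumption), the same Secret can be created with oc instead of writing YAML by hand:

```shell
# Hypothetical path to a pull secret downloaded from the Red Hat console
PULL_SECRET_FILE=pull-secret.json

# Create the Secret in the clusters namespace. Add --dry-run=client -o yaml
# first if you want to inspect what would be created.
oc create secret generic pullsecret \
  --namespace clusters \
  --type kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson="${PULL_SECRET_FILE}"
```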

Storage

Assisted Service requires persistent storage. This scenario uses the Local Storage Operator; see the LocalVolume resources below.

Operators

Use OpenShift to install Hive and Assisted Installer from community-operators.

Create:

  • AgentServiceConfig
  • HiveConfig

DNS

For this experiment I created a CNAME record for the Hypershift control plane's API that points at the hub cluster.

api.demohub.ocp.home.    IN     A   192.168.17.110
*.apps.demohub.ocp.home. IN     A   192.168.17.110

api.hs0.ocp.home.        IN     CNAME   api.demohub.ocp.home.
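
Resolution can be spot-checked before creating any resources (hostnames match the records above):

```shell
# Both names should resolve to the hub cluster's address
dig +short api.demohub.ocp.home
dig +short api.hs0.ocp.home   # follows the CNAME back to the hub
```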

Layout

Namespace clusters:

  • HostedCluster
  • NodePool
  • InfraEnv

Namespace clusters-hs0 (automatically created by Hive):

  • ClusterDeployment
  • AgentClusterInstall

Process

  1. Create pull secret, ssh-key, HostedCluster, NodePool
  2. Create ClusterDeployment and AgentClusterInstall
  3. Create InfraEnv
  4. Download the ISO from the InfraEnv's URL and boot it somewhere. Optionally add the kernel arg ip=dhcp to work around BZ 1967632.
  5. Wait for the Agent resource to appear
  6. Edit the Agent resource. Set: approved=true, role=worker, hostname=something
  7. Watch installation begin
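
Steps 5 and 6 can be sketched with oc. The Agent name and hostname below are placeholders; everything else matches the fields set in step 6:

```shell
# 5. Wait for the Agent resource to appear in the InfraEnv's namespace
oc get agents -n clusters --watch

# 6. Approve the agent and set its role and hostname. AGENT is the
#    UUID-style name reported by the previous command; the hostname
#    here is just an example value.
AGENT=<agent-name>
oc patch agent "${AGENT}" -n clusters --type merge \
  -p '{"spec":{"approved":true,"role":"worker","hostname":"worker0.hs0.ocp.home"}}'
```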
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
  namespace: assisted-service
  annotations:
    unsupported.agent-install.openshift.io/assisted-service-configmap: "assisted-overrides"
spec:
  databaseStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: db
  filesystemStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: isos
  osImages:
  - openshiftVersion: "4.8"
    rootFSUrl: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.8.0-fc.3/rhcos-live-rootfs.x86_64.img
    url: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.8.0-fc.3/rhcos-4.8.0-fc.3-x86_64-live.x86_64.iso
    version: 48.84.202105062123-0
---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: hs0
  namespace: clusters-hs0
spec:
  baseDomain: ocp.home
  clusterName: hs0
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          cluster: hs0
  clusterMetadata:
    clusterID: ""
    infraID: hs0
    adminKubeconfigSecretRef:
      name: admin-kubeconfig
    adminPasswordSecretRef:
      name: kubeadmin-password
  clusterInstallRef:
    group: extensions.hive.openshift.io
    version: v1beta1
    kind: AgentClusterInstall
    name: hs0
  pullSecretRef:
    name: pull-secret
  installed: true
---
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: hs0
  namespace: clusters-hs0
spec:
  clusterDeploymentRef:
    name: hs0
  imageSetRef:
    name: openshift-v4.8.0
  networking: {}
  provisionRequirements:
    controlPlaneAgents: 0
---
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-v4.8.0
spec:
  releaseImage: "quay.io/openshift-release-dev/ocp-release:4.8.0-fc.3-x86_64"
---
apiVersion: hive.openshift.io/v1
kind: HiveConfig
metadata:
  name: hive
spec:
  featureGates:
    custom:
      enabled:
      - AlphaAgentInstallStrategy
    featureSet: Custom
---
apiVersion: hypershift.openshift.io/v1alpha1
kind: HostedCluster
metadata:
  name: hs0
  namespace: clusters
spec:
  release:
    image: "quay.io/openshift-release-dev/ocp-release:4.8.0-fc.3-x86_64"
  pullSecret:
    name: "pullsecret"
  sshKey:
    name: "sshkey"
  networking:
    serviceCIDR: "172.31.0.0/16"
    podCIDR: "10.132.0.0/14"
    machineCIDR: "192.168.17.0/24"
  platform:
    type: "None"
  infraID: "hs0"
  dns:
    baseDomain: "ocp.home"
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: NodePort
      nodePort:
        address: api.hs0.ocp.home
  - service: VPN
    servicePublishingStrategy:
      type: NodePort
      nodePort:
        address: api.hs0.ocp.home
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
---
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: hs0
  namespace: clusters
spec:
  clusterRef:
    name: hs0
    namespace: clusters-hs0
  agentLabelSelector:
    matchLabels:
      cluster: hs0
  pullSecretRef:
    name: pullsecret
  sshAuthorizedKey: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA7uDnJrx1aYrNmP48y9lMuyuVRJWGH0uk+LJrg/qTLMZyj1MoXLydb2+x5qDPYYE7Lg/y5F1MHSo24sdry8Q6vYx+lTsAB8tTgA8/u2m30o7z2gM/1t93NL3KNhmSh4ZjcF6ymtkX7x/O7OlCtgg05iyeG6V2TzYDLcA1s6yFnJE= mhrivnak@hrivnak.org"
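
Once the InfraEnv is reconciled, its status carries the discovery ISO URL needed for step 4 of the process above. A minimal fetch, assuming the isoDownloadURL status field exposed by assisted-service:

```shell
# Read the discovery ISO URL from the InfraEnv status
ISO_URL=$(oc get infraenv hs0 -n clusters -o jsonpath='{.status.isoDownloadURL}')

# Download the ISO for attaching to a libvirt VM
curl -L -o discovery.iso "${ISO_URL}"
```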
---
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: db
  namespace: openshift-local-storage
spec:
  storageClassDevices:
  - devicePaths:
    - /dev/vdc
    fsType: xfs
    storageClassName: db
    volumeMode: Filesystem
---
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: isos
  namespace: openshift-local-storage
spec:
  storageClassDevices:
  - devicePaths:
    - /dev/vdb
    fsType: xfs
    storageClassName: isos
    volumeMode: Filesystem
---
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  name: worker
  namespace: clusters
spec:
  clusterName: hs0
  nodeCount: 0
  nodePoolManagement:
    autoRepair: false
    maxSurge: 0
    maxUnavailable: 0
  platform:
    type: None
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.8.0-fc.3-x86_64
status:
  conditions: null
  nodeCount: 0
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: assisted-overrides
  namespace: assisted-installer
data:
  LOG_LEVEL: "info"
  ENABLE_KUBE_API_DAY2: "true"
  HW_VALIDATOR_REQUIREMENTS: "[{\"version\":\"default\",\"master\":{\"cpu_cores\":4,\"ram_mib\":16384,\"disk_size_gb\":20,\"installation_disk_speed_threshold_ms\":10},\"worker\":{\"cpu_cores\":2,\"ram_mib\":8192,\"disk_size_gb\":20,\"installation_disk_speed_threshold_ms\":10},\"sno\":{\"cpu_cores\":8,\"ram_mib\":32768,\"disk_size_gb\":120,\"installation_disk_speed_threshold_ms\":10}}]"
---
apiVersion: v1
kind: Secret
metadata:
  name: sshkey
  namespace: clusters
stringData:
  id_rsa.pub: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA7uDnJrx1aYrNmP48y9lMuyuVRJWGH0uk+LJrg/qTLMZyj1MoXLydb2+x5qDPYYE7Lg/y5F1MHSo24sdry8Q6vYx+lTsAB8tTgA8/u2m30o7z2gM/1t93NL3KNhmSh4ZjcF6ymtkX7x/O7OlCtgg05iyeG6V2TzYDLcA1s6yFnJE= mhrivnak@hrivnak.org"