Reference links:
https://gist.github.com/mhrivnak/b91d91754b7a5f7b9f992c77d504f1cb
https://github.com/openshift-telco/telco-gitops/blob/main/mgmt/mgmt.telco.shift.zone/config-assisted-installer/05-agentserviceconfig.yaml

LSO (Local Storage Operator):
https://docs.okd.io/latest/storage/persistent_storage/persistent-storage-local.html
https://github.com/openshift-telco/telco-gitops/blob/main/mgmt/mgmt.telco.shift.zone/01-master-create-lvs-for-lso.yaml
https://github.com/openshift-telco/telco-gitops/blob/main/mgmt/mgmt.telco.shift.zone/02-lso-localvolume-lvs.yaml

Lab Assisted Installer UI:
http://ai.jinkit.com:8888/clusters/5b8a2b00-b50c-443b-8b59-7f189a205045
There's a good link explaining what CRC is and how to get started, and you can also read through the official CRC documentation.
It's best to start the CRC environment with 6 vCPUs and 12G of RAM. This can be done with the following command:
crc start -c 6 -m 12288 -p /location/of/pull-secret.txt
NOTE: Be sure to include the location of your pull-secret.txt file, which can be downloaded from HERE.
Please read the general DNS setup information on the CRC project page. The key point to walk away with is that CRC uses a default domain (crc.testing), which works fine on your local Linux, macOS, or Windows workstation/laptop because during the CRC installation /etc/hosts entries (c:\windows\system32\drivers\etc\hosts on Windows, of course) are written to resolve the CRC endpoints. These entries look like the following:
127.0.0.1 localhost assisted-service-assisted-installer.apps-crc.testing api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
DNS Considerations:
Next, verify that hosts within your network can resolve the crc.testing names to your local machine. As an example, if the primary interface en0 on the host running CRC has the address 192.168.0.1, then make sure that other hosts resolve the CRC endpoints to this address. I had to do this by editing my lab BIND server with the following zone:
root@JINKITRPIDIRS01:~# cat /etc/bind/zones/db.testing
;
; BIND data file for local loopback interface
;
$TTL    604800
@       IN      SOA     ns1.testing. admin.testing. (
                        2106161450      ; Serial
                            604800      ; Refresh
                             86400      ; Retry
                           2419200      ; Expire
                            604800 )    ; Negative Cache TTL
;
; Name Servers Definitions
testing.    IN      NS      ns1.testing.
@           IN      NS      ns1.
; 192.168.1.0/24 - A Records: Name Servers
ns1                                                     IN      A       192.168.1.70
api.crc.testing.                                        IN      A       192.168.0.1
assisted-service-assisted-installer.apps-crc.testing.   IN      A       192.168.0.1
oauth-openshift.apps-crc.testing.                       IN      A       192.168.0.1

root@JINKITRPIDIRS01:~# cat /etc/bind/named.conf.local
zone "testing" {
    type master;
    file "/etc/bind/zones/db.testing"; # zone file path
    allow-transfer { 192.168.1.208; }; # ns2 private IP address - secondary
};
root@JINKITRPIDIRS01:~#
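After editing the zone file and named.conf.local, reload BIND so the new records are served. On this Raspbian host that looks something like the following; your service name or init system may differ:
sudo rndc reload            # or: sudo systemctl restart bind9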
This allows hosts in my network to reach my CRC endpoints:
bjozsa@JINKITRPIDIRS01 ~ $ dig @192.168.1.70 api.crc.testing
; <<>> DiG 9.10.3-P4-Raspbian <<>> @192.168.1.70 api.crc.testing
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16075
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;api.crc.testing. IN A
;; ANSWER SECTION:
api.crc.testing. 604800 IN A 192.168.0.1
;; AUTHORITY SECTION:
testing. 604800 IN NS ns1.testing.
testing. 604800 IN NS ns1.
;; ADDITIONAL SECTION:
ns1.testing. 604800 IN A 192.168.1.70
;; Query time: 2 msec
;; SERVER: 192.168.1.70#53(192.168.1.70)
;; WHEN: Fri Jun 18 16:14:45 UTC 2021
;; MSG SIZE rcvd: 111
bjozsa@JINKITRPIDIRS01 ~ $
IMPORTANT: The steps above are actually requirements! Make sure this is working before continuing.
Connectivity Considerations:
Make sure to test connectivity to your CRC cluster before moving on to the next steps. This can be done by following the example below:
bjozsa@JINKITRPIDIRS01 ~ $ curl -k https://api.crc.testing:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}bjozsa@JINKITRPIDIRS01 ~ $
IMPORTANT: If you cannot access the API endpoint for CRC, make sure your local firewall isn't blocking connections to port 6443. It's best to shut down any firewalls while running through this demo. If the firewall isn't the issue and you still cannot access the API endpoint, you may want to consider modifying the host-network-access setting via crc config.
❯ crc config get host-network-access
Configuration property 'host-network-access' is not set. Default value is 'false'
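If you do need to change it, the property can be set with crc config; enabling it is an assumption on my part, so adjust the value to suit your environment:
crc config set host-network-access true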
Once this modification has been made, restart CRC.
It's time to create the local CRC cluster and begin our demonstration. Start by downloading the Pull Secret. For the purposes of this demo, I'm going to download it straight into the working directory.
❯ pwd
/Users/bjozsa/Development/OpenShift/Solutions/AssistedInstaller/CRC-Demos
❯ ls -asl
total 8
0 drwxr-xr-x 4 bjozsa staff 128 Jun 18 12:41 .
0 drwxr-xr-x 11 bjozsa staff 352 Jun 18 12:27 ..
4 -rw-r--r-- 1 bjozsa staff 2727 Jun 18 12:41 pull-secret.txt
Now create the CRC local OpenShift environment. Set the core count, memory, and pull-secret as part of the deployment:
crc start -c 6 -m 12288 -p /Users/bjozsa/Development/OpenShift/Solutions/AssistedInstaller/CRC-Demos/pull-secret.txt
... REMOVED_OUTPUT
... REMOVED_OUTPUT
... REMOVED_OUTPUT
Started the OpenShift cluster.
The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: Uoj7L-m3XPH-paVq3-nTX8W

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443
Using the final output from the CRC dialog above, let's log into the cluster as the kubeadmin user:
eval $(crc oc-env)
oc login -u kubeadmin https://api.crc.testing:6443
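As a quick sanity check (nothing CRC-specific here, just standard oc commands), confirm the login worked and that the single CRC node is Ready:
oc whoami
oc get nodes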
The following variables will be used throughout this demonstration. You can use them exactly as written below, but please adapt them to your environment:
cat << EOF > ./environment.sh
RHCOS_VERSION="48.84.202105281935-0"
RHCOS_ROOTFS_URL="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.8.0-fc.7/rhcos-live-rootfs.x86_64.img"
RHCOS_ISO_URL="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.8.0-fc.7/rhcos-4.8.0-fc.7-x86_64-live.x86_64.iso"
ASSISTED_SVC_PVC="20Gi"
ASSISTED_SVC_IMG="quay.io/ocpmetal/assisted-service-index:latest"
ASSISTED_AGENT_LABEL="dc"
ASSISTED_AGENT_SELECTOR="charlotte"
CLUSTER_VERSION="4.8"
CLUSTER_IMAGE="quay.io/openshift-release-dev/ocp-release:4.8.0-fc.7-x86_64"
CLUSTER_NAME="ran-poc"
CLUSTER_DOMAIN="jinkit.com"
CLUSTER_NET_TYPE="OpenShiftSDN"
CLUSTER_CIDR_NET="10.128.0.0/14"
CLUSTER_CIDR_SVC="172.30.0.0/16"
CLUSTER_HOST_NET="192.168.3.0/24"
CLUSTER_HOST_PFX="23"
CLUSTER_WORKER_HT="Enabled"
CLUSTER_WORKER_COUNT="0"
CLUSTER_MASTER_HT="Enabled"
CLUSTER_MASTER_COUNT="0"
CLUSTER_SSHKEY='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDE1F7Fz3MGgOzst9h/2+5/pbeqCfFFhLfaS0Iu4Bhsr7RenaTdzVpbT+9WpSrrjdxDK9P3KProPwY2njgItOEgfJO6MnRLE9dQDzOUIQ8caIH7olzxy60dblonP5A82EuVUnZ0IGmAWSzUWsKef793tWjlRxl27eS1Bn8zbiI+m91Q8ypkLYSB9MMxQehupfzNzJpjVfA5dncZ2S7C8TFIPFtwBe9ITEb+w2phWvAE0SRjU3rLXwCOWHT+7NRwkFfhK/moalPGDIyMjATPOJrtKKQtzSdyHeh9WyKOjJu8tXiM/4jFpOYmg/aMJeGrO/9fdxPe+zPismC/FaLuv0OACgJ5b13tIfwD02OfB2J4+qXtTz2geJVirxzkoo/6cKtblcN/JjrYjwhfXR/dTehY59srgmQ5V1hzbUx1e4lMs+yZ78Xrf2QO+7BikKJsy4CDHqvRdcLlpRq1pe3R9oODRdoFZhkKWywFCpi52ioR4CVbc/tCewzMzNSKZ/3P0OItBi5IA5ex23dEVO/Mz1uyPrjgVx/U2N8J6yo9OOzX/Gftv/e3RKwGIUPpqZpzIUH/NOdeTtpoSIaL5t8Ki8d3eZuiLZJY5gan7tKUWDAL0JvJK+EEzs1YziBh91Dx1Yit0YeD+ztq/jOl0S8d0G3Q9BhwklILT6PuBI2nAEOS0Q=='
CLUSTER_DEPLOYMENT="$CLUSTER_NAME"
CLUSTER_IMG_REF="openshift-v4.8.0"
MANIFEST_DIR="./deploy-$CLUSTER_NAME"
SSH_PRIVATE_KEY="$HOME/.ssh/id_rsa"
EOF
Next, source the file so we can use the variables while writing out the required manifests in the next steps.
source ./environment.sh
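If you want to confirm the file sourced cleanly, echo a few of the values defined above:
echo "$CLUSTER_NAME $CLUSTER_VERSION $CLUSTER_IMAGE"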
Using the variables sourced above, let's write each of the manifests that will be used for the rest of the demo. Start by creating a few directories to keep everything organized:
mkdir -p $MANIFEST_DIR/01-operators
mkdir -p $MANIFEST_DIR/02-storage
mkdir -p $MANIFEST_DIR/02-config
mkdir -p $MANIFEST_DIR/03-deployment
Now copy and paste the following to write each of the infrastructure-related manifests (operators, local-storage, etc.):
cat << EOF > $MANIFEST_DIR/01-operators/01-local-storage-operator.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-local-storage
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "4.7"
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Before continuing, capture the hostname of the CRC AIO node in a variable. Use the following command to do this:
CRC_HOSTNAME=$(oc get no -o jsonpath="{.items[0].metadata.name}")
echo $CRC_HOSTNAME
Next write out the manifests for deploying a local-storage discovery agent, and storageclass:
cat << EOF > $MANIFEST_DIR/02-storage/01-local-storage-discovery.yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - $CRC_HOSTNAME
EOF
cat << EOF > $MANIFEST_DIR/02-storage/02-local-storage-storageclass.yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-vs
  namespace: openshift-local-storage
spec:
  deviceInclusionSpec:
    deviceTypes:
      - disk
      - part
    minSize: 1Gi
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - $CRC_HOSTNAME
  storageClassName: local-sc
  volumeMode: Block
EOF
Let's get the local-storage operator installed:
oc create -f $MANIFEST_DIR/01-operators/01-local-storage-operator.yaml
Verify:
❯ oc get pods -n openshift-local-storage
NAME READY STATUS RESTARTS AGE
local-storage-operator-855dc65869-596l7 1/1 Running 0 34m
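If the operator pod hasn't appeared yet, the OLM objects in the namespace usually show why. These are standard OLM checks; the CSV name will vary with the catalog version:
oc get subscription,installplan,csv -n openshift-local-storage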
Now create the discovery agent and storageclass:
oc create -f $MANIFEST_DIR/02-storage/01-local-storage-discovery.yaml
oc create -f $MANIFEST_DIR/02-storage/02-local-storage-storageclass.yaml
Verify:
❯ oc get pods -n openshift-local-storage
NAME READY STATUS RESTARTS AGE
diskmaker-discovery-rm7pb 1/1 Running 0 15m
diskmaker-manager-k9nxd 1/1 Running 0 14m
local-storage-operator-855dc65869-596l7 1/1 Running 0 34m
❯ oc get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-sc kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 17m
Make the storageclass the default:
oc patch storageclass local-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify:
❯ oc get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-sc (default) kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 18m
Write the files for the Hive operator, the assisted-installer namespace, and the corresponding CatalogSource:
cat << EOF > $MANIFEST_DIR/01-operators/02-hive-operator.yaml
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hive-operator
  namespace: openshift-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: hive-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
cat << EOF > $MANIFEST_DIR/01-operators/03-assisted-installer-catsource.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: assisted-installer
  labels:
    name: assisted-installer
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: assisted-service
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: $ASSISTED_SVC_IMG
EOF
Deploy the Hive operator and the CatalogSource:
oc create -f $MANIFEST_DIR/01-operators/02-hive-operator.yaml
oc create -f $MANIFEST_DIR/01-operators/03-assisted-installer-catsource.yaml
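Give OLM a minute or two to pull the index image and install Hive. A couple of quick checks (the pod and CSV names will differ in your environment):
oc get catalogsource assisted-service -n openshift-marketplace
oc get pods -n openshift-marketplace | grep assisted-service
oc get csv -n openshift-operators | grep -i hive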
Now write the manifest for the Assisted-Service operator:
cat << EOF > $MANIFEST_DIR/01-operators/04-assisted-installer-operator.yaml
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: assisted-service-operator
  namespace: assisted-installer
spec:
  targetNamespaces:
    - assisted-installer
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: assisted-service-operator
  namespace: assisted-installer
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: assisted-service-operator
  source: assisted-service
  sourceNamespace: openshift-marketplace
EOF
Deploy the Assisted-Service operator:
oc create -f $MANIFEST_DIR/01-operators/04-assisted-installer-operator.yaml
Verify:
❯ oc get pods -n assisted-installer
NAME READY STATUS RESTARTS AGE
assisted-service-operator-568588f6cc-r8w9r 1/1 Running 0 37s
This next part is very important: we will be creating an AgentServiceConfig for the Assisted-Service. Our primary objective here is to define the versions of OpenShift that will be available for your deployments. This information can be found at mirror.openshift.com. To get the RHCOS version (i.e. 48.84.202105281935-0), look at the release.txt that is included with every release, find the machine-os value, and map it to the osImages.version key in the example manifest below (01-assisted-installer-agentserviceconfig.yaml). Be aware that you can make multiple OpenShift osImages available, each distinguished by the openshiftVersion key.
Knowing this information, write out the AgentServiceConfig manifest:
cat << EOF > $MANIFEST_DIR/02-config/01-assisted-installer-agentserviceconfig.yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  databaseStorage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: $ASSISTED_SVC_PVC
  filesystemStorage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: $ASSISTED_SVC_PVC
  osImages:
    - openshiftVersion: "$CLUSTER_VERSION"
      rootFSUrl: >-
        $RHCOS_ROOTFS_URL
      url: >-
        $RHCOS_ISO_URL
      version: $RHCOS_VERSION
EOF
Deploy this manifest now:
oc create -f $MANIFEST_DIR/02-config/01-assisted-installer-agentserviceconfig.yaml
Verify:
❯ oc get pods -n assisted-installer
NAME READY STATUS RESTARTS AGE
assisted-service-6b945795d7-wjhjd 2/2 Running 0 43s
assisted-service-operator-568588f6cc-r8w9r 1/1 Running 0 57s
IMPORTANT: If the assisted-service pods remain in a Pending state, it's because PersistentVolumes aren't being created automatically. You can create these PVs with the following manifests:
cat << EOF > $MANIFEST_DIR/02-storage/03-local-pv-ai.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-ai
  labels:
    volume: pv0002
spec:
  capacity:
    storage: 100Gi
  hostPath:
    path: /mnt/pv-data/pv0002
    type: ''
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
    - ReadOnlyMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: assisted-installer
    name: assisted-service
    apiVersion: v1
  persistentVolumeReclaimPolicy: Recycle
  volumeMode: Filesystem
EOF
cat << EOF > $MANIFEST_DIR/02-storage/04-local-pv-postgres.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
  labels:
    volume: pv0003
spec:
  capacity:
    storage: 100Gi
  hostPath:
    path: /mnt/pv-data/pv0003
    type: ''
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
    - ReadOnlyMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: assisted-installer
    name: postgres
    apiVersion: v1
  persistentVolumeReclaimPolicy: Recycle
  volumeMode: Filesystem
EOF
Deploy the PVs with the following commands:
oc create -f $MANIFEST_DIR/02-storage/03-local-pv-ai.yaml
oc create -f $MANIFEST_DIR/02-storage/04-local-pv-postgres.yaml
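The claims should then bind and the assisted-service pods should leave the Pending state. The claim names checked below are the ones referenced in the claimRef fields above:
oc get pv pv-ai pv-postgres
oc get pvc -n assisted-installer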
Now it's time to create the ClusterImageSet manifest:
cat << EOF > $MANIFEST_DIR/02-config/02-assisted-installer-clusterimageset.yaml
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: $CLUSTER_IMG_REF
  namespace: assisted-installer
spec:
  releaseImage: $CLUSTER_IMAGE
EOF
oc create -f $MANIFEST_DIR/02-config/02-assisted-installer-clusterimageset.yaml
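A quick check that the image set registered:
oc get clusterimageset $CLUSTER_IMG_REF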
Next it's time to create a pull-secret, which the bare-metal clusters will use to pull images during the installation process. Make sure that your pull-secret.txt is in the current working directory, then write and deploy the secret manifest:
PULL_SECRET=$(cat pull-secret.txt | jq -R .)
cat << EOF > $MANIFEST_DIR/02-config/03-assisted-installer-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: assisted-deployment-pull-secret
  namespace: assisted-installer
stringData:
  .dockerconfigjson: $PULL_SECRET
EOF
oc create -f $MANIFEST_DIR/02-config/03-assisted-installer-secrets.yaml
Or simply do the following, using your own pull-secret.txt file acquired from cloud.redhat.com:
oc create secret generic assisted-deployment-pull-secret -n assisted-installer --from-file=.dockerconfigjson=/Users/bjozsa/Development/code/personal/github.com/working/pull-secret.txt --type=kubernetes.io/dockerconfigjson
Next you'll want to create a secret that includes your private SSH key:
oc create secret generic assisted-deployment-ssh-private-key \
-n assisted-installer \
--from-file=ssh-privatekey=$SSH_PRIVATE_KEY
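Both secrets should now exist in the assisted-installer namespace:
oc get secret assisted-deployment-pull-secret assisted-deployment-ssh-private-key -n assisted-installer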
Now we're going to create our deployment for a test cluster called ran-poc. This is what the deployment will look like:
cat << EOF > $MANIFEST_DIR/03-deployment/01-assisted-installer-agentclusterinstall.yaml
---
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: $CLUSTER_NAME-aci
  namespace: assisted-installer
spec:
  clusterDeploymentRef:
    name: $CLUSTER_NAME
  imageSetRef:
    name: $CLUSTER_IMG_REF
  networking:
    clusterNetwork:
      - cidr: "$CLUSTER_CIDR_NET"
        hostPrefix: $CLUSTER_HOST_PFX
    serviceNetwork:
      - "$CLUSTER_CIDR_SVC"
    machineNetwork:
      - cidr: "$CLUSTER_HOST_NET"
  provisionRequirements:
    controlPlaneAgents: 1
  sshPublicKey: "$CLUSTER_SSHKEY"
EOF
Deploy this cluster config:
oc create -f $MANIFEST_DIR/03-deployment/01-assisted-installer-agentclusterinstall.yaml
Next create the ClusterDeployment manifest, and deploy it to the cluster:
cat << EOF > $MANIFEST_DIR/03-deployment/02-assisted-installer-clusterdeployment.yaml
---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: $CLUSTER_NAME
  namespace: assisted-installer
spec:
  baseDomain: $CLUSTER_DOMAIN
  clusterName: $CLUSTER_NAME
  controlPlaneConfig:
    servingCertificates: {}
  installed: false
  clusterInstallRef:
    group: extensions.hive.openshift.io
    kind: AgentClusterInstall
    name: $CLUSTER_NAME-aci
    version: v1beta1
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          $ASSISTED_AGENT_LABEL: "$ASSISTED_AGENT_SELECTOR"
  pullSecretRef:
    name: assisted-deployment-pull-secret
EOF
oc create -f $MANIFEST_DIR/03-deployment/02-assisted-installer-clusterdeployment.yaml
Finally, create the InfraEnv manifest and deploy it to the cluster:
cat << EOF > $MANIFEST_DIR/03-deployment/03-assisted-installer-infraenv.yaml
---
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: $CLUSTER_NAME-infraenv
  namespace: assisted-installer
spec:
  clusterRef:
    name: $CLUSTER_NAME
    namespace: assisted-installer
  sshAuthorizedKey: "$CLUSTER_SSHKEY"
  agentLabelSelector:
    matchLabels:
      $ASSISTED_AGENT_LABEL: $ASSISTED_AGENT_SELECTOR
  pullSecretRef:
    name: assisted-deployment-pull-secret
EOF
oc create -f $MANIFEST_DIR/03-deployment/03-assisted-installer-infraenv.yaml
You can obtain the discovery ISO with the following commands:
DOWNLOAD_URL=$(oc get infraenv $CLUSTER_NAME-infraenv -o jsonpath='{.status.isoDownloadURL}' -n assisted-installer)
curl -k -L "$DOWNLOAD_URL" -o ai-install.iso
Load the ISO image onto a bare-metal system and boot from it.
IMPORTANT: For 4.8.0-fc.x (any pre-release of OpenShift) you must disable UEFI Secure Boot features. This is not a bug, and it is expected to fail with secure boot enabled. Production releases of OpenShift will operate just fine with UEFI Secure Boot enabled (as expected).
Once the deployment has been initiated, you can verify that it has checked into the Assisted-Service API with the following command:
oc get agentclusterinstalls $CLUSTER_NAME-aci -o json -n assisted-installer | jq '.status.conditions[]'
Additionally, you should see similar output when running the following command:
oc get agents.agent-install.openshift.io -n assisted-installer -o=jsonpath='{range .items[*]}{"\n"}{.spec.clusterDeploymentName.name}{"\n"}{.status.inventory.hostname}{"\n"}{range .status.conditions[*]}{.type}{"\t"}{.message}{"\n"}{end}'
ran-poc
worker03
SpecSynced The Spec has been successfully applied
Connected The agent's connection to the installation service is unimpaired
ReadyForInstallation The agent is not approved
Validated The agent's validations are passing
Installed The installation has not yet started
This means that your SNO cluster is ready to deploy! But first, determine the agent's ID with the following command:
❯ oc get agents.agent-install.openshift.io -n assisted-installer
NAME CLUSTER APPROVED
dfaab9a4-964c-68c1-b765-8efcdd53ac7f ran-poc false
Then set an AGENT_ID variable (using the NAME from the output above) and patch the agent to "approve" the installation to proceed:
AGENT_ID=dfaab9a4-964c-68c1-b765-8efcdd53ac7f
oc -n assisted-installer patch agents.agent-install.openshift.io $AGENT_ID -p '{"spec":{"approved":true}}' --type merge
Finally, you should notice that the installation has begun and is proceeding:
❯ oc get agents.agent-install.openshift.io -n assisted-installer -o=jsonpath='{range .items[*]}{"\n"}{.spec.clusterDeploymentName.name}{"\n"}{.status.inventory.hostname}{"\n"}{range .status.conditions[*]}{.type}{"\t"}{.message}{"\n"}{end}'
ran-poc
worker03
SpecSynced The Spec has been successfully applied
Connected The agent's connection to the installation service is unimpaired
ReadyForInstallation The agent cannot begin the installation because it has already started
Validated The agent's validations are passing
Installed The installation is in progress: Host is preparing for installation
Be patient - the cluster is now installing. You can check the progress by logging into the host as the core user and running journalctl -f to watch the logs while the host comes online.
The last item of interest is collecting the kubeadmin credentials and the kubeconfig. These can be obtained with the following commands:
mkdir -p auth
oc get secret -n assisted-installer $CLUSTER_NAME-admin-kubeconfig -o json | jq -r '.data.kubeconfig' | base64 -d > auth/$CLUSTER_NAME-admin-kubeconfig
oc get secret -n assisted-installer $CLUSTER_NAME-admin-password -o json | jq -r '.data.password' | base64 -d > auth/$CLUSTER_NAME-admin-password
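Once the installation finishes, you can point oc at the new SNO cluster using the retrieved kubeconfig; the password file holds the kubeadmin login for its web console:
export KUBECONFIG=auth/$CLUSTER_NAME-admin-kubeconfig
oc get nodes
oc get clusterversion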