- Get the IIB index image from the CPaaS build, e.g. `registry-proxy.engineering.redhat.com/rh-osbs/iib:148577`. From this image reference, use the tag with `brew.registry.redhat.io/rh-osbs/iib` (e.g. `brew.registry.redhat.io/rh-osbs/iib:148577`) to enable the CatalogSource:

```sh
export IIB_IMAGE="brew.registry.redhat.io/rh-osbs/iib:<tag>"
```
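Since the brew reference reuses the tag from the CPaaS reference, the export above can also be derived with shell parameter expansion rather than copying the tag by hand. A small sketch, using the example tag from above:

```shell
# Index image reference from the CPaaS build (example tag from above)
CPAAS_IMAGE="registry-proxy.engineering.redhat.com/rh-osbs/iib:148577"

# Everything after the last ':' is the tag
TAG="${CPAAS_IMAGE##*:}"

export IIB_IMAGE="brew.registry.redhat.io/rh-osbs/iib:${TAG}"
echo "$IIB_IMAGE"
```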
- Add Brew pull secrets according to https://docs.engineering.redhat.com/display/CFC/Test. Assuming you've created a token in the past (and have only one) and want to reuse it, the process is:

```sh
# Assuming you've run kinit
TOKEN_USERNAME=$(curl --negotiate -u : https://employee-token-manager.registry.redhat.com/v1/tokens -s | jq -r '.[].credentials.username')
PASSWORD=$(curl --negotiate -u : https://employee-token-manager.registry.redhat.com/v1/tokens -s | jq -r '.[].credentials.password')
oc get secret/pull-secret -n openshift-config -o json | jq -r '.data.".dockerconfigjson"' | base64 -d > authfile
echo "$PASSWORD" | podman login --authfile authfile --username "$TOKEN_USERNAME" --password-stdin brew.registry.redhat.io
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=authfile
rm authfile
```
- Create an ImageContentSourcePolicy to allow pulling the IIB:

```sh
cat <<EOF | oc apply -f -
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: brew-registry
spec:
  repositoryDigestMirrors:
  - mirrors:
    - brew.registry.redhat.io
    source: registry.redhat.io
  - mirrors:
    - brew.registry.redhat.io
    source: registry.stage.redhat.io
  - mirrors:
    - brew.registry.redhat.io
    source: registry-proxy.engineering.redhat.com
EOF
```
- Add a CatalogSource for the IIB from above:

```sh
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: devworkspace-operator-testing-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: ${IIB_IMAGE}
  publisher: DWO Testing
  displayName: DevWorkspace Operator Catalog
EOF
```
- Install an existing DWO release from the Red Hat Catalog -- e.g. if you're testing v0.11.0, install v0.10.0 from the Red Hat Catalog:

```sh
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: devworkspace-operator
  namespace: openshift-operators
spec:
  channel: fast
  installPlanApproval: Manual
  name: devworkspace-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```
- Approve the install of DWO and wait for it to succeed
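Since the Subscription uses `installPlanApproval: Manual`, the install waits on a pending InstallPlan. Besides approving it in the OperatorHub UI, it can be approved from the CLI; a sketch using the generic OLM objects (check that the pending plan actually belongs to devworkspace-operator before patching):

```shell
# Find the unapproved InstallPlan in the operator namespace
PLAN=$(oc get installplan -n openshift-operators \
  -o jsonpath='{.items[?(@.spec.approved==false)].metadata.name}')

# Approve it so OLM proceeds with the install
oc patch installplan "$PLAN" -n openshift-operators \
  --type merge -p '{"spec":{"approved":true}}'

# Watch the CSV until its phase reaches Succeeded
oc get csv -n openshift-operators -w
```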
- Install WTO from the Red Hat catalog (approve the install in the OperatorHub UI):

```sh
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: web-terminal
  namespace: openshift-operators
spec:
  channel: fast
  installPlanApproval: Manual
  name: web-terminal
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```
- Start a sample DevWorkspace (e.g. `oc apply -f samples/theia-next.yaml`) and wait for it to enter the `Running` state. Start a Web Terminal from the OpenShift Console UI and wait for it to start.
- Edit the DWO Subscription to switch its catalogSource to the testing CatalogSource. After this is done, an update should be available in the OperatorHub UI; approve the upgrade there:

```sh
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: devworkspace-operator
  namespace: openshift-operators
spec:
  channel: fast
  installPlanApproval: Manual
  name: devworkspace-operator
  source: devworkspace-operator-testing-catalog
  sourceNamespace: openshift-marketplace
EOF
```
- Check that the update succeeds and that workspaces created earlier are unaffected
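A couple of quick CLI checks for this step (assuming the default `openshift-operators` install namespace used above):

```shell
# The DWO CSV should show PHASE: Succeeded at the new version
oc get csv -n openshift-operators

# Workspaces created before the upgrade should still report Running
oc get dw --all-namespaces
```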
- Delete all workspaces: `oc delete dw --all --all-namespaces`
- Create a DevWorkspace: `oc apply -f samples/code-latest.yaml`
- Wait for it to enter the `Running` phase: `oc get dw -w`
- Check that project-clone succeeded as expected: `oc logs -f workspace<random-string> -c project-clone`
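Since the workspace pod name contains a generated ID, it can be easier to select the pod by label instead of copying the name. A sketch, assuming DWO's `controller.devfile.io/devworkspace_name` pod label (verify the label on your cluster with `oc get po --show-labels` first):

```shell
# Follow project-clone logs for the workspace named code-latest,
# selecting the pod by label rather than by generated name
oc logs -f -c project-clone \
  -l controller.devfile.io/devworkspace_name=code-latest
```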
- Open the workspace in a browser and verify that the editor works as expected. Note that opening terminals may fail if the cluster's certificate isn't added to the browser's trust store.
- Stop the workspace: `oc patch dw code-latest --type merge -p '{"spec": {"started": false}}'`
- Start the workspace again: `oc patch dw code-latest --type merge -p '{"spec": {"started": true}}'`
- Once started, verify that the workspace is intact, changes are persisted, and the project-clone container logs state that the project is already cloned
- Create a secondary workspace to prevent removal of the common workspace PVC:

```sh
cat samples/code-latest.yaml | yq '.metadata.name = "code-latest-2"' | oc apply -f -
```
- Delete the first DevWorkspace: `oc delete dw code-latest`
- Check that the PVC is cleaned up after deletion:
  - Create a pod that mounts the common PVC:

```sh
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: "cleanup"
  labels:
    app: cleanup
spec:
  containers:
    - image: quay.io/fedora/fedora:34
      name: "1234"
      volumeMounts:
        - mountPath: /tmp/claim-devworkspace
          name: claim-devworkspace
      command: ["tail"]
      args: ["-f", "/dev/null"]
      resources:
        limits:
          memory: 100Mi
  volumes:
    - name: claim-devworkspace
      persistentVolumeClaim:
        claimName: claim-devworkspace
EOF
```

  - Once the Pod is running, check for files in the common PVC: `oc exec cleanup -- /bin/bash -c 'ls /tmp/claim-devworkspace'`
  - There should be files from the second workspace, but no folder matching the first workspace's DevWorkspace ID
  - Delete the Pod: `oc delete po cleanup`
- Delete the second DevWorkspace: `oc delete dw code-latest-2`
- Verify that the common DevWorkspace PVC is removed: `oc get pvc claim-devworkspace` should fail, as the PVC no longer exists (this may take a few seconds after deletion)
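Instead of polling `oc get pvc` by hand, the deletion can be awaited directly; a sketch using `oc wait`, which exits non-zero if the object still exists after the timeout:

```shell
# Block until the PVC object is gone (fails after 60s if it persists)
oc wait pvc/claim-devworkspace --for=delete --timeout=60s
```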
Note: to test the web terminal idle timeout, it's convenient to set a shorter timeout duration via the DWO config (e.g. 1m):

```sh
cat <<EOF | oc apply -f -
apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config
  namespace: openshift-operators
config:
  workspace:
    idleTimeout: 60s
EOF
```
- Start a web terminal as a regular (non-cluster-admin) user and verify:
  - The namespace selector appears when creating the terminal
  - The terminal starts up and a command prompt appears in the browser
  - The user is logged in to OpenShift and has access to the cluster (`oc get po`)
  - The terminal idles automatically after the timeout: if the browser is left idle for 60s (see config above), the terminal panel should show "The terminal connection has closed due to inactivity."
- Start a web terminal as a cluster-admin user and verify:
  - No namespace selector is available when creating the terminal
  - The terminal starts up and a command prompt appears in the browser
  - The user is logged in to OpenShift and has access to the cluster (`oc get po`)
  - The terminal's namespace is `openshift-terminal`
  - The terminal idles automatically after the timeout: if the browser is left idle for 60s (see config above), the terminal panel should show "The terminal connection has closed due to inactivity."