Using OpenShift Release with alternative GCP account

There are a few prerequisites to complete before running make:

  • Clone openshift/release and openshift/shared-secrets
  • Create a new service account
  • Create a new subdomain that GCE will manage
  • Modify gcp-dev/vars.yaml

Git Clone

git clone https://github.com/openshift/release
git clone https://github.com/openshift/shared-secrets

Create a new service account

  • Navigate to IAM & admin
  • Create Service Account
    • Provide service account name
    • Check furnish a new private key
    • Role: Project - Editor

Download and save the service account JSON file; it will be used later.
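If you prefer the CLI, the same service account can be created with gcloud; a rough sketch (the account name openshift-release is just an example):

# Example only: substitute your own project and service account name
PROJECT=cnvlab-209908
SA_NAME=openshift-release

gcloud iam service-accounts create ${SA_NAME} \
    --display-name "OpenShift release" --project ${PROJECT}

# Role: Project - Editor
gcloud projects add-iam-policy-binding ${PROJECT} \
    --member "serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
    --role "roles/editor"

# Download a JSON key; this is the file used later as gce.json
gcloud iam service-accounts keys create ~/Downloads/${SA_NAME}.json \
    --iam-account "${SA_NAME}@${PROJECT}.iam.gserviceaccount.com"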

Cloud DNS

To create a Cloud DNS managed zone:

  • Navigate to Cloud DNS: Network services -> Cloud DNS
  • Click "Create Zone"
  • Provide appropriate values

The next screen will provide the name servers that should be used in your existing DNS configuration for the subdomain.
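The zone can also be created from the CLI; a sketch using the names from this example:

gcloud dns managed-zones create virtomation-com \
    --dns-name "gce.virtomation.com." \
    --description "Subdomain delegated to Cloud DNS for OpenShift"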

Results using the gcloud CLI

jcallen@cnvlab-209908:~$ gcloud dns managed-zones list
NAME                      DNS_NAME                   DESCRIPTION
virtomation-com           gce.virtomation.com.

Create a subdomain

Delegate a subdomain for GCE in your existing DNS to the Cloud DNS name servers. For this example I used a domain that I own.

$ host -t NS gce.virtomation.com
gce.virtomation.com name server ns-cloud-b4.googledomains.com.
gce.virtomation.com name server ns-cloud-b1.googledomains.com.
gce.virtomation.com name server ns-cloud-b2.googledomains.com.
gce.virtomation.com name server ns-cloud-b3.googledomains.com.
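For reference, the delegation is just NS records for the subdomain in the parent zone; a BIND-style sketch (your name servers will be the ones Cloud DNS assigned to your zone):

; parent zone: virtomation.com
gce    IN    NS    ns-cloud-b1.googledomains.com.
gce    IN    NS    ns-cloud-b2.googledomains.com.
gce    IN    NS    ns-cloud-b3.googledomains.com.
gce    IN    NS    ns-cloud-b4.googledomains.com.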

Changes to configurations

Change directory to the gcp-dev profile within release.

cd ./release/cluster/test-deploy/gcp-dev

Modify vars.yaml; the following variables should be changed to match your configuration.

openshift_gcp_project: cnvlab-209908 
# The subdomain that we just created
public_hosted_zone: gce.virtomation.com  
# The GCE name of the Cloud DNS managed zone
dns_managed_zone: virtomation-com 

Create ./release/cluster/test-deploy/gcp-dev/kubevirt.yaml

# This option is only available in openshift-ansible master (08-16-2018)
openshift_gcp_licenses: https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx

# openshift_release is required for 3.10 until the origin-ansible:3.10 container image is updated with changes
# to support gcp licenses.
#openshift_release: "3.10"

# We are not upgrading and these packages are always missing.
openshift_enable_excluders: False
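One way to create the file with the contents above (a sketch, assuming you are in the directory that contains the release clone):

cat > ./release/cluster/test-deploy/gcp-dev/kubevirt.yaml <<'EOF'
openshift_gcp_licenses: https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
#openshift_release: "3.10"
openshift_enable_excluders: False
EOF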

Copy required files to gcp-dev

# Copy service account json to gce.json
cp ~/Downloads/<sa>.json ./release/cluster/test-deploy/gcp-dev/gce.json

See "Configure a profile" in the test-deploy README for the additional files required, excluding gce.json (copied above).
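As an illustration, the profile typically also needs an SSH key pair next to gce.json; the file names below (ssh-privatekey, ssh-publickey) are an assumption based on that README, so check it for the exact names:

# Assumption: the profile expects an SSH key pair named ssh-privatekey / ssh-publickey
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/openshift-gcp
cp ~/.ssh/openshift-gcp     ./release/cluster/test-deploy/gcp-dev/ssh-privatekey
cp ~/.ssh/openshift-gcp.pub ./release/cluster/test-deploy/gcp-dev/ssh-publickey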

Deploy OKD Cluster

Cluster management

See the test-deploy README for additional information.

Current make options:

  • WHAT: Base name of the cluster
  • PROFILE: The deployment profile to run; currently only gcp-dev is used.
  • REF: The branch or tag to deploy (e.g. release-3.10 or master).

Installing 3.10

NOTE: Set openshift_release to 3.10 (or just uncomment the line in kubevirt.yaml)

sudo OPENSHIFT_ANSIBLE_IMAGE=quay.io/openshift/origin-ansible:latest make WHAT=jcallen PROFILE=gcp-dev REF=release-3.10 up

Installing Master (3.11)

sudo OPENSHIFT_ANSIBLE_IMAGE=quay.io/openshift/origin-ansible:latest make WHAT=jcallen PROFILE=gcp-dev REF=master up
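When finished with the cluster, the same Makefile should be able to tear it down (a sketch; see the test-deploy README for the exact target):

sudo OPENSHIFT_ANSIBLE_IMAGE=quay.io/openshift/origin-ansible:latest make WHAT=jcallen PROFILE=gcp-dev REF=master down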

Use the cli from a container image

Instead of installing the OpenShift client, let's use an existing container with the client already installed. NOTE: podman can be replaced with docker if podman is not available on your operating system.

Example:

sudo podman run -it --rm \
    -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 oc get nodes

Build KubeVirt

See build.sh

#!/bin/bash
set -x
PROFILE="gcp-dev"
# Using the kube-system namespace to avoid modifying service accounts to allow
# pulling images from a kubevirt project if that was used instead.
NAMESPACE="kube-system"
VERSION="v3.10.0"
RELEASE_TOOL_PATH="/home/jlcallen/Development/release"
KUBEVIRT_PATH="/home/jlcallen/Development/kubevirt"
PROFILE_PATH="${RELEASE_TOOL_PATH}/cluster/test-deploy/${PROFILE}"
MANIFEST_PATH="${KUBEVIRT_PATH}/_out/manifests/dev"
# If you don't have podman you should be able to replace this with docker
RUNTIME="podman"
# Why? I am running Silverblue and the origin clients are not installed by default in the ostree.
# You could easily replace this with just the oc path if installed
CLI_CONTAINER="quay.io/openshift/origin-cli:${VERSION}"
OC="sudo ${RUNTIME} run -it --rm -v ${PROFILE_PATH}:/profile:Z -e KUBECONFIG=/profile/admin.kubeconfig ${CLI_CONTAINER} oc"
OC_MANIFEST="sudo ${RUNTIME} run -it --rm -v ${MANIFEST_PATH}:/manifest:Z -v ${PROFILE_PATH}:/profile:Z -e KUBECONFIG=/profile/admin.kubeconfig ${CLI_CONTAINER} oc"
# Need the kube-system builder token to be able to log in to the OpenShift registry
TOKEN=`${OC} sa get-token -n ${NAMESPACE} builder | tr -d '\r'`
# Get the URL to the registry
REGISTRY=`${OC} get route docker-registry -n default --template '{{ .spec.host }}'`
# Login to the registry
# Yes you will need to configure insecure registries in /etc/containers/registries.conf
sudo docker login -u builder -p ${TOKEN} ${REGISTRY}
# Change the kubevirt development directory
cd ${KUBEVIRT_PATH}
# build and create kubevirt container images
sudo DOCKER_PREFIX=${REGISTRY}/${NAMESPACE} make docker
# Push to the registry
sudo DOCKER_PREFIX=${REGISTRY}/${NAMESPACE} make push
# Now that the images are in the OpenShift registry there is no
# reason to use the registry "route"; just use the service URL.
# The tag when pushed to the registry is latest
sudo DOCKER_PREFIX=docker-registry.default.svc:5000/${NAMESPACE} DOCKER_TAG=latest make manifests
eval ${OC_MANIFEST} create -f /manifest
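After build.sh finishes, a quick sanity check with the same containerized client; the virt-* pod names are illustrative and depend on the KubeVirt version:

# Verify the KubeVirt components (virt-api, virt-controller, virt-handler) came up in kube-system
sudo podman run -it --rm \
    -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc get pods -n kube-system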
Wrapper script for the containerized oc client

#!/bin/bash
PROFILE=gcp-dev
VERSION="v3.10.0"
RELEASE_TOOL_PATH=/home/jlcallen/Development/release
PROFILE_PATH=${RELEASE_TOOL_PATH}/cluster/test-deploy/${PROFILE}
# If you don't have podman you should be able to replace this with docker
RUNTIME="podman"
# Why? I am running Silverblue and the origin clients are not installed by default in the ostree.
# You could easily replace this with just the oc path if installed
CLI_CONTAINER="quay.io/openshift/origin-cli:${VERSION}"
OC="sudo ${RUNTIME} run -it --rm -v ${PROFILE_PATH}:/profile:Z -e KUBECONFIG=/profile/admin.kubeconfig ${CLI_CONTAINER} oc"
eval ${OC} $@
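Usage example, assuming the wrapper above is saved as oc.sh (a hypothetical file name) and made executable:

chmod +x oc.sh
./oc.sh get nodes
./oc.sh get pods -n kube-system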

Old notes - ignore

NOTE: everything below is out-of-date and kept only for reference.

Create kubevirt project

sudo podman run -it --rm \
	-v ${PWD}/gcp-dev/:/gcp-dev:Z \
	-e KUBECONFIG=/gcp-dev/admin.kubeconfig \
	quay.io/openshift/origin-cli:v3.10.0 \
	oc new-project kubevirt

Get the builder token

TOKEN=`sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 oc sa get-token -n kubevirt builder | tr -d '\r'`

Get the registry URL

REGISTRY=`sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 oc get route docker-registry -n default --template '{{ .spec.host }}'`

Login to the registry

sudo docker login -u builder -p ${TOKEN} ${REGISTRY}

Kubevirt stuff

sudo DOCKER_PREFIX=${REGISTRY}/kubevirt make docker
sudo DOCKER_PREFIX=${REGISTRY}/kubevirt make push

# Use the internal service instead

sudo DOCKER_PREFIX=docker-registry.default.svc:5000/kubevirt DOCKER_TAG=latest make manifests

Deploy kubevirt

sudo podman run -it --rm \
    -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -v /home/jlcallen/Development/kubevirt/_out/manifests/dev:/tmp/manifests:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc create -f /tmp/manifests/

Permissions for kubevirt image pull

To pull images from the kubevirt project, grant the service accounts access via the system:image-puller role:

sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
	oc policy add-role-to-user system:image-puller system:serviceaccount:kube-system:kubevirt-controller -n kubevirt


sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
	oc policy add-role-to-user system:image-puller system:serviceaccount:kube-system:kubevirt-apiserver -n kubevirt