There are a few prerequisites to complete before running `make`:
- Clone openshift/release and openshift/shared-secrets
- Create a new service account
- Create a new subdomain that GCE will manage
- Modify gcp-dev/vars.yaml
Install the following software onto your system:
- docker
- podman
- Google Cloud SDK (For those deploying to GCP)
- openshift-ansible
```shell
git clone https://github.com/openshift/release
git clone https://github.com/openshift/shared-secrets
```
To create a service account:
- Navigate to IAM & admin -> Service accounts
- Click "Create Service Account"
- Provide a service account name
- Check "Furnish a new private key"
- Role: Project -> Editor
Download and save the service account JSON file; it will be used later.
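If you prefer the CLI over the console, roughly the same can be done with `gcloud`. This is a sketch: the account name `kubevirt-installer` and the key path are illustrative, and the project ID is the example used throughout this walkthrough.

```shell
# Illustrative names -- substitute your own project and account name.
PROJECT=cnvlab-209908
SA_NAME=kubevirt-installer
SA_EMAIL="${SA_NAME}@${PROJECT}.iam.gserviceaccount.com"

# Create the service account, grant Project -> Editor, and download a key.
gcloud iam service-accounts create "${SA_NAME}" --project "${PROJECT}"
gcloud projects add-iam-policy-binding "${PROJECT}" \
    --member "serviceAccount:${SA_EMAIL}" --role roles/editor
gcloud iam service-accounts keys create ~/Downloads/"${SA_NAME}".json \
    --iam-account "${SA_EMAIL}"
```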
To create a CloudDNS:
- Navigate to CloudDNS: Network services -> CloudDNS
- Click "Create Zone"
- Provide appropriate values
The next screen will provide the name servers that should be used in your existing DNS configuration for the subdomain.
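The console steps above can also be sketched with `gcloud`; the zone name and domain below are the example values from this walkthrough, and the description is arbitrary.

```shell
# Example values from this walkthrough -- substitute your own.
ZONE_NAME=virtomation-com
DNS_NAME=gce.virtomation.com.

# Create the managed zone.
gcloud dns managed-zones create "${ZONE_NAME}" \
    --dns-name "${DNS_NAME}" \
    --description "KubeVirt dev cluster zone"

# Print the name servers Google assigned, for use in your existing DNS.
gcloud dns managed-zones describe "${ZONE_NAME}" --format 'value(nameServers)'
```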
Results using the gcloud CLI:

```shell
jcallen@cnvlab-209908:~$ gcloud dns managed-zones list
NAME             DNS_NAME              DESCRIPTION
virtomation-com  gce.virtomation.com.
```
Create a subdomain for GCE in your DNS. For this example I used a domain that I own.
```shell
$ host -t NS gce.virtomation.com
gce.virtomation.com name server ns-cloud-b4.googledomains.com.
gce.virtomation.com name server ns-cloud-b1.googledomains.com.
gce.virtomation.com name server ns-cloud-b2.googledomains.com.
gce.virtomation.com name server ns-cloud-b3.googledomains.com.
```
Change directory to the `gcp-dev` profile in the release clone:

```shell
cd ./release/cluster/test-deploy/gcp-dev
```
Modify `vars.yaml`; the following variables should be changed to match your configuration:
```yaml
openshift_gcp_project: cnvlab-209908
# The subdomain that we just created
public_hosted_zone: gce.virtomation.com
# The GCE name of the CloudDNS managed zone
dns_managed_zone: virtomation-com
# This option is only available in openshift-ansible master
openshift_gcp_licenses: https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
```
```shell
# Certificate that provides access to the mirror
cp ./shared-secrets/mirror/ops-mirror.pem ./release/cluster/test-deploy/gcp-dev/
# Copy the service account JSON to gce.json
cp ~/Downloads/<sa>.json ./release/cluster/test-deploy/gcp-dev/gce.json
```
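Before running `make`, it can help to confirm the profile directory has everything in place. A minimal check, assuming the clone layout used above:

```shell
# Verify the gcp-dev profile contains the files the deployment expects
# (paths follow the clone layout used in this walkthrough).
PROFILE_DIR=./release/cluster/test-deploy/gcp-dev
for f in gce.json ops-mirror.pem vars.yaml; do
    if [ -f "${PROFILE_DIR}/${f}" ]; then
        echo "found: ${f}"
    else
        echo "missing: ${f}"
    fi
done
```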
See the test-deploy README for additional information.
WHAT
: Base name of the cluster.

PROFILE
: Which deployment profile will be run. Currently only use `gcp-dev`.

REF
: Branch or commit of the cluster that should be deployed.
Example deployment of 3.10:
```shell
sudo make WHAT=jcallen PROFILE=gcp-dev REF=release-3.10 up
```
Use the `OPENSHIFT_ANSIBLE_IMAGE` environment variable to select a specific openshift-ansible image:

```shell
sudo OPENSHIFT_ANSIBLE_IMAGE=openshift-ansible:3.10 make WHAT=jcallen PROFILE=gcp-dev REF=release-3.10 up
```
Instead of installing the OpenShift client, use an existing container with the client already installed. NOTE: `podman` and `docker` are interchangeable in the commands below; use whichever is available on your operating system.
Example:
```shell
sudo podman run -it --rm \
    -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig \
    quay.io/openshift/origin-cli:v3.10.0 \
    oc version
```
```shell
sudo podman run -it --rm \
    -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig \
    quay.io/openshift/origin-cli:v3.10.0 \
    oc new-project kubevirt
```
```shell
TOKEN=$(sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc sa get-token -n kubevirt builder | tr -d '\r')
REGISTRY=$(sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc get route docker-registry -n default --template '{{ .spec.host }}')
sudo docker login -u builder -p ${TOKEN} ${REGISTRY}
```
```shell
sudo DOCKER_PREFIX=${REGISTRY}/kubevirt make docker
sudo DOCKER_PREFIX=${REGISTRY}/kubevirt make push
# Use the internal service instead
sudo DOCKER_PREFIX=docker-registry.default.svc:5000/kubevirt DOCKER_TAG=latest make manifests
```
```shell
sudo podman run -it --rm \
    -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -v /home/jlcallen/Development/kubevirt/_out/manifests/dev:/tmp/manifests:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc create -f /tmp/manifests/
```
To pull images from the kubevirt project, grant the service accounts access via `system:image-puller`:
```shell
sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc policy add-role-to-user system:image-puller system:serviceaccount:kube-system:kubevirt-controller -n kubevirt
sudo podman run -it --rm -v ${PWD}/gcp-dev/:/gcp-dev:Z \
    -e KUBECONFIG=/gcp-dev/admin.kubeconfig quay.io/openshift/origin-cli:v3.10.0 \
    oc policy add-role-to-user system:image-puller system:serviceaccount:kube-system:kubevirt-apiserver -n kubevirt
```