CNV LAB
*_rsa
*_rsa.pub
*retry

This repository provides instructions on how to deploy an arbitrary number of VMs on GCP with KubeVirt, optionally with nested virtualization enabled.

Requirements

  • a GCP account and the corresponding service account JSON file
  • an image with nested virtualization enabled (optional)
  • the kcli tool (configured to point to GCP), version >= 12.0

Service account retrieval

To obtain your service account file from the console (a gcloud equivalent is sketched after these steps):

  • Select the "IAM" → "Service accounts" section within the Google Cloud Platform console.
  • Select "Create Service account".
  • Select "Project" → "Editor" as service account Role.
  • Select "Furnish a new private key".
  • Select "Save"

Preparing a nested enabled image (Optional)

gcloud compute images create nested-centos7 --source-image-family centos-7 --source-image-project centos-cloud --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
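
You can optionally check that the resulting image carries the nested virtualization license with something like:

gcloud compute images describe nested-centos7 --format="value(licenses)"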

Create a DNS domain

  • Select the "Networking" → "Network Services" → "Cloud DNS"
  • Select "Create Zone"
  • Put the same name as your domain, but with '-' instead
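
Assuming the default domain used in this lab (cnvlab.gce.sysdeseng.com), the same zone can be created from the command line roughly as follows:

gcloud dns managed-zones create cnvlab-gce-sysdeseng-com --dns-name cnvlab.gce.sysdeseng.com. --description "cnvlab zone"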

kcli setup

Installation

docker pull karmab/kcli
echo alias kcli=\'docker run -it --rm -v ~/.kcli:/root/.kcli:Z -v $SSH_AUTH_SOCK:/ssh-agent --env SSH_AUTH_SOCK=/ssh-agent karmab/kcli\' >> $HOME/.bashrc

Configuration

  • create a .kcli directory in your home directory and copy your service account JSON file into it (to ease sharing the file with the container)
  • in that directory, create a file config.yml with the following content, specifying the location of your service account JSON file
default:
 client: mygcp

mygcp:
 type: gcp
 user: cnv
 credentials: ~/.kcli/myproject.json
 enabled: true
 project: myproject
 zone: us-central1-b
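
You can then check that kcli reaches your GCP project (the output lists existing instances, if any):

kcli list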

How to use

The plan file kubevirt.yml is the main artifact used to run the deployment:

kcli plan -f kubevirt.yml -P nodes=10 cnvlab

This will create 10 VMs, named student001, student002, ..., student010, and populate them with the following scripts to deploy the corresponding features. A requirements.sh script will also be executed to install docker and pull the relevant images.

  • openshift.sh
  • kubevirt.sh
  • cdi.sh
  • clean.sh

You can then use:

  • kcli list
  • kcli ssh $INSTANCE
  • kcli plan -d cnvlab # for deletion
  • relaunch the same command with a different value for nodes so that extra instances get created (see the example below)
Available parameters:

Parameter          Default Value
domain             cnvlab.gce.sysdeseng.com
openshift_version  3.10
kubevirt_version   v0.7.0-alpha.2
disk_size          60
numcpus            4
memory             12288
nodes              1
deploy             true
nested             true
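
Any of these parameters can be overridden at deployment time with additional -P flags. As an illustrative example (the values here are arbitrary):

kcli plan -f kubevirt.yml -P nodes=20 -P nested=false -P memory=8192 cnvlab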
wget -P /root https://raw.githubusercontent.com/kubevirt/containerized-data-importer/[[ cdi_version ]]/manifests/example/golden-pvc.yaml
wget -P /root https://raw.githubusercontent.com/kubevirt/containerized-data-importer/[[ cdi_version ]]/manifests/controller/cdi-controller-deployment.yaml
oc new-project golden-images
oc create -f /root/cdi-controller-deployment.yaml
oc adm policy add-cluster-role-to-user cluster-admin -z cdi-sa -n golden-images
- hosts: all
  remote_user: cnv
  become: yes
  vars:
    package: docker
  tasks:
  - name: check if "{{ package }}" is installed
    shell: "rpm -qa | grep {{ package }}"
#  - name: report nodes where it is not
#    debug:
#      msg: "{{ inventory_hostname }}"
#    when: (found.results|length != 1)
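
A minimal way to run this check against all the lab instances, assuming the playbook is saved as checkdocker.yml and the student host list below is used as the inventory file (both file names are assumptions):

ansible-playbook -i hosts checkdocker.yml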
#!/bin/bash -x
OC_DIR=/root/openshift.local.clusterup
if [ -d $OC_DIR ];
then
oc cluster down
for i in `mount | grep -i openshift | awk '{ print $3 }'`; do
umount $i;
done
rm -rf $OC_DIR
docker rmi --force $(docker images -q)
else
echo "System is clean."
fi
sh /root/requirements.sh
sh /root/openshift.sh
sh /root/kubevirt.sh
sh /root/cdi.sh
student001.cnvlab.gce.sysdeseng.com
student002.cnvlab.gce.sysdeseng.com
student003.cnvlab.gce.sysdeseng.com
student004.cnvlab.gce.sysdeseng.com
student005.cnvlab.gce.sysdeseng.com
student006.cnvlab.gce.sysdeseng.com
student007.cnvlab.gce.sysdeseng.com
student008.cnvlab.gce.sysdeseng.com
student009.cnvlab.gce.sysdeseng.com
student010.cnvlab.gce.sysdeseng.com
student011.cnvlab.gce.sysdeseng.com
student012.cnvlab.gce.sysdeseng.com
student013.cnvlab.gce.sysdeseng.com
student014.cnvlab.gce.sysdeseng.com
student015.cnvlab.gce.sysdeseng.com
student016.cnvlab.gce.sysdeseng.com
student017.cnvlab.gce.sysdeseng.com
student018.cnvlab.gce.sysdeseng.com
student019.cnvlab.gce.sysdeseng.com
student020.cnvlab.gce.sysdeseng.com
student021.cnvlab.gce.sysdeseng.com
student022.cnvlab.gce.sysdeseng.com
student023.cnvlab.gce.sysdeseng.com
student024.cnvlab.gce.sysdeseng.com
student025.cnvlab.gce.sysdeseng.com
student026.cnvlab.gce.sysdeseng.com
student027.cnvlab.gce.sysdeseng.com
student028.cnvlab.gce.sysdeseng.com
student029.cnvlab.gce.sysdeseng.com
student030.cnvlab.gce.sysdeseng.com
student031.cnvlab.gce.sysdeseng.com
student032.cnvlab.gce.sysdeseng.com
student033.cnvlab.gce.sysdeseng.com
student034.cnvlab.gce.sysdeseng.com
student035.cnvlab.gce.sysdeseng.com
#student036.cnvlab.gce.sysdeseng.com
#student037.cnvlab.gce.sysdeseng.com
#student038.cnvlab.gce.sysdeseng.com
#student039.cnvlab.gce.sysdeseng.com
#student040.cnvlab.gce.sysdeseng.com
VERSION="[[ kubevirt_version ]]"
yum -y install xorg-x11-xauth virt-viewer
oc project kube-system
wget https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt.yaml
oc adm policy add-scc-to-user privileged -z kubevirt-privileged -n kube-system
oc adm policy add-scc-to-user privileged -z kubevirt-controller -n kube-system
[% if not nested %]
oc create configmap -n kube-system kubevirt-config --from-literal debug.allowEmulation=true
[% endif %]
oc create -f kubevirt.yaml
docker pull kubevirt/virt-launcher:$VERSION
wget https://github.com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64
mv virtctl-$VERSION-linux-amd64 /usr/bin/virtctl
chmod u+x /usr/bin/virtctl
ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
[% if deploy %]
setfacl -m user:107:rwx /openshift.local.clusterup/openshift.local.pv/pv*
[% else %]
setfacl -m user:107:rwx /root/openshift.local.clusterup/openshift.local.pv/pv*
[% endif %]
oc adm policy add-scc-to-user privileged -z kubevirt-controller -n kube-system
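
Once the cluster and KubeVirt are up, the example manifests copied to each node (vm1_pvc.yml and vm1_registrydisk.yml, shown further below) can be used to start a test VM and attach to its console, roughly as follows:

oc create -f /root/vm1_registrydisk.yml
virtctl console vm1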
parameters:
  kubevirt_version: v0.7.0-alpha.2
  domain: cnvlab.gce.sysdeseng.com
  openshift_version: '3.10'
  cdi_version: v0.5.0
  disk_size: 60
  numcpus: 4
  memory: 12288
  nodes: 40
  deploy: false
  nested: true
  keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5Qbj7vDf0uYQpeYb432g5R4YvYJaPfPA4EM4qc3lO62c7oUsWbZlZBl5neEWX41HGCIP4Zm1ybN9iiDyeIns6hg5OkU2vUGuPtV2KCAZOI7snzXeZxlrjsVMjMy/CYUlvIOAPxY4XzfzMMAJjIJni18R2PqVRI4f4SeSq3IIzpnOu2VQmqjFmmdybQY83BvBvWj6KLszAXkJk9LkZSAoktXimDBWFPQYikzZihLolRxwHzo21lXSw58D1N+6IeMudOviAte5yu6FBUN6dFYbt9dkLuH2/ONliFz/042n5UNp0wC5BLdpVwJpWqqrCVaeXBgla/gYm8YNZJIAlf8K5 kboumedh@vegeta.local
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq9Dr3eNBqaNXTZuHNTvWoaB/gLNpkKYk2AUSzyc6EOexFmXkSOH/3tGIFJINnJhx8YpfHXF+zsp7UfBmxVZQa7zBi7xKixkV7lIBlCD/ZD9LRV7WxBqi5Eb39YPnH1A6W6fwGrR+wQMkC299b2SF3zBzuQgAYdixSYzNDsB7rt89BNSgFmAkv6mL/tVpgVBV6ax6Bmn5XKEvFkHaC/i0YKIiqq+xtoa9w6jq7TQE5XDiAgx51S0uSLvxz+UkKxCbN1oo8FZ4cvGF3rL8NmigzFBzCpmLSUvF1qFbAeMQEEfmZBex5v1TrAbxaH3POBcApOKfEHvaUm9yY44zCXJU5 jcallen@jcallen
  tags:
  - cnvlab
[% for node in range(0, nodes) %]
student[[ "%03.d" | format(node+1) ]]:
[% if nested %]
  template: nested-centos7
[% else %]
  template: centos-7
[% endif %]
  numcpus: [[ numcpus ]]
  memory: [[ memory ]]
  tags: [[ tags ]]
  keys: [[ keys ]]
  domain: [[ domain ]]
  reservedns: true
  nets:
  - name: default
    alias: ['*']
  disks:
  - size: [[ disk_size ]]
    pool: default
  files:
  - path: /root/openshift.sh
    origin: openshift.sh
  - path: /root/kubevirt.sh
    origin: kubevirt.sh
  - path: /root/cdi.sh
    origin: cdi.sh
  - path: /root/pvc_fedora.yml
    origin: pvc_fedora.yml
  - path: /root/vm1_pvc.yml
    origin: vm1_pvc.yml
  - path: /root/vm1_registrydisk.yml
    origin: vm1_registrydisk.yml
  - path: /root/clean.sh
    origin: clean.sh
  - path: /root/requirements.sh
    origin: requirements.sh
  scripts:
  - requirements.sh
[% if deploy %]
  - deploy.sh
[% endif %]
[% endfor %]
#!/bin/bash -x
oc delete -f /root/kubevirt.yaml
DOMAIN=[[ domain ]]
oc cluster up --public-hostname `hostname`.$DOMAIN --routing-suffix `hostname`.$DOMAIN --enable=service-catalog,router,registry,web-console,persistent-volumes,rhel-imagestreams,automation-service-broker
oc login -u system:admin
docker update --restart=always origin
oc adm policy add-cluster-role-to-user cluster-admin developer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "fedora"
  labels:
    app: containerized-data-importer
  annotations:
    kubevirt.io/storage.import.endpoint: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
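
As a sketch of how this PVC is meant to be used: once the CDI controller deployed by cdi.sh is running, creating the claim triggers the import of the Fedora cloud image referenced in the annotation, and vm1_pvc.yml can then consume it through the fedora claim. For example, in the project where the VM will run:

oc create -f /root/pvc_fedora.yml
oc get pvc fedora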
[%- set releaseurls = {
'3.9' : 'v3.9.0/openshift-origin-client-tools-v3.9.0-191fece',
'3.10' : 'v3.10.0-rc.0/openshift-origin-client-tools-v3.10.0-rc.0-c20e215',
}
-%]
docker ps && echo Requirements already installed && exit 0
yum -y install wget docker git bash-completion qemu-img
systemctl enable docker
sed -i "s@# INSECURE_REGISTRY=.*@INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'@" /etc/sysconfig/docker
wget -O /root/oc.tar.gz https://github.com/openshift/origin/releases/download/[[ releaseurls[openshift_version] ]]-linux-64bit.tar.gz
export HOME=/root
cd /root ; tar zxvf oc.tar.gz
mv /root/openshift-origin-client-tools-*/oc /usr/bin
rm -rf /root/openshift-origin-client-tools-*
curl -L https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl -o /usr/bin/kubectl
chmod +x /usr/bin/kubectl
chmod +x /root/*sh
sed -i "s@OPTIONS=.*@OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'@" /etc/sysconfig/docker
systemctl start docker --ignore-dependencies
IMAGES="docker.io/openshift/origin-node:v3.10 docker.io/openshift/origin-control-plane:v3.10 docker.io/openshift/origin:v3.10 docker.io/openshift/origin-hypershift:v3.10 docker.io/openshift/origin-hyperkube:v3.10 docker.io/openshift/origin-pod:v3.10 docker.io/automationbroker/automation-broker-apb:latest docker.io/openshift/origin-cli:v3.10 quay.io/coreos/etcd:v3.3 docker.io/openshift/origin-service-catalog:v3.10 docker.io/openshift/origin-template-service-broker:v3.10 docker.io/ansibleplaybookbundle/origin-ansible-service-broker:latest docker.io/openshift/origin-web-console:v3.10 docker.io/openshift/origin-docker-registry:v3.10 docker.io/openshift/jenkins-2-centos7:v3.10 docker.io/centos/nodejs-6-centos7:latest"
for image in $IMAGES; do docker pull $image ; done
yum -y install wget docker git bash-completion qemu-img
systemctl enable docker
sed -i "s@# INSECURE_REGISTRY=.*@INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'@" /etc/sysconfig/docker
wget -O /root/oc.tar.gz https://github.com/openshift/origin/releases/download/v3.10.0-rc.0/openshift-origin-client-tools-v3.10.0-rc.0-c20e215-linux-64bit.tar.gz
export HOME=/root
cd /root ; tar zxvf oc.tar.gz
mv /root/openshift-origin-client-tools-*/oc /usr/bin
rm -rf /root/openshift-origin-client-tools-*
curl -L https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl -o /usr/bin/kubectl
chmod +x /usr/bin/kubectl
chmod +x /root/*sh
sed -i "s@OPTIONS=.*@OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'@" /etc/sysconfig/docker
systemctl start docker --ignore-dependencies
IMAGES="docker.io/openshift/origin-node:v3.10 docker.io/openshift/origin-control-plane:v3.10 docker.io/openshift/origin:v3.10 docker.io/openshift/origin-hypershift:v3.10 docker.io/openshift/origin-hyperkube:v3.10 docker.io/openshift/origin-pod:v3.10 docker.io/automationbroker/automation-broker-apb:latest docker.io/openshift/origin-cli:v3.10 quay.io/coreos/etcd:v3.3 docker.io/openshift/origin-service-catalog:v3.10 docker.io/openshift/origin-template-service-broker:v3.10 docker.io/ansibleplaybookbundle/origin-ansible-service-broker:latest docker.io/openshift/origin-web-console:v3.10 docker.io/openshift/origin-docker-registry:v3.10 docker.io/openshift/jenkins-2-centos7:v3.10 docker.io/centos/nodejs-6-centos7:latest"
for image in $IMAGES; do docker pull $image ; done
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  creationTimestamp: 2018-07-04T15:03:08Z
  generation: 1
  labels:
    kubevirt.io/os: linux
  name: vm1
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
            volumeName: vm1-vol0
          - cdrom:
              bus: sata
              readonly: true
            name: cloudinitdisk
            volumeName: cloudinitvolume
        machine:
          type: q35
        resources:
          requests:
            memory: 1024M
      volumes:
      - name: vm1-vol0
        persistentVolumeClaim:
          claimName: fedora
      - cloudInitNoCloud:
          userData: |
            #cloud-config
            hostname: vm1
            ssh_pwauth: True
            disable_root: false
            ssh_authorized_keys:
            - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5Qbj7vDf0uYQpeYb432g5R4YvYJaPfPA4EM4qc3lO62c7oUsWbZlZBl5neEWX41HGCIP4Zm1ybN9iiDyeIns6hg5OkU2vUGuPtV2KCAZOI7snzXeZxlrjsVMjMy/CYUlvIOAPxY4XzfzMMAJjIJni18R2PqVRI4f4SeSq3IIzpnOu2VQmqjFmmdybQY83BvBvWj6KLszAXkJk9LkZSAoktXimDBWFPQYikzZihLolRxwHzo21lXSw58D1N+6IeMudOviAte5yu6FBUN6dFYbt9dkLuH2/ONliFz/042n5UNp0wC5BLdpVwJpWqqrCVaeXBgla/gYm8YNZJIAlf8K5 kboumedh@vegeta.local
        name: cloudinitvolume
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  creationTimestamp: 2018-07-04T15:03:08Z
  generation: 1
  labels:
    kubevirt.io/os: linux
  name: vm1
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
            volumeName: vm1-vol0
          - cdrom:
              bus: sata
              readonly: true
            name: cloudinitdisk
            volumeName: cloudinitvolume
        machine:
          type: q35
        resources:
          requests:
            memory: 1024M
      volumes:
      - name: vm1-vol0
        registryDisk:
          image: kubevirt/fedora-cloud-registry-disk-demo
      - cloudInitNoCloud:
          userData: |
            #cloud-config
            hostname: vm1
            ssh_pwauth: True
            disable_root: false
            ssh_authorized_keys:
            - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5Qbj7vDf0uYQpeYb432g5R4YvYJaPfPA4EM4qc3lO62c7oUsWbZlZBl5neEWX41HGCIP4Zm1ybN9iiDyeIns6hg5OkU2vUGuPtV2KCAZOI7snzXeZxlrjsVMjMy/CYUlvIOAPxY4XzfzMMAJjIJni18R2PqVRI4f4SeSq3IIzpnOu2VQmqjFmmdybQY83BvBvWj6KLszAXkJk9LkZSAoktXimDBWFPQYikzZihLolRxwHzo21lXSw58D1N+6IeMudOviAte5yu6FBUN6dFYbt9dkLuH2/ONliFz/042n5UNp0wC5BLdpVwJpWqqrCVaeXBgla/gYm8YNZJIAlf8K5 kboumedh@vegeta.local
        name: cloudinitvolume