
@glekner
Last active January 28, 2020 12:15
OpenShift Console + CNV w/ Native K8s

Motive

Running the OpenShift Console alongside a native local Kubernetes cluster is fast and convenient. It is useful for developers looking to test and build new components, and for users who want the console UI on top of their own cluster. Two things are missing out of the box: Templates (Kubernetes has no Template kind) and metrics (there is no Prometheus).

Prerequisites

  1. A local K8s cluster (Minikube, Docker Desktop, etc.)
  2. OpenShift Console

Steps

Deploy Kubevirt

# change this according to the latest STABLE version
export KUBEVIRT_VERSION=v0.25.0
 
export CDI_VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
 
# Deploy Kubevirt, Storage, CDI Pods
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/$KUBEVIRT_VERSION/kubevirt-operator.yaml
 
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/$KUBEVIRT_VERSION/kubevirt-cr.yaml

kubectl create -f https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/storage-setup.yml

kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-operator.yaml

kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-cr.yaml

# Create our kubevirt-native namespace
kubectl create namespace kubevirt-native
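The CDI_VERSION line above works because GitHub answers the /releases/latest URL with a redirect page that embeds the release tag, and the grep pulls the version string out of it. A small sketch of that extraction, with a sample string standing in for the real curl output:

```shell
# Sample redirect body standing in for the real `curl -s .../releases/latest`
# output; the tag v1.10.9 here is a made-up example, not a pinned version.
sample='<html><body>You are being <a href="https://github.com/kubevirt/containerized-data-importer/releases/tag/v1.10.9">redirected</a>.</body></html>'

# Same pattern as the CDI_VERSION line. Note it only matches a
# single-digit major version (v1.x.y through v9.x.y).
version=$(printf '%s' "$sample" | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
echo "$version"   # v1.10.9
```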

Create ConfigMap

Create a YAML file, e.g. file.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-storage-class-defaults
  namespace: kubevirt-native
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem

Import it

kubectl create -f file.yaml
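If you'd rather not keep a separate file around, the same ConfigMap can be piped to kubectl from a heredoc. A sketch (the kubectl invocation is left as a comment so the snippet itself runs anywhere):

```shell
# Same ConfigMap as above, held in a variable instead of a file.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-storage-class-defaults
  namespace: kubevirt-native
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem
EOF
)
printf '%s\n' "$manifest"
# Against a live cluster: printf '%s\n' "$manifest" | kubectl create -f -
```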

Connecting Console to our local Cluster

Once all pods are running, fetch a cluster token and export it as BRIDGE_K8S_BEARER_TOKEN:

kubectl get secrets
kubectl describe secrets/<secret-id-obtained-previously>
export BRIDGE_K8S_BEARER_TOKEN=<token-from-the-describe-output>
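The token printed by `kubectl describe` is already decoded; if you instead pull it with `kubectl get secret <name> -o jsonpath='{.data.token}'`, it comes back base64-encoded and needs one more step. A sketch with a made-up token value:

```shell
# Hypothetical base64 value, standing in for the secret's data.token field.
encoded='bXktYmVhcmVyLXRva2Vu'
token=$(printf '%s' "$encoded" | base64 --decode)
echo "$token"   # my-bearer-token

# Against a live cluster (secret name varies per cluster), something like:
#   export BRIDGE_K8S_BEARER_TOKEN=$(kubectl get secret <secret-id> -o jsonpath='{.data.token}' | base64 --decode)
```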

Done!

source ./contrib/environment.sh
./bin/bridge
yarn dev # inside frontend/, in a separate terminal