This is the NAD + VMI:
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
spec:
  config: |2
    {
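The `config` body above is cut off after the opening brace. For illustration only, a minimal sketch of what such a NAD config could look like, assuming an OVN-Kubernetes layer2 secondary network (only the `l2-network` name comes from the snippet; the `cniVersion`, `type`, `topology`, and `netAttachDefName` values are assumptions, not taken from this PR):

```yaml
# Hypothetical completion of the truncated NAD above; everything except the
# network name is an assumption for illustration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
spec:
  config: |2
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "default/l2-network"
    }
```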
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubevirt.io: ""
    pod-security.kubernetes.io/enforce: "privileged"
  name: kubevirt
---
apiVersion: apiextensions.k8s.io/v1
#!/bin/bash
set -xe
export OCP_RELEASE_IMAGE=registry.build05.ci.openshift.org/ci-ln-5gcbylk/release:latest
#export OCP_RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.15.0-ec.1-x86_64
export guest_cluster_name=$2
export vlan_id=$3
hypershift_image=quay.io/ellorent/hypershift-operator:4.15-multus
What this PR does / why we need it: PoC for using a multus network with the kubevirt provider.
Problem 1:
We need to expose the ignition server as a LoadBalancer so it can be reached over a public IP; otherwise guest nodes fail to resolve it:
[ 342.877333] ignition[905]: GET https://ignition-server-clusters-live-migrate.apps.hypershift.qinqon.local/ignition: attempt #71
[ 342.903038] ignition[905]: GET error: Get "https://ignition-server-clusters-live-migrate.apps.hypershift.qinqon.local/ignition": dial tcp: lookup ignition-server-clusters-live-migrate.apps.hypershift.qinqon.local on 192.168.66.1:53: no such host
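One way to address the DNS/reachability failure above would be to put a LoadBalancer Service in front of the ignition server. A minimal sketch, not the actual fix in this PR; the namespace, selector labels, and port numbers are assumptions:

```yaml
# Hypothetical LoadBalancer Service for the ignition server; namespace,
# selector, and ports are assumptions for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: ignition-server-lb
  namespace: clusters-live-migrate
spec:
  type: LoadBalancer
  selector:
    app: ignition-server
  ports:
    - port: 443
      targetPort: 9090
      protocol: TCP
```

With a cloud-provider load balancer (here, kubevirt-cloud-controller-manager on the infra cluster), this gives the guest nodes a routable IP for the ignition endpoint instead of a hostname only resolvable inside the cluster.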
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-multus
  name: vmi-multus
spec:
  domain:
    devices:
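The VMI spec is truncated at `devices:`. To attach a VMI to the `l2-network` NAD through multus, the interface/network stanzas would typically look like the sketch below; the bridge binding, resource values, and container disk image are assumptions, not taken from this PR:

```yaml
# Hypothetical continuation of the truncated VMI above; binding, memory,
# and disk image are assumptions for illustration.
spec:
  domain:
    devices:
      interfaces:
        - name: multus-net
          bridge: {}
      disks:
        - name: containerdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1024M
  networks:
    - name: multus-net
      multus:
        networkName: l2-network
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/containerdisks/fedora:latest
```

The key link is `networks[].multus.networkName` pointing at the NAD name, paired by `name` with the interface under `domain.devices.interfaces`.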
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubevirt-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kubevirt-cloud-controller-manager
spec:
  replicas: 1
  selector:
#!/bin/bash -xe
$_ssh_infra node01 -- sudo cat /etc/kubernetes/admin.conf > infra-kubeconfig.yaml
cat << EOF > cloud-config.yaml
kubeconfig: |
$(sed "s/^/ /g" infra-kubeconfig.yaml)
loadBalancer:
  creationPollInterval: 30
EOF
Validation
Checking the ignition rationale, the verification step from nmstate aligns with it: nmstate ensures that the network configuration state requested by the user is actually reached, or it fails, so the node will fail to boot with a clear message about what could not be set up.
Some scenarios that are only verified by nmstate:
* Setting the MTU on some NICs does not fail at NetworkManager, but the outcome is not the expected one:
  * https://bugzilla.redhat.com/show_bug.cgi?id=1767266
* Using linux-bridge VLAN filtering with a huge VLAN range on the ports fails when hardware offloading is configured on some types of NICs; this was only caught by nmstate verification.
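As a concrete illustration of that verification step, here is a sketch of a kubernetes-nmstate NodeNetworkConfigurationPolicy; if the applied state does not match `desiredState` (e.g. the MTU silently does not take effect, as in the bugzilla above), nmstate rolls back and reports the mismatch. The interface name and MTU value are assumptions:

```yaml
# Hypothetical policy; interface name and MTU are assumptions for
# illustration. nmstate verifies the resulting state matches desiredState
# and fails with a clear message if it does not.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-jumbo-mtu
spec:
  desiredState:
    interfaces:
      - name: eth1
        type: ethernet
        state: up
        mtu: 9000
```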
#!/usr/bin/env bash
set -xe
debug="false"
wait="true"
time2wait=3
CLOUD_IMG_FOLDER="$HOME/Documents/isos"
POOL_FOLDER="/var/lib/libvirt/images"
func main() {
	capture := map[string]string{}
	captureAST := map[string]*ast.Node{}
	for expressionKey, expression := range capture {
		src := source.New(expression)
		tokens, err := lexer.New(src).Lex()
		if err != nil {
			...
		}