$ cat vm.yaml
kind: VirtualMachine
metadata:
  name: barbar
spec:
  state: down
  os:
    flavor: Windows 10
  disks:
  - name: windows-disk-pvc-from-kube
    type: disk
  - name: windows-iso-pvc-from-kube
    type: cdrom
  nics:
  - network: foonet-from-kube
    model: virtio
  display:
    heads: 2
$ kubectl create -f vm.yaml
$ kubectl get machine barbar -o yaml
kind: VirtualMachine
metadata:
  name: barbar
spec:
  state: down
  os:
    flavor: Windows 10
  disks:
  - name: windows-disk-pvc-from-kube
    type: disk
  - name: windows-iso-pvc-from-kube
    type: cdrom
  nics:
  - network: foonet-from-kube
    model: virtio
  display:
    heads: 2
status:
  state: down
$ kubectl patch machine barbar --type='json' \
    -p='[{"op": "add", "path": "/spec/nics/0/macaddr", "value": "00:de:ad:be:ef:00"}]'
$ kubectl get machine barbar -o yaml
kind: VirtualMachine
metadata:
  name: barbar
spec:
  state: down
  os:
    flavor: Windows 10
  disks:
  - name: windows-disk-pvc-from-kube
    type: disk
  - name: windows-iso-pvc-from-kube
    type: cdrom
  nics:
  - network: foonet-from-kube
    model: virtio
    macaddr: 00:de:ad:be:ef:00
  display:
    heads: 2
status:
  state: down
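The patch above uses RFC 6902 (JSON patch) semantics. A minimal Python sketch of the "add"/"replace" path walk (plain dict/list traversal; kubectl itself uses a full JSON-patch library, so this is only illustrative):

```python
# Minimal sketch of the RFC 6902 semantics behind `kubectl patch --type=json`.
def json_patch(doc, patches):
    for p in patches:
        parts = p["path"].strip("/").split("/")
        parent = doc
        for key in parts[:-1]:
            parent = parent[int(key)] if isinstance(parent, list) else parent[key]
        last = int(parts[-1]) if isinstance(parent, list) else parts[-1]
        if p["op"] == "replace" and not isinstance(parent, list) and last not in parent:
            raise KeyError("replace requires an existing path: " + p["path"])
        parent[last] = p["value"]  # "add" may create the member
    return doc

# The patch from the session above, applied to a trimmed-down spec:
vm = {"spec": {"nics": [{"network": "foonet-from-kube", "model": "virtio"}]}}
json_patch(vm, [{"op": "add", "path": "/spec/nics/0/macaddr",
                 "value": "00:de:ad:be:ef:00"}])
```

Note that per RFC 6902, "replace" requires the target member to exist, while "add" may create it; that is why adding a new macaddr uses "add".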
$ kubectl patch machine barbar --type="json" \
    -p='[{"op": "replace", "path": "/spec/state", "value": "up"}]'

or

$ kubectl start machine barbar
Either of the calls above will now create the usual runtime VM kind.
The defaulting is implicitly performed by the operator when it creates the VM, biased by the flavor.
Annotate the machine with kubevirt.io/default: no to prevent this.
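A hypothetical sketch of that defaulting step: fill in flavor-biased defaults unless the object opts out via the annotation. The flavor table and the concrete default values here are made up for illustration, not a real KubeVirt mapping:

```python
# Illustrative flavor -> defaults table (assumed values, not real KubeVirt data).
FLAVOR_DEFAULTS = {
    "Windows 10": {"nic_model": "virtio", "display_heads": 2},
}

def apply_defaults(machine):
    annotations = machine.get("metadata", {}).get("annotations", {})
    if annotations.get("kubevirt.io/default") == "no":
        return machine  # explicitly opted out of defaulting
    flavor = machine["spec"].get("os", {}).get("flavor")
    defaults = FLAVOR_DEFAULTS.get(flavor, {})
    for nic in machine["spec"].get("nics", []):
        nic.setdefault("model", defaults.get("nic_model", "virtio"))
    machine["spec"].setdefault(
        "display", {"heads": defaults.get("display_heads", 1)})
    return machine
```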
This is how the machine looks after launch:
$ kubectl get machine barbar -o yaml
kind: VirtualMachine
metadata:
  name: barbar
spec:
  state: up
  os:
    flavor: Windows 10
  # Unsure if we should have a .spec.domain, or keep it in .spec -- or keep .spec.domain for overrides?
  # If moved to .spec.domain, then the story could be: .spec.domain does not propagate to a running instance
  disks:
  - name: windows-disk-pvc-from-kube
    type: disk
  - name: windows-iso-pvc-from-kube
    type: cdrom
  nics:
  - network: foonet-from-kube
    model: virtio
    macaddr: 00:de:ad:be:ef:00
  display:
    heads: 2
status:
  state: up
  instanceRef: # Should it go to .spec like PVs?
    apiVersion: v1
    kind: VM
    name: barbar-6gasd8
    namespace: default
    resourceVersion: "172"
    uid: 7a7ab468-31be-11e7-a682-525400b6be62
Get the instance by its label:
$ kubectl get vms -l "machine=barbar" -o yaml
kind: VM
metadata:
  uid: 7a7ab468-31be-11e7-a682-525400b6be62
  name: barbar-6gasd8
  instanceOfMachine: barbar
  labels:
    machine: barbar
spec:
  domain:
    devices:
      graphics:
      - type: spice
      interfaces:
      - type: network
        source:
          network: default
      video:
      - type: qxl
      disks:
      - type: network
        snapshot: external
        device: disk
        driver:
          name: qemu
          type: raw
          cache: none
        source:
          host:
            name: iscsi-demo-target
            port: "3260"
          protocol: iscsi
          name: iqn.2017-01.io.kubevirt:sn.42/2
        target:
          dev: vda
…
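What a label selector such as `-l "machine=barbar"` does is a plain equality match over each object's labels. A minimal sketch:

```python
# Equality-based label selection, as used by `kubectl get vms -l "machine=barbar"`.
def select_by_labels(objects, selector):
    return [o for o in objects
            if all(o.get("metadata", {}).get("labels", {}).get(k) == v
                   for k, v in selector.items())]

vms = [
    {"kind": "VM", "metadata": {"name": "barbar-6gasd8",
                                "labels": {"machine": "barbar"}}},
    {"kind": "VM", "metadata": {"name": "other-0",
                                "labels": {"machine": "other"}}},
]
matches = select_by_labels(vms, {"machine": "barbar"})
```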
This leads to the same instance as above:
$ kubectl get vms barbar-6gasd8 -o yaml
kind: VM
metadata:
  uid: 7a7ab468-31be-11e7-a682-525400b6be62
  name: barbar-6gasd8
  labels:
    machine: barbar
…
$ kubectl stop machine barbar
$ kubectl get machine barbar -o yaml
kind: VirtualMachine
metadata:
  name: barbar
spec:
  state: down
  os:
    flavor: Windows 10
  disks:
  - name: windows-disk-pvc-from-kube
    type: disk
  - name: windows-iso-pvc-from-kube
    type: cdrom
  nics:
  - network: foonet-from-kube
    model: virtio
    macaddr: 00:de:ad:be:ef:00
  display:
    heads: 2
status:
  state: down
All of the above can be implemented as an add-on, using the operator pattern. Eventually, abstractions (e.g. PVCs) could be moved into the Machine kind.
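The operator's core is a reconcile loop that drives the declarative VirtualMachine state into runtime VM objects. A sketch under the assumption of a toy in-memory client (the class below stands in for a real Kubernetes API client):

```python
# Toy stand-in for a Kubernetes API client, keyed by machine name.
class FakeVMClient:
    def __init__(self):
        self.vms = {}

    def get_for_machine(self, name):
        return self.vms.get(name)

    def create(self, name, vm):
        self.vms[name] = vm

    def delete(self, name):
        self.vms.pop(name, None)

def reconcile(machine, client):
    # Drive the runtime VM toward the declared .spec.state.
    name = machine["metadata"]["name"]
    desired = machine["spec"]["state"]
    instance = client.get_for_machine(name)
    if desired == "up" and instance is None:
        client.create(name, {"kind": "VM",
                             "metadata": {"labels": {"machine": name}}})
    elif desired == "down" and instance is not None:
        client.delete(name)
    machine.setdefault("status", {})["state"] = desired
    return machine
```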
To do the defaulting manually:
$ kubectl instantiate barbar
Templating would be a recursive/deep operation, also affecting e.g. storage.
A template would be pretty much like the VirtualMachine kind, i.e. the schema of the spec is the same.
Compared to e.g. defaulting, and to the VirtualMachine <-> VM kind relationship above, templating leads to different operations:
- Any associated foreign resources (e.g. volumes) are cloned/CoW'ed/reflink'ed
- A new VirtualMachine object will be created, and the foreign resource references will be updated to point to the clones
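The two steps above can be sketched as follows. The `clone_volume` callback stands in for the actual PVC clone/reflink, and the suffix-based naming scheme is an assumption for illustration:

```python
import copy

# Deep template instantiation: clone referenced foreign resources,
# then emit a VirtualMachine whose disks point at the clones.
def instantiate(template, suffix, clone_volume):
    machine = copy.deepcopy(template)
    machine["kind"] = "VirtualMachine"
    machine["metadata"]["name"] += "-" + suffix
    for disk in machine["spec"].get("disks", []):
        # clone_volume returns the name of the cloned volume.
        disk["name"] = clone_volume(disk["name"], suffix)
    return machine
```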
A pool could then be created by referencing a template. Upscaling the pool will lead to new instances of VirtualMachine and, once they run, to new instances of VM.
Eventually this could also allow a pattern where the user defines the K8s resource to be used for VMs:
kind: StatefulSet
metadata:
  annotations:
    kubevirt.io/launch-from-template: barbar-tpl
spec:
  limits:
  …
  containers:
  - name: launcher
    image: kubevirt/launcher