- Everything via explicit calls (most flexibility)
POST /apis/kubevirt.io/v1alpha1/vms/defaults
POST /apis/kubevirt.io/v1alpha1/vms/store
# wait for a day
GET /apis/kubevirt.io/v1alpha1/vms/store/runtime
POST /apis/kubevirt.io/v1alpha1/namespaces/default/vms
GET from the store:
apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
  annotations:
    kubevirt.io/revision: 15
spec:
  restartPolicy: always
  domain:
    os:
      type:
        os: hvm
There is an "edit" revision on the document, indicating which revision was sent to the runtime. This allows further editing in the store while still maintaining a reference between the config store and the runtime version.
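The revision linkage described above can be sketched as a minimal in-memory store. All names here (`ConfigStore`, `mark_running`, and so on) are hypothetical illustrations, not an actual KubeVirt API:

```python
# Sketch of a config store that tracks an "edit" revision: every write
# creates a new revision, while runtime_revision records which revision
# was last sent to the runtime. Names are illustrative only.

class ConfigStore:
    def __init__(self):
        self.revisions = {}           # revision number -> spec document
        self.latest = 0               # "edit" revision, advanced on every write
        self.runtime_revision = None  # revision currently running

    def put(self, spec):
        """Store a new spec; the server increases the revision."""
        self.latest += 1
        self.revisions[self.latest] = spec
        return self.latest

    def mark_running(self, revision):
        """Record which revision was posted to the runtime."""
        self.runtime_revision = revision

store = ConfigStore()
rev = store.put({"kind": "VM", "spec": {"restartPolicy": "always"}})
store.mark_running(rev)
store.put({"kind": "VM", "spec": {"restartPolicy": "never"}})  # keep editing
# The runtime still references revision 1 while the store is at revision 2.
assert store.runtime_revision == 1 and store.latest == 2
```

The point of the sketch: editing in the store only moves `latest`, so the reference from the runtime back to its originating revision stays intact.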
- Defaulting is implicit
POST /apis/kubevirt.io/v1alpha1/vms/store
POST /apis/kubevirt.io/v1alpha1/vms/defaults
# wait for a day
GET /apis/kubevirt.io/v1alpha1/vms/store/runtime
POST /apis/kubevirt.io/v1alpha1/namespaces/default/vms
- Extra spec
- We have to evolve/convert and manage different versions throughout the project, but it is possible. The rest is like in solution 2)
- Might make sense when hotplug and permanent config changes for the next run come into play
- Embedded annotations
- Can be fit into a runtime description
- Indicate the flavour with an annotation which is always kept
- Expansion on POST time (but keeping the annotation)
- On application updates, the flavour needs to be taken into account to change defaults to better/corrected values
POST /apis/kubevirt.io/v1alpha1/vms/store
GET /apis/kubevirt.io/v1alpha1/vms/store/runtime
POST /apis/kubevirt.io/v1alpha1/namespaces/default/vms
POST to store:
apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
  annotations:
    kubevirt.io/flavour: windows10 # indicate template to use
spec:
  domain:
    os:
      type:
        os: hvm # this might override flavour defaults
GET from the store:
apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
  annotations:
    kubevirt.io/revision: 15
    kubevirt.io/flavour: windows10
spec:
  domain:
    devices:
      # something added by the flavour
    os:
      type:
        os: hvm
The runtime config now contains a revision and the flavour. For upgrading our specs, we can now always replay the expansion if we want.
With a revision, we can always get the corresponding spec for that revision, even if we do updates in the meantime:
GET /apis/kubevirt.io/v1alpha1/vms/store/runtime?revision=15
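The expansion at POST time can be sketched as a recursive merge of the posted spec over the flavour's defaults, where user-provided fields win. The `FLAVOURS` table and the `expand`/`merge` helpers are hypothetical names for illustration:

```python
# Sketch: expand a flavour into a posted VM spec at POST time.
# User-provided fields override flavour defaults; the flavour annotation
# itself is kept so the expansion can be replayed on application updates.
# FLAVOURS, merge(), and expand() are illustrative, not a real KubeVirt API.

FLAVOURS = {
    "windows10": {
        "domain": {
            "devices": {"video": "qxl"},   # something added by the flavour
            "os": {"type": {"os": "hvm"}},
        }
    }
}

def merge(defaults, overrides):
    """Recursively overlay user fields on flavour defaults."""
    result = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

def expand(vm):
    flavour = vm["metadata"]["annotations"]["kubevirt.io/flavour"]
    expanded = dict(vm)
    expanded["spec"] = merge(FLAVOURS[flavour], vm["spec"])
    return expanded

posted = {
    "metadata": {"name": "testvm",
                 "annotations": {"kubevirt.io/flavour": "windows10"}},
    "spec": {"domain": {"os": {"type": {"os": "hvm"}}}},
}
expanded = expand(posted)
assert expanded["spec"]["domain"]["devices"] == {"video": "qxl"}
```

Because the annotation survives expansion, re-running `expand` against a newer flavour definition is exactly the "replay" mentioned above.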
Updating the spec for the next run, without thinking about hotplug, is easy: for instance, do a PUT to /apis/kubevirt.io/v1alpha1/vms/store. This will increase the revision on the server side.
Theoretically, temporary hotplug is nothing the store needs to take care of. The main problem is that you would end up with a revision in the runtime which looks different from the revision you originally used from your store. It is not clear what is best in this case. One solution would be "subrevisions".
However, most likely we don't have to take care of it at all. Just doing a PUT to the runtime should be good enough.
Here the biggest problem is keeping the PCI address holes which might appear because of temporary hotplugs, so that a new permanent device, already made available in the current run via hotplug, stays at the same address after the restart. We will have to express that in a RESTful way.
Additional tags on devices might help (hotplug=true && keep=false vs. hotplug=false && keep=true vs. hotplug=true && keep=true).
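A sketch of how such tags could drive what ends up in the next-run spec. The tag combinations come from the line above; the device layout and filtering logic are assumptions for illustration:

```python
# Sketch: decide per device what happens on restart, based on the
# hotplug/keep tags discussed above. The device list is illustrative.

devices = [
    {"name": "disk0", "hotplug": False, "keep": True},   # permanent device
    {"name": "nic1",  "hotplug": True,  "keep": False},  # temporary hotplug
    {"name": "disk2", "hotplug": True,  "keep": True},   # hotplugged, made permanent
]

# Devices kept for the next run; temporary hotplugs drop out, but their
# PCI addresses would need to stay reserved as "holes" in the next-run spec.
next_run = [d["name"] for d in devices if d["keep"]]
temporary = [d["name"] for d in devices if d["hotplug"] and not d["keep"]]

assert next_run == ["disk0", "disk2"]
assert temporary == ["nic1"]
```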
Something else which might help: internally store one document containing all three versions (posted/putted, defaulted, next-run) to guarantee atomicity. We are getting into transactional-safety territory. Of course a controller can help. Would that require a revert then?
When we have named devices (like containers in pods), we can clearly reference them and do temporary hotplug on the runtime alone. Then, when a device is permanently hotplugged, we can look up the runtime config, merge it with the config PUT to the store, and send it through the defaulter. Finally, we remove the temporary device again and store the new spec in the store.
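With named devices, the permanent-hotplug flow described above could look roughly like this. `store_spec`, `runtime_spec`, `make_permanent`, and `apply_defaults` are hypothetical placeholders, not a real API:

```python
# Sketch of the flow for making a hotplugged device permanent, assuming
# devices are named like containers in pods. All names are illustrative.

def make_permanent(store_spec, runtime_spec, device_name, apply_defaults):
    # 1. Look up the hotplugged device in the runtime config by name.
    device = next(d for d in runtime_spec["devices"]
                  if d["name"] == device_name)
    # 2. Merge it into the spec PUT to the store and run the defaulter.
    merged = dict(store_spec)
    merged["devices"] = store_spec["devices"] + [dict(device, hotplug=False)]
    merged = apply_defaults(merged)
    # 3. The temporary device would then be removed from the runtime
    #    and the new spec stored (not shown here).
    return merged

store_spec = {"devices": [{"name": "disk0", "hotplug": False}]}
runtime_spec = {"devices": [{"name": "disk0", "hotplug": False},
                            {"name": "nic1", "hotplug": True}]}
# Identity defaulter for the sketch; the real one would fill in defaults.
new_spec = make_permanent(store_spec, runtime_spec, "nic1", lambda s: s)
assert [d["name"] for d in new_spec["devices"]] == ["disk0", "nic1"]
```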
If not shut down by the KubeVirt infrastructure, the VM will be restarted:
apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
spec:
  restartPolicy: always
  domain:
    os:
      type:
        os: hvm
---
apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
spec:
  restartPolicy: never
  domain:
    os:
      type:
        os: hvm
Such VMs don't care about PCI addresses; you restart them after you are done. There we only care about temporary hotplug in the runtime (if at all).
- core components are not affected by document changes for administrative tasks
- no core component has to understand flavours
- either a controller pre-expands it or virt-handler has to do the work
- The revision in metadata can easily and reliably link store and runtime (meaning: that revision should currently be running)
- config changes for next runs can easily be covered, since we have our own admin space/storage for such tasks
- simpler runtime spec
- spec versioning would be very easy
- "duplicating" endpoints, e.g. VM, VMPool + Defaulting
- Need to have a revision in metadata
- More rest calls? Depends on the implementation (controller vs client based)
- Spec is, depending on the case, stored many times
- Unexpanded version including flavours
- Expanded version for runtime (with or without defaulting)
- Expanded version for next run (including "left out" PCI addresses for temporary hotplug)
Including the expected VM phase in the spec and syncing via a controller?
- A very simple controller just copies over or deletes the VM based on such a Spec phase, using exactly these rest calls.
- This way, you are mixing runtime information with a specification.
- Changing the spec without applying the changes is hard, if not impossible
- If you update the spec with a different config version, would you also have to set the phase to stopped?
- How to distinguish between restartPolicy and Running/Stopped phase in this kind of spec?
- Even having a flag for "up"/"down", independent of the phase, does not solve this; it has the same problems.
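The "very simple controller" from the first bullet above could be sketched as follows. The `phase` field in the spec and the `post_runtime`/`delete_runtime` helpers are hypothetical stand-ins for the POST/DELETE calls to /apis/kubevirt.io/v1alpha1/namespaces/default/vms:

```python
# Sketch of the simple controller described above: it copies the VM to
# the runtime or deletes it there, driven by a phase field in the spec.
# The phase field and the REST helpers are illustrative assumptions.

def reconcile(vm, runtime_exists, post_runtime, delete_runtime):
    phase = vm["spec"].get("phase")
    if phase == "Running" and not runtime_exists:
        post_runtime(vm)                        # copy the stored spec over
    elif phase == "Stopped" and runtime_exists:
        delete_runtime(vm["metadata"]["name"])  # tear the runtime VM down

actions = []
vm = {"metadata": {"name": "testvm"}, "spec": {"phase": "Running"}}
reconcile(vm, runtime_exists=False,
          post_runtime=lambda v: actions.append(("post", v["metadata"]["name"])),
          delete_runtime=lambda n: actions.append(("delete", n)))
assert actions == [("post", "testvm")]
```

The sketch also makes the objection above concrete: the controller cannot tell from this spec alone whether a phase change or a config change triggered the reconcile.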