
@rmohr
Last active May 10, 2017 13:42
dump vm store thoughts

Covers

Store a VM for later start (maximum REST version, full client control):

  1. Everything via explicit calls (most flexibility)
 POST /apis/kubevirt.io/v1alpha1/vms/defaults
 POST /apis/kubevirt.io/v1alpha1/vms/store
 # wait for a day
 GET  /apis/kubevirt.io/v1alpha1/vms/store/runtime
 POST /apis/kubevirt.io/v1alpha1/namespaces/default/vms
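The explicit flow above can be sketched with a minimal in-memory model. Everything here (the `apply_defaults` helper, the `Store` class, the field names) is hypothetical and only illustrates the call order: the client defaults the spec, stores it, later fetches the expanded runtime version, and posts that to the namespace.

```python
# Hypothetical in-memory model of the explicit client-driven flow.

def apply_defaults(spec):
    # Stands in for POST /apis/kubevirt.io/v1alpha1/vms/defaults
    defaulted = dict(spec)
    defaulted.setdefault("restartPolicy", "always")
    return defaulted

class Store:
    # Stands in for /apis/kubevirt.io/v1alpha1/vms/store
    def __init__(self):
        self.revision = 0
        self.specs = {}

    def post(self, spec):
        # Each POST/PUT bumps the revision server side
        self.revision += 1
        self.specs[self.revision] = spec
        return self.revision

    def get_runtime(self, revision=None):
        # Stands in for GET /apis/kubevirt.io/v1alpha1/vms/store/runtime;
        # the returned spec is tagged with the kubevirt.io/revision annotation
        rev = revision or self.revision
        runtime = dict(self.specs[rev])
        runtime["revision"] = rev
        return runtime

store = Store()
spec = apply_defaults({"domain": {"os": {"type": {"os": "hvm"}}}})
store.post(spec)
# ... wait for a day ...
runtime = store.get_runtime()
# finally: POST runtime to /apis/kubevirt.io/v1alpha1/namespaces/default/vms
```

The point of the sketch is that the revision tag travels with the runtime spec, which is what later links the running VM back to its stored config.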

GET from the store:

apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
  annotations:
    kubevirt.io/revision: "15"
spec:
  restartPolicy: always
  domain:
    os:
      type:
        os: hvm

There is an "edit" revision on the document, indicating which revision is sent to the runtime. This allows further editing in the store, while still maintaining a reference between the config store and the runtime version.

  2. Defaulting is implicit
 POST /apis/kubevirt.io/v1alpha1/vms/store
     POST /apis/kubevirt.io/v1alpha1/vms/defaults # called implicitly server side
 # wait for a day
 GET  /apis/kubevirt.io/v1alpha1/vms/store/runtime
 POST /apis/kubevirt.io/v1alpha1/namespaces/default/vms

Flavour

  1. Extra spec
  • We have to evolve/convert and manage different versions throughout the project, but it is possible. The rest is like in solution 2)
  • Might make sense when hotplug and permanent config changes for the next run come into play
  2. Embedded annotations
  • Can be fit into a runtime description
  • Indicate the flavour with an annotation which is always kept
  • Expansion at POST time (but keeping the annotation)
  • On application updates, the flavour needs to be taken into account to change defaults to better/corrected values
  POST /apis/kubevirt.io/v1alpha1/vms/store
  GET  /apis/kubevirt.io/v1alpha1/vms/store/runtime
  POST /apis/kubevirt.io/v1alpha1/namespaces/default/vms

POST to store:

apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
  annotations:
    kubevirt.io/flavour: windows10 # indicate template to use
spec:
  domain:
    os:
      type:
        os: hvm # this might override flavour defaults

GET from the store:

apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
  annotations:
    kubevirt.io/revision: "15"
    kubevirt.io/flavour: windows10
spec:
  domain:
    devices:
      # something added by the flavour
    os:
      type:
        os: hvm

The runtime config now contains a revision and the flavour. For upgrading our specs, we can now always replay the expansion if we want.
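The flavour expansion described above can be sketched as a deep merge, where user-supplied values win over flavour defaults and the flavour annotation survives so the spec can be re-expanded after an application update. The flavour catalogue and all field names below are hypothetical.

```python
# Hypothetical flavour catalogue; a real one would live server side.
FLAVOURS = {
    "windows10": {
        "domain": {
            "devices": {"tablet": {"bus": "usb"}},  # something added by the flavour
            "os": {"type": {"os": "hvm"}},
        }
    }
}

def deep_merge(defaults, overrides):
    """Recursively merge two dicts; overrides take precedence."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def expand(vm):
    # Expansion at POST time: apply the flavour but keep the annotation,
    # so the expansion can be replayed later.
    flavour = vm["metadata"]["annotations"]["kubevirt.io/flavour"]
    expanded = dict(vm)
    expanded["spec"] = deep_merge(FLAVOURS[flavour], vm["spec"])
    return expanded

vm = {
    "metadata": {"name": "testvm",
                 "annotations": {"kubevirt.io/flavour": "windows10"}},
    "spec": {"domain": {"os": {"type": {"os": "hvm"}}}},
}
expanded = expand(vm)
```

Because the annotation is kept, re-running `expand` against an updated flavour catalogue is exactly the "replay" mentioned above.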

Spec change for next run

Since every stored spec carries a revision, we can always get the corresponding spec for a revision, even if we do updates in the meantime.

GET /apis/kubevirt.io/v1alpha1/vms/store/runtime?revision=15

Updating the spec for the next run, without thinking about hotplug, is easy: for instance, do a PUT to /apis/kubevirt.io/v1alpha1/vms/store. This will increase the revision server side.
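A minimal sketch of that revision history, assuming a hypothetical store: each PUT increases the revision server side, and older specs stay retrievable via the `?revision=N` query shown above.

```python
# Hypothetical revisioned store: PUT bumps the revision, GET can address
# any historical revision (like ?revision=15 on the runtime endpoint).

class RevisionedStore:
    def __init__(self):
        self.history = {}   # revision -> spec
        self.revision = 0

    def put(self, spec):
        self.revision += 1
        self.history[self.revision] = spec
        return self.revision

    def get(self, revision=None):
        # revision=None returns the latest spec
        return self.history[revision if revision is not None else self.revision]

store = RevisionedStore()
store.put({"memory": "1Gi"})   # revision 1
store.put({"memory": "2Gi"})   # revision 2: an update done in the meantime
```

The running VM can keep pointing at revision 1 while the store already holds revision 2 for the next run.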

Temporary hotplug

Theoretically, temporary hotplug is nothing the store needs to care about. The main problem is that you would end up with a revision in the runtime which looks different from the revision you originally took from your store. Not sure what is best in this case. One solution would be "subrevisions".

However, most likely we don't have to take care of it at all. Just doing a PUT to the runtime should be good enough.

Hotplug + keeping the device for the next run

Here the biggest problem is keeping the PCI address holes which might appear because of temporary hotplugs, so that a new permanent device, already made available in the current run via hotplug, stays at the same address after the restart. We will have to express that RESTfully.

Additional tags on devices might help (hotplug=true && keep=false vs. hotplug=false && keep=true vs. hotplug=true && keep=true).
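The tag combinations can be sketched as a simple filter over the device list. All device and field names below are hypothetical; the point is that devices with keep=false are dropped for the next run, but their PCI addresses are reserved as holes so permanent devices keep stable addresses.

```python
# Hypothetical devices, each tagged with hotplug/keep as described above.
devices = [
    {"name": "disk0",   "pci": "0000:00:04.0", "hotplug": False, "keep": True},
    {"name": "tmpnic",  "pci": "0000:00:05.0", "hotplug": True,  "keep": False},
    {"name": "newdisk", "pci": "0000:00:06.0", "hotplug": True,  "keep": True},
]

# Devices that survive into the next-run spec.
next_run = [d for d in devices if d["keep"]]

# PCI holes left by temporary hotplugs; address assignment for the next
# run must skip these so kept devices stay at the same address.
reserved_holes = [d["pci"] for d in devices if d["hotplug"] and not d["keep"]]
```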

Which might also help: internally store one document containing all three versions (posted/putted, defaulted, next-run) to guarantee atomicity. We are getting into transactional-safety territory. Of course a controller can help. Would it then require a revert?

When we have named devices (like containers in pods), we can clearly reference them and do temporary hotplug on the runtime alone. Then, when a permanent device is hotplugged, we can look up the runtime config, merge it with the config PUT to the store, and send it through the defaulter. Finally, remove the temporary device again and store the new spec in the store.
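That merge procedure can be sketched as follows. Everything here (function name, spec shape, the identity defaulter) is hypothetical; it only illustrates the sequence: adopt the hotplugged named devices from the runtime, run the defaulter, then drop the devices that were only temporary.

```python
# Hypothetical sketch of persisting a hotplugged named device into the store.

def persist_hotplug(stored_spec, runtime_spec, temporary_names, defaulter):
    # Merge runtime devices into the stored spec by name; devices already
    # in the store win, hotplugged ones are adopted.
    merged = {d["name"]: d for d in stored_spec["devices"]}
    for device in runtime_spec["devices"]:
        merged.setdefault(device["name"], device)
    # Send the merged spec through the defaulter.
    spec = defaulter({"devices": list(merged.values())})
    # Finally remove the devices that were only temporary.
    spec["devices"] = [d for d in spec["devices"]
                       if d["name"] not in temporary_names]
    return spec

stored = {"devices": [{"name": "disk0"}]}
runtime = {"devices": [{"name": "disk0"},
                       {"name": "newdisk"},   # hotplugged, should become permanent
                       {"name": "tmpnic"}]}   # hotplugged, temporary only
new_spec = persist_hotplug(stored, runtime, {"tmpnic"}, lambda s: s)
```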

Use-cases with VM run policy attached in the store

Always On VM (host bound)

If not shut down by the KubeVirt infrastructure, the VM will be restarted:

apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
spec:
  restartPolicy: always
  domain:
    os:
      type:
        os: hvm

One-off VM:

apiVersion: kubevirt.io/v1alpha1
kind: VM
metadata:
  name: testvm
spec:
  restartPolicy: never
  domain:
    os:
      type:
        os: hvm

Worker VMs

They don't care about PCI addresses; you restart them after you are done. There we only care about temporary hotplug in the runtime (if at all).

Pro/Contra

Pro

  • core components are not affected by document changes for administrative tasks
  • no core component has to understand flavours
    • either a controller pre-expands it or virt-handler has to do the work
  • The revision in metadata can easily and reliably link store and runtime (meaning: that revision should currently run)
  • config changes for next runs can easily be covered, since we have our own admin space/storage for such tasks
  • simpler runtime spec
  • spec versioning would be very easy

Contra

  • "duplicating" endpoints, e.g. VM, VMPool + Defaulting
  • Need to have a revision in metadata
  • More rest calls? Depends on the implementation (controller vs client based)
  • Spec is, depending on the case, stored many times
    • Unexpanded version including flavours
    • Expanded version for runtime (with or without defaulting)
    • Expanded version for next run (including "left out" PCI addresses for temporary hotplug)

Sidenote on mixing Phase and specification:

Including the expected VM phase in the spec and sync via controller?

  • A very simple controller just copies over or deletes the VM based on such a Spec phase, using exactly these rest calls.
  • This way, you are mixing runtime information with a specification.
    • Changing the spec without applying the changes is hard, if not impossible
    • If you update the spec with a different config version, would you also have to set the phase to stopped?
    • How to distinguish between restartPolicy and Running/Stopped phase in this kind of spec?
    • Even having a flag for "up"/"down", independent from the phase, does not solve this; it has the same problems.