Google Summer of Code 2017 Summary

It is already the final days of GSoC 2017; three months of challenge and coding have gone by in a flash. Thanks to the Google Summer of Code team and the CNCF organization for giving me this opportunity, and of course to my dearest mentors, Pengfei Ni and Harry Zhang, for their patient guidance. This is not my first time contributing to an open source project, but it is the first time I have worked on such a large and challenging feature. I believe unikernels will play an important role in the post-container technology world, which will benefit NFV and IoT scenarios.

Design

The original proposal can be found here.

Here is a brief description: basically, I treat a unikernel instance as a special VM, so I build the unikernel instance into a VM image (qcow2 format) and manage it with libvirt. From the hypervisor's perspective, there is no difference between a unikernel instance and a traditional virtual machine instance.
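
To make the "unikernel instance is just a special VM" idea concrete, here is a minimal sketch of booting a unikernel qcow2 image through libvirt, assuming the github.com/libvirt/libvirt-go bindings; the domain XML and the image path are illustrative, not the exact template unikshim generates.

```go
package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A unikernel instance is just a small libvirt domain backed by its qcow2 image.
	domainXML := fmt.Sprintf(`
<domain type='kvm'>
  <name>unikernel-demo</name>
  <memory unit='MiB'>128</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='%s'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>`, "/var/lib/unikernel/images/demo.qcow2")

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Start the unikernel exactly as any other VM would be started.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}
```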

For container/sandbox life-cycle management, this is what unikshim (my name for the unikernel shim built into Frakti) is responsible for, and it is the core part of my work in this project. Every pod sandbox contains exactly one container and is held by one VM instance. For more container/sandbox details, please refer to the proposal.
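
The sketch below illustrates the one-sandbox / one-container / one-VM mapping described above; the type and field names are hypothetical, not frakti's actual data structures.

```go
package unikshim

import "fmt"

// Sandbox ties a CRI pod sandbox to the single unikernel VM that backs it.
type Sandbox struct {
	ID          string // CRI sandbox ID
	DomainName  string // name of the libvirt domain backing this sandbox
	ContainerID string // the one container this sandbox may hold
	ImagePath   string // per-container qcow2 copy attached to the VM
}

// AttachContainer records the single container a sandbox may hold; a second
// container is rejected because each VM boots exactly one unikernel image.
func (s *Sandbox) AttachContainer(containerID, imagePath string) error {
	if s.ContainerID != "" {
		return fmt.Errorf("sandbox %s already holds container %s", s.ID, s.ContainerID)
	}
	s.ContainerID = containerID
	s.ImagePath = imagePath
	return nil
}
```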

For image management, the image is pulled from the internet using a hidden URL encoded in the image name, then untarred and stored on the local filesystem. Typically, the image we pull is a tar file containing one manifest that describes the image and several images described by that manifest. This is called the base image; for each container we create, the unikernel runtime prepares one image copy based on the base image and removes it when the container is removed. To support more unikernel types, I cannot build all of them at container creation time in this first stage, as described in the proposal, so building the unikernel image is the user's responsibility for now.
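
The per-container copy step could look roughly like the following; the function names and directory layout are hypothetical, not frakti's actual code.

```go
package image

import (
	"io"
	"os"
	"path/filepath"
)

// PrepareContainerImage copies the pulled base image into a per-container
// working file so writes never touch the shared base image.
func PrepareContainerImage(baseImage, containerID, workDir string) (string, error) {
	src, err := os.Open(baseImage)
	if err != nil {
		return "", err
	}
	defer src.Close()

	target := filepath.Join(workDir, containerID+".qcow2")
	dst, err := os.Create(target)
	if err != nil {
		return "", err
	}
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		os.Remove(target)
		return "", err
	}
	return target, nil
}

// RemoveContainerImage drops the copy when the container is removed.
func RemoveContainerImage(containerID, workDir string) error {
	return os.Remove(filepath.Join(workDir, containerID+".qcow2"))
}
```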

For network management, I refactored and reused frakti's CNI library for host-side network configuration, while DHCP is the in-guest solution for the unikernel instance. So to enable network integration, the unikernel image should contain a DHCP module for most unikernel technologies.

Finally, in order to support the DHCP server and the logging feature, I added a VM wrapper to every libvirt-managed VM; it is responsible for the DHCP service and for collecting logs from the VM and writing them to the specified location. The call stack is now unikshim -> libvirt -> vmwrapper -> qemu.
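
As an illustration of the wrapper's role (not the actual vmwrapper code), a minimal version might exec the real qemu process and mirror its console output to the expected log path, with the DHCP piece omitted; the environment variable and overall behavior here are assumptions.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	logPath := os.Getenv("VM_LOG_PATH") // hypothetical: where the shim expects the VM's log
	qemuArgs := os.Args[1:]             // the real qemu command line handed down by libvirt
	if len(qemuArgs) == 0 {
		log.Fatal("no qemu command line given")
	}

	logFile, err := os.OpenFile(logPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		log.Fatal(err)
	}
	defer logFile.Close()

	// A DHCP responder for the guest would also be started here; omitted for brevity.

	// Run qemu and copy its console output to the specified log location.
	qemu := exec.Command(qemuArgs[0], qemuArgs[1:]...)
	qemu.Stdout = logFile
	qemu.Stderr = logFile
	if err := qemu.Run(); err != nil {
		log.Fatal(err)
	}
}
```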

Related Work

Issues:

  1. kubernetes-retired/frakti#99
  2. kubernetes-retired/frakti#181

Pull Requests:

  1. kubernetes-retired/frakti#178
  2. kubernetes-retired/frakti#180
  3. kubernetes-retired/frakti#189
  4. kubernetes-retired/frakti#219
  5. kubernetes-retired/frakti#223
  6. kubernetes-retired/frakti#226
  7. kubernetes-retired/frakti#227

Conclusion

We can already use kubeadm to spin up a unikernel-powered Kubernetes cluster, just as @feiskyer described in the frakti deployment tutorial; the only difference is that we need to add --enable-unikernel-runtime to frakti's parameters. We can also use the CRI tools to test and debug. DHCP is still under development, however, so network connectivity is limited to the host and the VM.

I've completed most of the items in the scheduled plan, except for some open issues and passing the CRI validation test. The good news is that we also made linuxkit-built images run on unikshim, and discovered the potential for unikshim, with linuxkit, to support multiple containers per pod.

Further work

Enhance image management

Supporting image builds at container creation time is important if we want to support all Kubernetes features, because it is not easy to inject user data (user volumes, service accounts, etc.) into a pre-built unikernel image.

Support multiple containers per pod

A linuxkit-built image has a built-in containerd, which would allow us to manage multiple containers' life cycles if we find a way to communicate with it. Fortunately, vsock in qemu can help here, although it requires Linux kernel 4.4+ in the guest, which may be a little difficult to meet for now.
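
As a rough sketch of what the host side of this could look like (this code does not exist in frakti; the CID and port are placeholders, and it assumes qemu's vhost-vsock device plus the golang.org/x/sys/unix package on Linux):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	const (
		guestCID  = 3    // hypothetical CID assigned to the linuxkit VM
		guestPort = 2375 // hypothetical port a containerd proxy listens on inside the guest
	)

	fd, err := unix.Socket(unix.AF_VSOCK, unix.SOCK_STREAM, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	// Connect to the service running inside the guest over virtio-vsock.
	sa := &unix.SockaddrVM{CID: guestCID, Port: guestPort}
	if err := unix.Connect(fd, sa); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to the guest endpoint over vsock")
}
```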

Support flex volume

Traditional Kubernetes volumes are built on directory mounts, but this is not easy for a VM-based runtime and impossible for unikernels without virtio-9p support. So a disk-attach based flex volume is the final solution for the unikernel runtime.
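
A skeleton of what such a disk-attach FlexVolume driver could look like is sketched below; it only follows the general FlexVolume convention (an executable handling init/attach/mount subcommands and printing a JSON status), and the actual attach logic against libvirt is left as a placeholder assumption.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type result struct {
	Status  string `json:"status"` // "Success", "Failure" or "Not supported"
	Message string `json:"message,omitempty"`
}

func reply(r result) {
	out, _ := json.Marshal(r)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(result{Status: "Failure", Message: "missing subcommand"})
		os.Exit(1)
	}

	switch os.Args[1] {
	case "init":
		reply(result{Status: "Success"})
	case "attach":
		// Placeholder: attach a block device / qcow2 disk to the unikernel VM here,
		// e.g. through libvirt device hot-plug, instead of a host directory mount.
		reply(result{Status: "Success"})
	case "detach", "mount", "unmount":
		reply(result{Status: "Success"})
	default:
		reply(result{Status: "Not supported"})
	}
}
```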
