Notes for the virtualization presentation at
Full Virtualization
* All devices are emulated
* No resources or devices are shared with the host machine unless explicitly configured to work this way (USB, network, etc.)
* Like having an entirely separate machine inside your own
* Can be resource intensive as everything must be emulated, and a separate copy of every file must be maintained.
* Resources go to duplicating things (kernel, init, system services) that are already running on your box.
* Can be snapshotted and restored easily with most common virt platforms.
* Must emulate everything, so slow to start
* Other operating systems are supported
Chroot jails
* Filesystem based separation
* Processes are bound to a level of the filesystem they cannot see beyond.
* Does not offer resource control beyond the filesystem level. Processes can still see other processes running.
* You can nice processes, but that's about it.
* Must maintain a copy of all libraries, configuration files, and dependencies for your chroot'ed application to run (you need every file required for a functioning Linux system)
* Security concerns: if the jail is 'broken', the intruder has access to the host system
* Very lightweight.
Solaris Zones
* Takes jails to their logical conclusion
* Creates a thin virtualization layer around real hardware and allows you to allocate it to zones.
* A few kinds of zones (one privileged global zone, plus non-global zones)
Container based virtualization (LXC)
* Uses cgroups (control groups) and namespace isolation to effectively jail processes.
* cgroups work a bit like a firewall for processes, allowing you to filter, create groups, and change priorities all the way down to the scheduler
* Kernel is shared, so less flexible in terms of what can be run (can't run i386 apps on a 64-bit kernel, can't run other operating systems or kernel versions)
* Since the kernel is shared, you don't need to maintain copies of everything
* Extremely fast to set up and tear down
* Also effective for distributing application stacks: package on your machine with all deps and ship to the server. It will run as is, exactly as on your machine.
* Isolated from other containers on the system and from the host machine. If security is compromised, other containers are unaffected.
Docker
* Designed to be used as a package manager for application stacks. That's right: bundling the entire OS (stripped down to the minimal parts needed to run your application).
* Container orchestration system
* Single process per container. The implications of this are massive (security, etc).
* Tied directly to LXC (much like Vagrant was tied to VirtualBox)
* Currently uses cgroups
* Another container orchestration system, built by CloudFoundry; has swappable backends though (think Windows!)
Words around virtualization you will hear:
Hyper-V - Microsoft's full virtualization and management application. Has fanciness for communicating directly with hardware devices. Runs on top of Server 2008/2012.
ESXi - VMware's hypervisor platform; has lots of fancy hardware support (1 TB RAM, SANs, bonding 32 NICs together...) but does the same things as Xen. This is an actual hypervisor system, meaning it does nothing without running systems on top of it.
Xen - Open source Linux virtualization hypervisor system; runs its own special kernel built for running other hosts inside. Many public clouds run on Xen: Rackspace, Amazon... The downside of this is difficulty of debugging. Each guest kernel runs its own scheduler, which is at the mercy of the Xen kernel's scheduler. This can add CPU latency in weird places (why is my process so slow? Oh, it's waiting on the NIC; oh, the Xen kernel is blocking access because a proc has nice'd itself above my VM).
KVM - The Kernel-based Virtual Machine module exposes virtualized hardware in the Linux kernel and allows you to boot machines on it. Not a real hypervisor, because it's still a stock Linux kernel with a virtualization module loaded. The upside of this is observability and significantly lower overhead.
LibVirt - API for communicating with KVM, Xen, or ESX. This is the glue between most graphical or command line management tools and hypervisors. If you've ever used cPanel, Jenkins, Foreman, OpenStack... all that is libvirt under the covers.
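As a sketch of what those tools feed libvirt underneath, here is a minimal, hypothetical domain definition; the name and disk path are made up, and you'd load it with `virsh define` and boot it with `virsh start`:

```xml
<!-- Minimal libvirt domain sketch (name and disk path are examples). -->
<domain type='kvm'>
  <name>demo</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```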
LXC - Uses control groups (kernel-level process control for RAM, CPU, network and I/O) and namespacing to provide visibility, or lack thereof, into processes running on the host or in other containers. Filesystems can do quite a bit to help LXC out, e.g. snapshotting instantly to clone containers.
OmniOS - Super minimal distribution of Illumos (the open source fork of Solaris) designed for running zones on. The entire OS is loaded into memory (it's tiny), leaving your resources free for virtualizing.
SmartOS - Joyent's distribution of Illumos (the open source fork of Solaris) designed for running zones on. Well tuned for running their infrastructure, with high visibility and debuggability. Extremely well supported.
CoreOS - Super minimal distribution of Linux (kernel and systemd) with only the requirements needed for running containers, along with some fancy service discovery.
OpenStack - Redhat-backed effort for IaaS; full virtualization using KVM, designed for running lots of VMs. Runs on its own distro of Linux, and works with many different hypervisors.
How does this affect me (as a dev)?
We should all be using virtualization for testing and experimentation locally. Our computers have long been powerful enough to make this work efficiently. (Learn to use Vagrant.)
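For the Vagrant route, a minimal Vagrantfile is all it takes; the box name and memory size below are just examples:

```ruby
# Minimal Vagrantfile sketch; box name and memory size are examples.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"   # any base box works here
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 512                       # keep the guest small
  end
end
```

`vagrant up` boots it, `vagrant destroy` throws it away.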
You are bound to find chroot jails all over the place, as this was best practice for deploying application servers for a VERY LONG time. (chroot was written in 1979, and popularized at Sun in the mid 80s.)
It's looking like containers are the way of the future in terms of deployment, so it may be time to get comfortable using them.
Docker has been/is making a lot of noise and people seem to like it. Docker just wraps LXC in easy-to-use commands for moving containers around and running them. Think about never having to worry about RVM again, because your OS is tailored to run your app. Combine this with configuration management scripts to set up your environment, and you have a complete deployable unit sitting in source control.
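As a concrete sketch of such a deployable unit, a Dockerfile along these lines bundles the runtime with the app; the base image, package, and filenames here are made up:

```dockerfile
# Hypothetical Dockerfile: the Ruby runtime ships inside the image,
# so the server never needs RVM at all.
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y ruby
ADD . /app          # copy the app into the image
WORKDIR /app
CMD ["ruby", "app.rb"]
```

`docker build .` turns this into an image you can run anywhere docker runs.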
Docker is really useful in CI environments. Spinning up your dev and test machines almost instantly is a big win, as is knowing they are clean every time you run.
** Live demo **
* full virtualization
Show disk space of the vagrant box, and the contents of the vmdk and manifest. Spin up the box, snapshot it, destroy it, and revert.
* chroot jail
mount the disk space (fstab entry):
# tmpfs /tmp/jail tmpfs rw,size=1G,nr_inodes=5k 0 0
run debootstrap
sudo debootstrap --variant=buildd --arch i386 lucid /tmp/jail
chroot into it
run ps aux
install apache
expose the root directory
* Container based
Getting a project up and running:
#look at the filesizes!
sudo docker images
sudo docker run -i -t ubuntu /bin/bash
# install the stuff you need
sudo docker ps -a
sudo docker commit <HASH> urbanskims/python
sudo docker kill <HASH>
# couple of strategies here, you can provision the whole thing in your Dockerfile or you can use snapshots, whichever makes sense for your use case.
sudo docker build .
vagrant up