KVM Crew Project Notes
XEN->KVM Migration Project Notes
Author: Ian Helmke (ihelmke@ccs.neu.edu)
Author: Mike Coppola (coppola.mi@husky.neu.edu)
Description: This document is intended as a guide detailing our
findings, considerations, and thoughts regarding the XEN-to-KVM
migration of CCIS Linux XEN hosts. In it, we will describe (directly
or abstractly) the process of performing many different KVM tasks that
we believe are necessary for its day-to-day use. We will also discuss
the various KVM host management tools available and describe how some
of the operations in KVM translate across these management layers.
CONTENTS:
0. KVM ARCHITECTURE
1. BUILDING A KVM HOST
2. USING KVM
3. TOOLS FOR MANAGING KVM HOSTS
4. OTHER ISSUES
1. BUILDING A KVM HOST
KVM is, first and foremost, a kernel module that is loaded into the
Ubuntu kernel.
To install KVM, simply run the command:
sudo apt-get install qemu-kvm
That's literally all you need to do to get vanilla KVM running with no
frills. The qemu-kvm package will install the kernel module and any
other dependencies you need to get things running (including QEMU).
The computer that KVM is running on MUST have hardware virtualization
support present and enabled or KVM will not function. Either AMD-V or
Intel VT-x will suffice.
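A quick way to check for this support from a shell (a non-zero count
means the CPU advertises the necessary extensions):
egrep -c '(vmx|svm)' /proc/cpuinfo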
NOTE: The kvm-pxe package is not always installed but is necessary for
booting VMs. If an error regarding a PXE ROM image is encountered when
starting a VM, then run the command:
sudo apt-get install kvm-pxe
In order to establish network connectivity in guests, a bridge device
must first be created. See http://www.linux-kvm.org/page/Networking for
more information.
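As a rough sketch, a bridge on Ubuntu can be defined in
/etc/network/interfaces with the bridge-utils package installed; the
interface names (br0, eth0) and the use of DHCP are just examples, and
the linked page has the full details:
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0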
2. USING KVM
2.1 CREATING A DISK IMAGE
To build a KVM host, one merely needs a hard disk image file (or some
other form of hard disk). Hard disk images can be generated using the
command line:
qemu-img create [-f fmt] [-o options] filename [size]
qcow2 is the QEMU 'standard' format, considered the most versatile by
most of the sources that we've looked at. Performance information is
available at:
http://www.linux-kvm.org/page/Qcow2
It supports encryption, compression, and snapshotting, and it expands
dynamically. Performance appears comparable to raw image files in the
tests that we have seen.
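As a concrete (illustrative) example, the following creates a 20 GB
dynamically-expanding qcow2 image named vmdisk.qcow2:
qemu-img create -f qcow2 vmdisk.qcow2 20G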
2.2 STARTING KVM
To run a disk image, simply use the command:
kvm [disk_image]
This runs KVM with all the default options, and the given disk is the
device that the VM will boot from. KVM simulates a normal computer in
almost every respect and, unlike XEN, will look at the boot sector of
the hard drive, load GRUB, etc., just like a normal physical machine.
2.3 COMMAND LINE OPTION TOUR
Now we'll focus on a bunch of the different command-line switches and
options that you can feed to the 'kvm' command.
2.3.1 Configuring Memory
To configure base memory for a VM, use the -m switch; memory size is
given in megabytes (you can instead suffix the number with a G to
specify gigabytes of RAM).
kvm -m 256 image.qcow2
2.3.2 Configuring attached volumes
The default KVM command gives you a quick way to fire up a VM using
only one disk, but you can attach up to 4 (along with a CD-ROM drive).
http://telinit0.blogspot.com/2009/06/playing-with-kvm-and-physical-hdds.html
In addition, it is possible to mount a physical drive directly as a
virtual hard drive by simply invoking the kvm command with the
physical device as the drive. (As I understand it, you can mount
specific partitions as well. I'm not sure how booting from these
partitions would work, however, as they would technically have no boot
sector - this is something we would need to investigate further.)
kvm /dev/sda
kvm /dev/sda1
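As a sketch, an ISO image can be attached as the CD-ROM drive
alongside the boot disk (the ISO path here is just an example):
kvm image.qcow2 -cdrom /path/to/install.iso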
2.3.3 Configuring the boot device with multiple volumes
Multiple volumes may be specified using the -hdX flags like so:
kvm -hda image.img -hdb otherimage.img
You can specify the boot device using -boot order=X, where X is:
a for the first floppy
c for the first hard disk
d for the first CD-ROM
n for the first network device (Etherboot)
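For example, the following should boot from the CD-ROM image first
(the file names are illustrative):
kvm -hda image.img -cdrom install.iso -boot order=d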
2.3.4 Configuring number of processors
To define how many processors to assign a VM, use the -smp flag like so:
kvm -smp 5 someimage.qcow2
2.3.5 Configuring network connectivity
If you created the bridge device that was talked about in the first
section, then all you need to do to get network connectivity in guests
is run:
kvm -net nic -net tap image.qcow2
Here, -net nic will create a virtual Ethernet device, and -net tap will
connect it to the bridge device on the host.
According to the documentation on the KVM web site (again, see the
link in the first section), this is much better performance-wise than
most other solutions. There may be other ways to do this, but this
approach is simple and, according to the docs, offers good
performance.
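If you run more than one guest this way, each virtual NIC should get
its own MAC address; the address below is just an example:
kvm -net nic,macaddr=52:54:00:12:34:56 -net tap image.qcow2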
2.3.6 Configuring virtual displays
The -nographic flag will start the guest without a display. You can
also use the -vnc flag to create a VNC listener that you can connect to
and view the guest display. For example:
kvm -vnc hostname:port image.qcow2
where hostname is the client that will be connecting to the listener,
and port (+5900) is the TCP port number that the VNCd will listen on.
It's really weird, but we mean that you add 5900 to the number that you
specified in the command. So, -vnc hostname:100 means the VNCd will
listen on port 6000.
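For example, the following (with an illustrative image name) listens
only on the loopback interface, display 1, i.e. TCP port 5901:
kvm -vnc 127.0.0.1:1 image.qcow2
You would then point a VNC client at 127.0.0.1:5901 to see the guest's
display.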
2.4 GUEST MANAGEMENT
Guest management on KVM is interesting (and extremely simple) because,
unlike XEN or OpenVZ, there is no KVM daemon running in the background
with a handle on all of the KVM machines. All of the KVM guests on the
host exist as ordinary Linux processes, and running kvm from the
command line is the way to start these virtual machines. If a machine
locks up completely and KVM does not respond, you can simply kill the
process.
This has the effect of making it rather difficult to tell what guests
are running on the host. You can use something like:
ps aux | grep kvm
to find all kvm processes running on the host machine. This is
probably the simplest way to find out what virtual machines are
running. You can run kvm with the -name parameter to help if you're
searching this way; the -name option gives the virtual machine a name
that you can search for in the process list (the process still
identifies itself as kvm, but you can grep for the name). It also
affects the title of the VNC or SDL window. It's a bit of a gimmick,
but it's something.
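For example (the name 'webserver' is just illustrative):
kvm -name webserver -m 256 image.qcow2
ps aux | grep webserver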
2.5 TOOLS TO CREATE KVM GUESTS AUTOMATICALLY
There are several tools to create KVM guests automatically, including
vmbuilder (apt-get install vmbuilder), which can automatically build
vanilla virtual machines. This is useful for building out simple
guests very quickly. If we want guests with more frills (as we get
through PXE booting, like we do with lab machines) we will probably
need to come up with our own solution.
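A vmbuilder invocation looks roughly like the following; the suite,
memory size, and hostname are illustrative and the exact flags should
be checked against the vmbuilder documentation:
sudo vmbuilder kvm ubuntu --suite lucid --arch amd64 --mem 256 --hostname testvm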
2.6 THE QEMU MONITOR SHELL
A number of the more advanced features of KVM are accessed through the
QEMU monitor shell. If you run a kvm instance with the -nographic or
-vnc option, you can get to this from the attached terminal with
C-a c. From this terminal, you can:
- snapshot the virtual disk
- attach/detach drives from the VM
- set migration parameters and initiate a migration
There are also ways to forward this console to a different output
stream. We are omitting specific examples from this section because,
so far as I can tell, the documentation for using these features
directly is pretty poor. Most people do these things through a system
to manage KVM, which we will be discussing later.
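That said, a few monitor commands come up repeatedly in other
write-ups; as a rough sketch (the snapshot name and migration target
are illustrative, and savevm requires a qcow2 disk):
(qemu) info block
(qemu) savevm cleanstate
(qemu) loadvm cleanstate
(qemu) migrate tcp:otherhost:4444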
3. TOOLS FOR MANAGING KVM HOSTS
It's probably become clear that using KVM by itself is a pretty
horrible solution in terms of allowing you to manage your virtual
machines. There are actually two main technologies that are used to
aid in this particular task:
libvirt - libvirt is a C library and daemon that monitors virtual
machines and allows the user to manipulate them. No sane person
interacts with it directly; the most common tool seems to be virsh, a
shell which lets you monitor and control virtual machines. There are
other virt-* utilities that can be used to quickly manipulate (or
create) virtual machines within libvirt.
Virtual machine profiles in libvirt are xml files containing info
about the virtual machine.
ganeti - ganeti is a virtual machine cluster management tool. It
allows you to do many of the same things libvirt does but features
tighter integration between virtual hosts.
3.1 USING LIBVIRT TOOLS TO MANAGE KVM
The simplest libvirt tool to use to manage kvm machines is
virsh. Virsh uses xml configuration files to store information about
the virtual machines it is managing. A guide on how to create these
xml files is located at:
http://libvirt.org/formatdomain.html
Note: Many of the tools that exist to automatically create virtual
machines have hooks to create libvirt configuration files and register
the VMs with libvirt automatically.
In terms of API stability, one of the stated goals of libvirt is to
provide a stable API which isolates its users from changes in the
underlying hypervisor. Thus, it *may* be the case that using libvirt
will provide us with more stability than using kvm directly.
A good five-minute presentation giving an overview of many of
libvirt's features can be found at:
http://honk.sigxcpu.org/projects/libvirt/libvirt-dc09.pdf
This is from 2009 and so is a bit out of date - at the time of this
presentation the snapshotting API was slightly more primitive.
3.1.1 libvirt organization
Libvirt's notion of a virtual machine is abstracted in something
called a domain. The physical machine that runs the domains is called
a node. Libvirt also provides APIs for dealing with complex virtual
networks between virtual machines. For more information, see:
http://libvirt.org/archnetwork.html
http://libvirt.org/formatnetwork.html
The first page provides a background as to what can be done with the
networking features in libvirt, and the second is a reference to the
networking device xml format (which allows you to describe these
virtual bridge devices to libvirt and add them to your network with
minimal work). This isn't necessary to use libvirt, but may be helpful
depending on what kind of setup we opt to use down the line.
Libvirt also has a sophisticated notion of storage. It can of course
mount simple physical volumes on hosts, but it also has a notion of
storage 'pools', which aid in the management of LVM and disk
partitioning for virtual machines.
Libvirt has a URI specification that allows it to identify different
hypervisors and to address both local and remote hosts. Libvirt is
actually able to monitor remote hosts if the corresponding URI is
added to its configuration.
[hypervisor]+[transport]://remotehost/system (e.g. qemu+ssh://remotehost/system)
[hypervisor]:///system (e.g. qemu:///system for local KVM)
Also note that libvirt is a C library with bindings in several
different languages. If we are so inclined, we can do our own things
with it in C#, Python, or one of several other languages. The full
list of bindings is available at:
http://libvirt.org/bindings.html
3.1.2 Managing guests in libvirt
As stated above, XML configuration files define virtual machines in
libvirt. An example is included in the same directory as this git repo
(see vm.xml). This file essentially specifies the hardware
configuration of the virtual machine you wish to run, and as you would
expect there are a great many customizable options in here.
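As a very rough sketch of the general shape of such a file (the names,
paths, and sizes are illustrative; see the formatdomain reference
above and vm.xml for real examples):
<domain type='kvm'>
  <name>testvm</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/testvm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>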
3.1.3 A brief tour of VM management with libvirt
You can start virsh by typing 'virsh' into any console - which will
put you in the virsh shell. Note that you may want to do this as
administrator depending on how your network is set up.
You can view the virtual machines currently running under libvirt
using the list command, which will get you something like:
 Id Name                 State
----------------------------------
  1 virtualhost          Running
With the --all flag you can get the list of domains on the machine
independent of state.
You can use the start/shutdown commands to start and stop a virtual
machine. The destroy command is the 'hard' power-off (the same as
unplugging a physical box).
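For example, from within the virsh shell (using the domain name from
the listing above):
virsh # start virtualhost
virsh # shutdown virtualhost
virsh # destroy virtualhost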
The edit command will allow you to edit the XML configuration
associated with a domain, so you can change its virtual hardware.
The suspend command will suspend a virtual machine's state. This
command will not free up memory on the physical host; it merely pauses
execution within the virtual machine (at least as far as all of the
documents that I have been able to find have told me).
In theory - snapshots can be done using snapshot-create. They can be
managed with the following:
snapshot-create Create a snapshot
snapshot-current Get the current snapshot
snapshot-delete Delete a domain snapshot
snapshot-dumpxml Dump XML for a domain snapshot
snapshot-list List snapshots for a domain
snapshot-revert Revert a domain to a snapshot
We haven't had a chance to test this out too extensively, so it isn't
clear if this is going to do exactly what we want it to. Since domains
and devices have their own separate representations within libvirt, it
isn't clear whether this will actually snapshot the domain and all
associated devices or just the domain's hardware configuration -
something we haven't gotten a chance to investigate quite yet.
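The basic flow, as far as we can tell, would look something like this
(again using the example domain name from above):
virsh # snapshot-create virtualhost
virsh # snapshot-list virtualhost
virsh # snapshot-revert virtualhost <snapshot name from the list>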
The setmem command allows you to change the currently allocated memory
to a domain.
The migrate command allows one to migrate a domain from one host to
another by specifying the domain and the destination URI.
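For example (memory is given in kilobytes, and the domain name,
destination host, and URI here are illustrative):
virsh # setmem virtualhost 262144
virsh # migrate --live virtualhost qemu+ssh://otherhost/system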
3.1.4 Adding devices to libvirt and its utilities
The easiest way to add a VM to libvirt is through the virt-install
command. This command can be used to create virtual machines with an
OS ready to go, but it can also be used to add a hard drive along with
some basic hardware into libvirt with minimal hassle.
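A minimal invocation that imports an existing disk image might look
roughly like this (the name, path, and bridge are illustrative, and
the flags should be checked against the virt-install man page):
sudo virt-install --name testvm --ram 256 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2 \
  --import --network bridge=br0 --vnc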
There are commands in virsh which will allow you to define drives and
network devices independently of base virtual machines. This can be
useful if you would like to hotswap certain drives between virtual
hosts for whatever reason (or plug machines into different virtual
networks, etc.).
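For example, to hot-attach and then detach a block device (the device
path and target name are illustrative):
virsh # attach-disk virtualhost /dev/vg0/extradisk vdb
virsh # detach-disk virtualhost vdb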
4. OTHER ISSUES
4.1 UPGRADING KVM
KVM is considered stable - looking around the internet, it appears
that people encounter the occasional KVM issue here and there as they
upgrade. It appears that the KVM folks regard upgrade issues as bugs,
not features, so we should be able to get at least some degree of
support if things break when we update. We certainly won't have to
rebuild our entire VM cluster with every Ubuntu upgrade, but there may
be an occasional road bump between distro releases to work out (much
like everything else related to upgrading).