Virtualization platform setup

Briefing

The host machine is a Dell Vostro laptop running Fedora 18. A CentOS 6.4 OpenVZ host will be installed in a KVM VM on top of the Fedora host, and OpenVZ containers will run inside the CentOS OpenVZ host. The latest version (6.4) was picked because the mirrors no longer carry the older 6.3 release, which is no longer supported.

Execution

Install libvirt utilities on Fedora

# yum install libvirt virt-install libguestfs-tools
# service libvirtd start
  • Make sure KVM kernel modules are loaded
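For example, on an Intel CPU (use kvm_amd on AMD hardware):

# lsmod | grep kvm
# modprobe kvm_intel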

Install the OpenVZ host

# virt-install \
  --name centos \
  --ram 2048 \
  --os-variant rhel6 \
  --disk path=/home/centos.img,size=100 \
  --network network=default \
  --location http://ftp.funet.fi/pub/mirrors/centos.org/6/os/x86_64/ \
  --extra-args "console=ttyS0"

After this I ran into an issue:

> Starting install...
> Creating storage file centos.img                                                                          | 100 GB  00:00:00     
> ERROR    internal error Process exited while reading console log output: char device redirected to /dev/pts/2
> qemu-kvm: -drive file=/home/vm/CentOS-6.4-x86_64-minimal.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw: could not open disk image /home/vm/CentOS-6.4-x86_64-minimal.iso: Permission denied

This was an SELinux issue, and was fixed by setting the correct security context on the image storage location. The reason for storing the images under /home is that it had the largest partition. The correct SELinux context can be verified with:

# ls -alZ /var/lib/libvirt/
> drwx--x--x. root root system_u:object_r:virt_image_t:s0 images

and fixed:

# semanage fcontext -a -t virt_image_t "/home/vm(/.*)?"
# restorecon -R -v /home/vm/
> restorecon reset /home/vm context unconfined_u:object_r:home_root_t:s0->unconfined_u:object_r:virt_image_t:s0

Now we can continue the installation, and view the serial console:

# virsh console centos

The OS was installed with mostly default options and root password "foobar". After the installation the guest shuts down and needs to be started with "virsh start centos". Then log in on the serial console and note the IP address of the VM. At this point a static IP could be set by editing the network configuration file "/etc/sysconfig/network-scripts/ifcfg-eth0".
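A minimal static configuration could look like this (the address matches the one used for routing later in this setup; the gateway assumes the libvirt default network 192.168.122.0/24):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.122.232
NETMASK=255.255.255.0
GATEWAY=192.168.122.1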

Configure the OpenVZ host

The VM was updated with the latest security fixes and base tools were installed:

# yum update
# yum groupinstall base

The hostname was set to openvz.vm. An SSH key was generated on the Fedora host and the public key transferred to the VM. At this point OpenVZ was installed, mostly following the instructions at http://www.howtoforge.com/installing-and-using-openvz-on-centos-6.4
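For example (the VM address here is the one used later in this setup):

# ssh-keygen -t rsa
# ssh-copy-id root@192.168.122.232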

Some filesystem tweaking:

# umount /home
# lvrename VolGroup lv_home lv_vz

Modify fstab to mount the LV to /vz (and add noatime for performance)
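The resulting fstab line could look like this (the filesystem type is an assumption, adjust to match the system):

/dev/mapper/VolGroup-lv_vz  /vz  ext4  defaults,noatime  1 2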

# yum install vzkernel vzctl vzquota

SELinux was disabled because it is not supported on the OpenVZ platform, and NEIGHBOUR_DEVS in vz.conf was set to "all". At this point the OpenVZ host was rebooted and the new OpenVZ kernel selected at boot.
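The two changes amount to editing single lines in /etc/selinux/config and /etc/vz/vz.conf, for example with sed:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# sed -i 's/^NEIGHBOUR_DEVS=.*/NEIGHBOUR_DEVS="all"/' /etc/vz/vz.conf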

Download an Ubuntu template to /vz/template/cache:

# cd /vz/template/cache
# wget http://download.openvz.org/template/precreated/ubuntu-10.04-x86_64.tar.gz

Install the Ubuntu container

# vzctl create 100 --ostemplate ubuntu-10.04-x86_64 --config basic
# vzctl set 100 --hostname testsite.example.com --save
# vzctl set 100 --ipadd 192.168.0.100 --save
# vzctl set 100 --nameserver 8.8.8.8 --nameserver 8.8.4.4 --save

Note: the libvirt tools were tested at this point, but their OpenVZ support was found to be inadequate. OpenVZ's own tools were used instead.

Start and enter the container:

# vzctl start 100
# vzctl enter 100

Configure the Ubuntu container

Install updates, then Apache and mod_php:

# apt-get update
# apt-get upgrade
# apt-get install apache2 php5

Creating the test page

Create a simple test page (e.g. /var/www/index.php):

<?php
echo "<html><head><title>Test page</title></head><body>";
echo "<p>Hello world</p>";
echo "<p>" . phpversion() . "</p>";
echo "<p>" . date('c') . "</p>";
echo "</body></html>";
?>

Tell the Fedora host how to reach the OpenVZ container network, routed via the OpenVZ host VM:

# ip route add 192.168.0.0/24 via 192.168.122.232

To make the test page accessible as testsite.example.com, add the name to /etc/hosts on the Fedora host. Finally, add the SSH public key to authorized_keys on the Ubuntu container.
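If everything is wired up correctly, the test page can now be fetched from the Fedora host, and should return the HTML with the PHP version and current date:

# curl http://testsite.example.com/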

Conclusion

The test was relatively easy, even for a first-time OpenVZ user. I've previously used Xen, KVM and Solaris Zones. The setup and networking here were simplified and would not be suitable for a real production system.

Possible improvements

  • Add a management VLAN between the Fedora host and the OpenVZ host. The management network should be exposed only to real hardware (VM hosts, switches, routers etc.) and internal VPSes, not to customer traffic.
  • Add DHCP and DNS servers, maybe even a routing protocol, for example OSPF.
  • Switch from mod_php to PHP-FPM, and switch Apache from the prefork MPM to the event MPM (see the pool sketch after this list). PHP-FPM is in production use on certain Faarao servers, and has boosted performance and reduced memory usage immensely. PHP-FPM also separates duties between the HTTP server and the backend PHP request handlers, and each Magento instance can run under a different user id.
  • Add another KVM or OpenVZ VM as a puppetmaster, and install a DHCP server and PXE boot environment there. Provision VMs with, for example, Cobbler or Foreman.
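As a rough illustration of the PHP-FPM idea above, a per-site pool could look like this (pool name, user and socket path are made up for the example):

; /etc/php5/fpm/pool.d/site1.conf (hypothetical per-site pool)
[site1]
; each Magento instance gets its own uid/gid
user = site1
group = site1
; the HTTP server proxies PHP requests to this socket
listen = /var/run/php5-fpm-site1.sock
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4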

Misc feedback

Virtualization platform choice

In general OpenVZ has slightly better performance and less overhead than KVM, but I think KVM has the better security model. In OpenVZ the kernel is always shared, whereas in KVM each VM has its own kernel, so a local root exploit inside a guest does not directly compromise the KVM host. When using a Red Hat based host OS (this might be true for others too nowadays), each qemu-kvm process is confined to its own SELinux context. In case of a bug in qemu-kvm (assuming the attacker gains the privileges of the qemu process), it cannot, for example, access another VM's image on the same host. If iSCSI or NFS is used, the problem might be even larger.

Local filesystem vs SAN

I've previously run a lot of stuff over NFSv3 and v4, but today I think the best choice is local direct-attached storage. Disk space is cheap compared to having to build storage nodes and a storage network. A 1 GbE network is not adequate for the demands of high-performance Magento hosting, and 10 GbE equipment is still relatively expensive. Networked storage will always have more latency and is a more complex setup overall. It is possible to build redundant storage systems, but that is expensive and probably still error prone (human factors).

Filesystem choice

I love ZFS. Look at my ZFS tests on commodity hardware: https://gist.github.com/paveq/3587970 ZFS is fast, guarantees integrity and doesn't necessarily need battery-backed hardware RAID controllers; in fact ZFS works best when the disks are directly exposed to it. For blazingly fast synchronous writes, an SSD can be used as a slog device. Unfortunately ZFS is not supported on Linux due to licensing issues, and the replacement (btrfs) is still not ready and does not have the same feature set. That's why I would try SmartOS, an Illumos (Solaris kernel) based virtualization platform. SmartOS supports ZFS, DTrace and KVM, and has lightweight Solaris Zones, similar to OpenVZ. SmartOS is used in production by Joyent, with hardware specs available here: https://github.com/joyent/manufacturing
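Adding an SSD slog to an existing pool is a one-liner; the pool and device names here are examples:

# zpool add tank log /dev/disk/by-id/ata-Example-SSD-part1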
