Below is a list of results collected with netperf. The interesting value is TCP_CRR, which measures how quickly TCP can connect, send a request, receive a response, and close — in short, the transaction rate. The test is used to simulate a typical HTTP/1.0 transaction. What's worrying is that this value is very low on Xen virtualized guests: the performance difference between bare metal and virtualization has been as high as 2-3x.
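For reference, a TCP_CRR run can be reproduced with an invocation like the one below. The target address is a placeholder, not one of the hosts from the results above, and a netserver instance must already be listening on it:

```shell
# Guarded sketch: run a 30-second TCP_CRR test against a netserver
# at 192.168.1.10 (placeholder address; start `netserver` there first)
if command -v netperf >/dev/null 2>&1; then
    result=$(netperf -H 192.168.1.10 -t TCP_CRR -l 30)
else
    result="netperf is not installed"
fi
echo "$result"
```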
You can find the MAC address for LAN1/eth0 (not the BMC MAC) via the SuperMicro IPMI interface by running the following command:
$ ipmitool -U <redacted> -P <redacted> -H 10.4.0.10 raw 0x30 0x21 | tail -c 18
00 25 90 f0 be ef
failed (104: Connection reset by peer) while reading response header from upstream, client:
If you are getting the above error in the logs of an nginx instance running in front of upstream servers, you may consider doing this, as it worked for me:
Check the open-file limits on the machines and ensure they are high enough to handle the incoming load. On Linux, `ulimit -n` reports the per-process limit on open file descriptors; the system-wide ceiling on open files the kernel will allow is fs.file-max.
The way I did that?
modifying limits for open files:
--------------------------------
Add or change this line in /etc/sysctl.conf:
fs.file-max = <limit-number>
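A quick way to inspect the current values before and after the change (the reads are safe to run anywhere; only the reload at the end needs root):

```shell
# Per-process open-file limit for the current shell
ulimit -n
# System-wide kernel limit; this is the value fs.file-max controls
file_max=$(cat /proc/sys/fs/file-max)
echo "$file_max"
# After editing /etc/sysctl.conf, apply it without a reboot (needs root):
# sudo sysctl -p
```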
- At least 7 VMs: 1 corresponding to each node and 1 for the DNS server.
- Keep note of the IP of each VM and assign it to a node.
- A publicly accessible IP address for each of the above machines and a private IP address for each of them (these may be the same address depending on the machine environment). These will be referred to as <publicIP> and <privateIP> below.
- The FQDN of the machine, which resolves to the machine's public IP address (if the machine has no FQDN, use the public IP instead). Referred to as <hostname> below.
- A DNS root zone in which to install your repository, and the ability to configure records within that zone. This root zone will be referred to as <zone> below. In the DNS setup here, this is referred to as ims.hom.
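As a sketch, the records to create inside <zone> might look like the following zone-file fragment. The host names and addresses are placeholders, with ims.hom standing in for <zone> as in the DNS setup above:

```
; hypothetical entries in the ims.hom zone; substitute your own names/IPs
node1.ims.hom.    IN  A   <publicIP-of-node1>
node2.ims.hom.    IN  A   <publicIP-of-node2>
dns.ims.hom.      IN  A   <publicIP-of-dns-server>
```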
#!/bin/bash
set -e

## striping seems to break docker
#STRIPE="-i2 -I64"
#DEVS="/dev/xvdf /dev/xvdg"

DEVS="$1"
if [ -z "$DEVS" ]; then
    echo >&2 "Specify which block devices to use"
    exit 1
fi
When running virtual machines under a Linux host system for testing web apps in various browsers (e.g. Internet Explorer), I found it rather tedious to continually tweak the hosts file within each VM just to add entries pointing back to the host machine's development web server address.
Instead, the steps below set up Dnsmasq on an Ubuntu 14.04 LTS or 12.04 LTS host machine to serve both its own DNS queries and those of the virtual machine guests. Dnsmasq will parse the /etc/hosts
file on your host machine, where we will keep a single set of DNS entries for our test web application(s).
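A minimal sketch of the relevant Dnsmasq options, placed in /etc/dnsmasq.conf or a file under /etc/dnsmasq.d/. The second listen address is a placeholder for the host-side address of whatever interface your VMs use to reach the host (the VMs then get this address as their DNS server):

```
# Answer queries locally and on the VM-facing interface (placeholder IP)
listen-address=127.0.0.1,192.168.56.1
# /etc/hosts is read by default; expand-hosts appends the local domain
# to bare host names found there
expand-hosts
domain=test
```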
#!/usr/bin/env bash
# Loads and mounts an ISO over SMB via the
# SuperMicro IPMI web interface
#
# usage: supermicro-mount-iso.sh <ipmi-host> <smb-host> <path>
# e.g.:  supermicro-mount-iso.sh 10.0.0.1 10.0.0.2 '\foo\bar\windows.iso'
set -x
---
## prepare to install
# for all nodes
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
## admin node (ceph and root)
ssh-keygen
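After generating the key on the admin node, it still has to be pushed to every node so the admin node can log in as the ceph user without a password. A sketch, where node1..node3 are hypothetical host names to replace with your own:

```shell
# Print the ssh-copy-id command for each node; drop the `echo` to
# actually push the key (node1..node3 are placeholder host names)
nodes="node1 node2 node3"
cmds=""
for host in $nodes; do
    cmds="${cmds}ssh-copy-id ceph@${host} "
    echo "ssh-copy-id ceph@${host}"
done
```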
# allow nova login
usermod -s /bin/bash nova
su - nova
# ssh key
ssh-keygen
# scp key to the compute host
scp /var/lib/nova/.ssh/id_rsa.pub root@ComputeHost:/var/lib/nova/.ssh/authorized_keys
for x in $(virsh list --all | grep instance- | awk '{print $2}') ; do
    virsh destroy $x ;
    virsh undefine $x ;
done ;
yum remove -y nrpe "*nagios*" puppet "*openstack*" "*nova*" "*keystone*" "*glance*" "*cinder*" "*swift*"
mysql -u root -e "drop database nova; drop database cinder; drop database keystone; drop database glance; drop database neutron;"
# Uncomment this for Cinder volume group