@glevand
Forked from dm0-/README.md
Created August 14, 2017 20:21
Automated Tectonic installer for local virtual machines

Tectonic virtual install script

This script is a Tectonic installer for local development and testing. It creates a cluster of virtual machines on the same host that runs the script. All files, processes, and any other virtual resources that are created during its execution should be automatically removed on exit.

Dependencies

The first lines of the script list every command dependency, and each can be overridden with an environment variable. This allows the script to work on distros other than Fedora; for example, dnf can be run from a container:

export DNF="docker run --rm --volume=$PWD:$PWD fedora dnf"
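Each override hook is just shell parameter expansion with a default, assigned once at the top of the script and expanded unquoted so a replacement command can carry its own arguments:

```shell
#!/bin/bash
# Default-with-override pattern used for every host command dependency:
# take $DNF from the environment if set, otherwise fall back to dnf-3.
dnf=${DNF:-dnf-3}
qemu=${QEMU:-qemu-system-x86_64 -enable-kvm -cpu host}

# Expanding $dnf unquoted later (e.g. $dnf install ...) splits an
# override like DNF="docker run --rm fedora dnf" into separate words,
# so flags and arguments survive the substitution.
echo "dnf command: $dnf"
echo "qemu command: $qemu"
```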

Usage

Use the script's -h option to print full command help and usage information.

The script performs many operations requiring various super-user permissions, so it is easiest to execute the whole thing as root.

For a simple Tectonic installation, the only required options are -l for the license file and -s for the pull secret file. The -i option is also required when the host system has no eth0 network interface. After the CoreOS account files have been downloaded from https://account.coreos.com/, the following command starts a cluster of virtual machines on a workstation with a wireless network interface:

sudo -E bash install-tectonic.sh -l ~/Downloads/tectonic-license.txt -s ~/Downloads/config.json -i wlp4s0

The virtual machines are installed to allow SSH access with the RSA key generated at matchbox-root/root/.ssh/id_rsa under the current directory.

When Tectonic is fully installed on the virtual machines, the kubeconfig file is written to matchbox-root/tmp/kubeconfig. The chroot's dnsmasq server is required to resolve cluster names, so either add nameserver 10.0.0.2 to /etc/resolv.conf (assuming the default network prefix), or copy kubectl into the chroot and run it there, for example:

sudo KUBECONFIG=/tmp/kubeconfig chroot matchbox-root kubectl describe nodes

The script exits and cleans up its resources once all virtual machines have been shut down or killed. The installer's -x option shows the VMs' graphical consoles, so the cluster can be torn down simply by closing their windows.

More examples

Start a cluster with vanilla Kubernetes (not Tectonic), fetching CoreOS signing keys from a specific key server URL, using a wireless Internet connection, and displaying all cluster consoles.

sudo GPG='gpg2 --keyserver hkps://hkps.pool.sks-keyservers.net' bash install-tectonic.sh -kxi wlp4s0

Start a three-master, five-worker Tectonic cluster with experimental features enabled, tracing script commands and Terraform debug messages, using an Ethernet network interface, and displaying all cluster consoles. Node disks are reduced to 7 GiB and RAM to 1.5 GiB to fit the cluster on a typical workstation.

sudo TF_LOG=DEBUG bash -x install-tectonic.sh -i ens3 -x -g 7G -r 1536m \
    -e -l ~/Downloads/tectonic-license.txt -s ~/Downloads/config.json \
    -m node1.example.com:80:11:11:11:11:11 \
    -m node2.example.com:80:22:22:22:22:22 \
    -m node3.example.com:80:33:33:33:33:33 \
    -w node4.example.com:80:44:44:44:44:44 \
    -w node5.example.com:80:55:55:55:55:55 \
    -w node6.example.com:80:66:66:66:66:66 \
    -w node7.example.com:80:77:77:77:77:77 \
    -w node8.example.com:80:88:88:88:88:88
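Each -m, -w, and -o argument has the form name:mac, with the hostname before the first colon and the MAC address after it. The script splits the pair with prefix/suffix parameter expansion into associative arrays keyed by node name; a sketch of the parsing:

```shell
#!/bin/bash
# Split a name:mac node specification the way the getopts loop does:
# ${spec%%:*} drops everything from the first ':' onward (the MAC),
# ${spec#*:} drops everything up to the first ':' (the name).
declare -A MASTER_NODES=()

spec='node1.example.com:80:11:11:11:11:11'
name=${spec%%:*}
mac=${spec#*:}
MASTER_NODES[$name]=$mac

echo "$name -> ${MASTER_NODES[$name]}"
# -> node1.example.com -> 80:11:11:11:11:11
```

Because the MAC address itself contains colons, only the first colon separates the fields, which is why the longest-match `%%:*` and shortest-match `#*:` operators are paired this way.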
#!/bin/bash
set -eo pipefail
# Support overriding all required host commands.
cat=${CAT:-cat}
chroot=${CHROOT:-chroot}
dnf=${DNF:-dnf-3}
gpg=${GPG:-gpg2}
ip=${IP:-ip}
iptables=${IPTABLES:-iptables}
mkdir=${MKDIR:-mkdir}
mknod=${MKNOD:-mknod}
mount=${MOUNT:-mount}
openssl=${OPENSSL:-openssl}
qemu=${QEMU:-qemu-system-x86_64 -enable-kvm -cpu host}
rm=${RM:-rm -f}
sed=${SED:-sed}
shred=${SHRED:-shred}
tar=${TAR:-tar}
truncate=${TRUNCATE:-truncate}
umount=${UMOUNT:-umount}
wget=${WGET:-wget}
# Some paths depend on these versions, so they are not options.
matchbox_url="https://github.com/coreos/matchbox/releases/download/v0.6.1/matchbox-v0.6.1-linux-amd64.tar.gz"
tectonic_url="https://releases.tectonic.com/tectonic-1.7.1-tectonic.1.tar.gz"
# Describe available command-line options.
function usage() {
echo "Usage: $0 [-l license_file] [-s secret_file] [-e] [-k] \
[-a admin_email] [-p admin_password] \
[-c cl_channel] [-v cl_version] [-i interface] [-d domain] [-n network] \
[-f releasever] [-g node_disk_size] [-r node_memory] [-x] \
[-m master_node]... [-w worker_node]... [-o deferred_node]... [-h]
Tectonic options:
-l license_file Path to a CoreOS license file downloaded from \
https://account.coreos.com${license_path:+ (using: $license_path)}
-s secret_file Path to a pull secret file downloaded from \
https://account.coreos.com${secret_path:+ (using: $secret_path)}
-e Install experimental features
-k Install vanilla Kubernetes, does not require the \
license or pull secret files
-a admin_email E-mail address of the administrator\
${email:+ (using: $email)}
-p admin_password Password to use for the administrator account\
${password:+ (using: $password)}
-f releasever Fedora release version for the matchbox chroot\
${releasever:+ (using: $releasever)}
Node VM options:
-c cl_channel Container Linux update channel for the nodes\
${channel:+ (using: $channel)}
-v cl_version Container Linux release version for the nodes\
${version:+ (using: $version)}
-i interface Name of a host network interface on the Internet for \
masquerade rules${interface:+ (using: $interface)}
-d domain Domain name for the cluster${domain:+ (using: $domain)}
-n network IPv4 network prefix for the cluster's /24 network\
${network24:+ (using: $network24)}
-g node_disk_size Size of the virtual disk to create for each node\
${node_disk:+ (using: $node_disk)}
-r node_memory Amount of RAM to allocate for each node\
${node_ram:+ (using: $node_ram)}
-x Display the nodes' graphical consoles
-m master_node Create a master node and add it to the cluster, \
given by name:mac (can be given multiple times)
-w worker_node Create a worker node and add it to the cluster, \
given by name:mac (can be given multiple times)
-o deferred_node Reserve DNS/DHCP for adding a node later, \
given by name:mac (can be given multiple times)
-h Print this help text and exit
If no masters or workers are given, one master and three workers will be used."
}
# Take all settings from the command-line options.
declare -A MASTER_NODES=() OFF_NODES=() WORKER_NODES=()
channel=stable
domain=example.com
email=admin@example.com
experimental=
interface=eth0
license_path=
network24=10.0.0
node_disk=10G
node_display=
node_ram=2G
password=password
releasever=26
secret_path=
vanilla=
version=current
while getopts :a:c:d:ef:g:i:kl:m:n:o:p:r:s:v:w:xh opt
do
case "$opt" in
a) email=$OPTARG ;;
c) channel=$OPTARG ;;
d) domain=$OPTARG ;;
e) experimental=true ;;
f) releasever=$OPTARG ;;
g) node_disk=$OPTARG ;;
i) interface=$OPTARG ;;
k) vanilla=true ;;
l) license_path=$OPTARG ;;
m) MASTER_NODES[${OPTARG%%:*}]=${OPTARG#*:} ;;
n) network24=$OPTARG ;;
o) OFF_NODES[${OPTARG%%:*}]=${OPTARG#*:} ;;
p) password=$OPTARG ;;
r) node_ram=$OPTARG ;;
s) secret_path=$OPTARG ;;
v) version=$OPTARG ;;
w) WORKER_NODES[${OPTARG%%:*}]=${OPTARG#*:} ;;
x) node_display='-vga std' ;;
h) usage ; exit 0 ;;
*) usage 1>&2 ; exit 1 ;;
esac
done
[ -z "$vanilla" -a ! -r "$license_path" ] &&
echo "Can't read a CoreOS license file" 1>&2 && exit 1
[ -z "$vanilla" -a ! -r "$secret_path" ] &&
echo "Can't read a pull secret file" 1>&2 && exit 1
[ ! -e "/sys/class/net/$interface/device" ] &&
echo "Can't find an Internet-connected interface \"$interface\"" 1>&2 && exit 1
(( ${#MASTER_NODES[@]} == 0 ^ ${#WORKER_NODES[@]} == 0 )) &&
echo 'Both master and worker nodes must be given' 1>&2 && exit 1
if [ ${#MASTER_NODES[@]}${#WORKER_NODES[@]} -lt 1 ]
then
MASTER_NODES[node1.$domain]=52:54:00:a1:9c:ae
WORKER_NODES[node2.$domain]=52:54:00:b2:2f:86
WORKER_NODES[node3.$domain]=52:54:00:c3:61:77
WORKER_NODES[node4.$domain]=52:54:00:d7:99:c7
fi
# Define a convenience function to stack cleanup commands to run on exit.
function defer() {
local -r cmd="$(trap -p EXIT)"
eval "trap -- '$*;'${cmd:8:-5} EXIT"
}
# Define a convenience function to print the arguments as a Python string list.
function args2pylist() {
echo -n [
[ $# -ge 1 ] && echo -n "\"$1\"" && shift
for item ; do echo -n ", \"$item\"" ; done
echo ]
}
# Install and configure a Fedora root containing matchbox and dnsmasq.
function install_root() {
local -x GNUPGHOME=matchbox-root/tmp/gnupg
local os_url="https://$channel.release.core-os.net/amd64-usr/$version"
local assets suffix
# Provide random numbers for dnsmasq and /dev/null for redirects.
$mkdir -p matchbox-root/dev
defer $rm -r matchbox-root
$mknod matchbox-root/dev/null c 1 3
$mknod matchbox-root/dev/urandom c 1 9
# Install dnsmasq and OpenSSH from Fedora.
$dnf --installroot="$PWD/matchbox-root" --releasever="$releasever" -y \
install dnsmasq glibc-langpack-en openssh-clients python3-py-bcrypt
# Use a separate GPG keyring for importing signing keys.
$mkdir --mode=0700 -p "$GNUPGHOME"
$gpg --recv-keys \
04127D0BFABEC8871FFB2CCE50E0885593D2DCB4 \
18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E
# Determine the exact OS version (in case "current" was given).
$wget -P matchbox-root/tmp "$os_url"/version.txt{.sig,}
$gpg --verify matchbox-root/tmp/version.txt{.sig,}
version=$($sed -n s/^COREOS_VERSION=//p matchbox-root/tmp/version.txt)
assets="matchbox-root/var/lib/matchbox/assets/coreos/$version"
os_url="https://$channel.release.core-os.net/amd64-usr/$version"
# Extract the matchbox binary into the new root.
$wget -P matchbox-root/tmp "$matchbox_url"{.asc,}
$gpg --verify "matchbox-root/tmp/${matchbox_url##*/}"{.asc,}
$tar --transform='s,.*/,,' -C matchbox-root/tmp \
-xf "matchbox-root/tmp/${matchbox_url##*/}" '*/matchbox'
# Extract the Tectonic installer into the new root.
$wget -P matchbox-root/tmp "$tectonic_url"{.asc,}
$gpg --verify "matchbox-root/tmp/${tectonic_url##*/}"{.asc,}
$tar -C matchbox-root/tmp -xf "matchbox-root/tmp/${tectonic_url##*/}"
# Fetch OS images for matchbox to serve.
$mkdir -p "$assets"
for suffix in pxe.vmlinuz pxe_image.cpio.gz image.bin.bz2
do
$wget -P "$assets" "$os_url/coreos_production_$suffix"{.sig,}
$gpg --verify "$assets/coreos_production_$suffix"{.sig,}
done
}
# Write the configuration for dnsmasq, and generate matchbox certificates.
function configure_services() {
local -i address=128 # First IP address in the DHCP pool
local cert cluster_node ingress_node mac node
# Generate matchbox certificates.
$mkdir -p matchbox-root/etc/matchbox
$openssl req -x509 -sha512 -days 365 \
-newkey rsa:4096 -nodes \
-subj '/CN=matchbox CA' \
-keyout matchbox-root/etc/matchbox/ca.key \
-out matchbox-root/etc/matchbox/ca.crt \
-extensions v3_ca -config /dev/fd/3 3<< EOF
distinguished_name=v3_ca
[v3_ca]
basicConstraints=CA:TRUE
keyUsage=cRLSign,keyCertSign
subjectKeyIdentifier=hash
EOF
for cert in client server
do
$openssl req -new -sha512 -days 365 \
-newkey rsa:4096 -nodes \
-subj "/CN=matchbox.$domain" \
-keyout matchbox-root/etc/matchbox/$cert.key \
-out matchbox-root/etc/matchbox/$cert.csr \
-extensions $cert -config /dev/fd/3 3<< EOF
distinguished_name=$cert
[$cert]
basicConstraints=CA:FALSE
extendedKeyUsage=${cert}Auth
keyUsage=digitalSignature,keyEncipherment
nsCertType=$cert
EOF
$openssl x509 -req -CAcreateserial -sha512 -days 365 \
-CA matchbox-root/etc/matchbox/ca.crt \
-CAkey matchbox-root/etc/matchbox/ca.key \
-in matchbox-root/etc/matchbox/$cert.csr \
-out matchbox-root/etc/matchbox/$cert.crt
done
# Pick a master and worker for entry point DNS names.
cluster_node=${!MASTER_NODES[*]} cluster_node=${cluster_node%% *}
ingress_node=${!WORKER_NODES[*]} ingress_node=${ingress_node%% *}
# Configure dnsmasq to manage the defined nodes.
$mkdir -p matchbox-root/var/lib/tftpboot
$cat << EOF > matchbox-root/etc/dnsmasq.d/matchbox.conf
enable-tftp
listen-address=$network24.2
no-daemon
tftp-root=/var/lib/tftpboot
dhcp-boot=tag:#ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://matchbox.$domain:8080/boot.ipxe
dhcp-option=3,$network24.1
dhcp-range=$network24.$address,$network24.$((address + 63))
dhcp-userclass=set:ipxe,iPXE
address=/matchbox.$domain/$network24.2
$(for node in "${!MASTER_NODES[@]}" "${!WORKER_NODES[@]}" "${!OFF_NODES[@]}"
do
mac=${WORKER_NODES[$node]:-${MASTER_NODES[$node]:-${OFF_NODES[$node]}}}
[ "$node" = "$cluster_node" ] &&
echo "address=/cluster.$domain/$network24.$address"
[ "$node" = "$ingress_node" ] &&
echo "address=/tectonic.$domain/$network24.$address"
echo "address=/$node/$network24.$address"
echo "dhcp-host=$mac,$network24.$((address++)),1h"
done)
log-dhcp
log-queries
EOF
# Check dnsmasq first, but share the host's DNS servers.
echo "nameserver $network24.2" |
$cat - /etc/resolv.conf > matchbox-root/etc/resolv.conf
}
# Set up a network namespace to isolate matchbox and dnsmasq.
function configure_netns() {
# Create a network namespace.
$ip netns add matchbox
defer $ip netns del matchbox
$ip -netns matchbox link set lo up
# Create a bridge and veth pair.
$ip link add matchbox-br type bridge
defer $ip link delete matchbox-br
$ip link add matchbox-v1 type veth peer name matchbox-v2
defer $ip link del matchbox-v1
# Add the host-side veth to the bridge.
$ip link set matchbox-v1 master matchbox-br
$ip link set matchbox-v1 up
# Bring up the bridge with an IP address.
$ip address add "$network24.1/24" dev matchbox-br
$ip link set matchbox-br up
# Put the other veth in the network namespace, and configure it.
$ip link set matchbox-v2 netns matchbox
$ip -netns matchbox address add "$network24.2/24" dev matchbox-v2
$ip -netns matchbox link set matchbox-v2 up
$ip -netns matchbox route add default via "$network24.1"
# Allow forwarding and masquerading so dnsmasq can reach DNS servers.
defer echo '>' /proc/sys/net/ipv4/ip_forward \
$(</proc/sys/net/ipv4/ip_forward)
echo 1 > /proc/sys/net/ipv4/ip_forward
$iptables -I FORWARD -i matchbox-br -j ACCEPT
defer $iptables -D FORWARD -i matchbox-br -j ACCEPT
$iptables -I FORWARD -o matchbox-br -j ACCEPT
defer $iptables -D FORWARD -o matchbox-br -j ACCEPT
$iptables -t nat -I POSTROUTING -o matchbox-br -j MASQUERADE
defer $iptables -t nat -D POSTROUTING -o matchbox-br -j MASQUERADE
$iptables -t nat -I POSTROUTING -o "$interface" -j MASQUERADE
defer $iptables -t nat -D POSTROUTING -o "$interface" -j MASQUERADE
}
# Run matchbox and dnsmasq in the network namespace.
function run_services() {
# Block teardown until spawned services have all stopped completely.
defer wait
defer echo Waiting for services to stop cleanly...
# Spawn matchbox in the network namespace.
$ip netns exec matchbox $chroot matchbox-root \
/tmp/matchbox \
--address="$network24.2:8080" \
--rpc-address="$network24.2:8081" &
defer kill -TERM $!
# Spawn dnsmasq in the network namespace.
$ip netns exec matchbox $chroot matchbox-root \
/usr/sbin/dnsmasq &
defer kill -TERM $!
}
# Write the Tectonic configuration to be deployed.
function configure_cluster() {
local -x TERRAFORM_CONFIG=/tmp/terraformrc
local installer=/tectonic/tectonic-installer/linux/installer
# Terraform requires SSH keys in the SSH agent.
$chroot matchbox-root \
/usr/bin/ssh-keygen -f /root/.ssh/id_rsa -N '' -b 4096 -t rsa
$cat << 'EOF' > matchbox-root/root/.ssh/config
AddKeysToAgent yes
ConnectTimeout 5
StrictHostKeyChecking no
EOF
# Write the extracted installer path in the Terraform configuration.
$sed "s,<PATH_TO_INSTALLER>,/tmp$installer,g" \
matchbox-root/tmp/tectonic/terraformrc.example \
> "matchbox-root$TERRAFORM_CONFIG"
# Terraform tries to read /proc.
$mount -t proc proc matchbox-root/proc
defer $umount matchbox-root/proc
# Fetch required modules for Terraform.
$chroot matchbox-root \
/tmp/tectonic/tectonic-installer/linux/terraform get \
/tmp/tectonic/platforms/metal
# Define variables to configure Tectonic.
$cat "${license_path:-/dev/null}" > matchbox-root/tmp/license.txt
defer $shred -u matchbox-root/tmp/license.txt
$cat "${secret_path:-/dev/null}" > matchbox-root/tmp/secret.json
defer $shred -u matchbox-root/tmp/secret.json
$cat << EOF > matchbox-root/tmp/terraform.tfvars
tectonic_cluster_name = "virtual"
tectonic_base_domain = "$domain"
tectonic_admin_email = "$email"
tectonic_admin_password_hash = "$($chroot matchbox-root /usr/bin/python3 -c \
"import bcrypt;print(bcrypt.hashpw('$password',bcrypt.gensalt()))")"
tectonic_license_path = "/tmp/license.txt"
tectonic_pull_secret_path = "/tmp/secret.json"
tectonic_stats_url = "https://stats-collector.tectonic.com"
tectonic_calico_network_policy = ${experimental:-false}
tectonic_experimental = ${experimental:-false}
tectonic_vanilla_k8s = ${vanilla:-false}
tectonic_cl_channel = "$channel"
tectonic_metal_cl_version = "$version"
tectonic_cluster_cidr = "10.2.0.0/16"
tectonic_service_cidr = "10.3.0.0/16"
tectonic_etcd_count = 0
tectonic_master_count = ${#MASTER_NODES[@]}
tectonic_metal_controller_domain = "cluster.$domain"
tectonic_metal_controller_domains = $(args2pylist "${!MASTER_NODES[@]}")
tectonic_metal_controller_macs = $(args2pylist "${MASTER_NODES[@]}")
tectonic_metal_controller_names = $(args2pylist "${!MASTER_NODES[@]}")
tectonic_worker_count = ${#WORKER_NODES[@]}
tectonic_metal_ingress_domain = "tectonic.$domain"
tectonic_metal_worker_domains = $(args2pylist "${!WORKER_NODES[@]}")
tectonic_metal_worker_macs = $(args2pylist "${WORKER_NODES[@]}")
tectonic_metal_worker_names = $(args2pylist "${!WORKER_NODES[@]}")
tectonic_metal_matchbox_http_url = "http://matchbox.$domain:8080"
tectonic_metal_matchbox_rpc_endpoint = "matchbox.$domain:8081"
tectonic_metal_matchbox_ca = <<EOV
$(<matchbox-root/etc/matchbox/ca.crt)
EOV
tectonic_metal_matchbox_client_cert = <<EOV
$(<matchbox-root/etc/matchbox/client.crt)
EOV
tectonic_metal_matchbox_client_key = <<EOV
$($sed 's/-\(BEGIN\|END\) P/-\1 RSA P/' matchbox-root/etc/matchbox/client.key)
EOV
tectonic_ssh_authorized_key = "$(<matchbox-root/root/.ssh/id_rsa.pub)"
EOF
# Test the plan.
$ip netns exec matchbox $chroot matchbox-root \
/tmp/tectonic/tectonic-installer/linux/terraform plan \
--var-file=/tmp/terraform.tfvars \
/tmp/tectonic/platforms/metal
}
# Launch the cluster nodes in virtual machines, and install Tectonic on them.
function run_cluster() {
local -A node_pids=()
local disk mac node
# Whitelist the bridge for the QEMU helper.
echo 'allow matchbox-br' >> /etc/qemu/bridge.conf
defer $sed -i -e /^allow.matchbox-br$/d /etc/qemu/bridge.conf
# Create raw disk images for each node, and run them.
for node in "${!MASTER_NODES[@]}" "${!WORKER_NODES[@]}"
do
disk="matchbox-root/tmp/$node-hda.img"
mac=${WORKER_NODES[$node]:-${MASTER_NODES[$node]}}
$truncate --size="$node_disk" "$disk"
$qemu -nodefaults -name "$node" \
-boot once=n -m "$node_ram" ${node_display:--nographic} \
-net nic,macaddr="$mac" -net bridge,br=matchbox-br \
-drive media=disk,if=ide,format=raw,file="$disk" &
node_pids[$node]=$!
done
# Apply the Tectonic configuration to matchbox and configure the nodes.
$ip netns exec matchbox $chroot matchbox-root \
/usr/bin/ssh-agent /bin/bash -ex << EOF
/usr/bin/ssh-add /root/.ssh/id_rsa
TERRAFORM_CONFIG=/tmp/terraformrc \
/tmp/tectonic/tectonic-installer/linux/terraform apply \
--var-file=/tmp/terraform.tfvars \
/tmp/tectonic/platforms/metal
/usr/bin/scp -p core@cluster.$domain:/etc/kubernetes/kubeconfig /tmp/
EOF
# Pause the script while the cluster runs. Shut down all VMs to quit.
wait "${node_pids[@]}"
}
install_root
configure_services
configure_netns
run_services
configure_cluster
run_cluster