Clone the two required repos (or pull if using an older checkout):

git clone https://github.com/att-comdev/halcyon-vagrant-kubernetes
git clone https://github.com/portdirect/halcyon-kubernetes

Move into the halcyon-vagrant-kubernetes dir:

cd halcyon-vagrant-kubernetes

You will then need to replace or edit ./config.rb[1] to match the one in this gist.

Now you can run the following (a typical first session is sketched after this list):

  • vagrant up to create a kube cluster running CentOS, with Romana CNI networking, Ceph clients, and Helm installed.
  • vagrant destroy to make it all go away.
  • ./get-k8s-creds.sh to get the k8s credentials for the cluster and set up kubectl on your host to access it. If you have Helm installed on your host[2], you can then run helm init on your local machine and should be able to work with the cluster from outside it if desired.
  • vagrant ssh kube1 to ssh into the master node.
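
For example, a typical first session (assuming both repos sit side by side and you are in the halcyon-vagrant-kubernetes dir) might look like:

vagrant up
./get-k8s-creds.sh
kubectl get nodes
vagrant ssh kube1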

Note that it will take a few minutes for everything to be operational; on my machine it typically takes 2-4 mins after vagrant/ansible has finished for all services to be online. This is because it takes time for the images to be pulled and for CNI networking to come up; DNS is usually the last thing to become active.

You can test that everything is working by running:

kubectl run -i -t $(uuidgen) --image=busybox --restart=Never

and then once inside the container:

nslookup kubernetes
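
Alternatively, the whole check can be collapsed into a single command (a minimal sketch of the same test, assuming busybox's built-in nslookup output is enough to confirm DNS is resolving):

kubectl run -i -t $(uuidgen) --image=busybox --restart=Never -- nslookup kubernetes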

To set the cluster up for developing kolla-k8s, you will most likely want to run the following commands:

kubectl get nodes -L kubeadm.alpha.kubernetes.io/role --no-headers | awk '$NF ~ /^<none>/ { print $1}' | while read NODE ; do
  kubectl label node $NODE --overwrite node-type=storage
  kubectl label node $NODE --overwrite openstack-control-plane=enabled
done

This will mark all the workers as being available for both storage and API pods.
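
To confirm the labels were applied, you can run something like the following (an optional check, not part of the original steps; -L shows the labels as columns):

kubectl get nodes -L node-type,openstack-control-plane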

To test that Helm is working in the environment you just created, you can run the following smoke test:

helm init --client-only
helm repo update
helm install stable/mysql
helm ls
# and to check via kubectl
kubectl get all

The pods will not provision in this example and will be shown as pending, as there is no dynamic PVC creation within the cluster yet.
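
When you are done, the test release can be removed again; the release name below is a placeholder, take the real one from the helm ls output:

helm delete <release-name>   # add --purge to also drop the release record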

[1] Line 27 is changed from upstream to pull in the forked playbook. You can change the number of nodes in the cluster on line 6, but you will then need to update line 17 accordingly.
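
For example, to grow the cluster to five nodes (one master plus four workers), the relevant values in config.rb would change to something like:

$kube_count = 5
$kube_workers = "kube[2:5]"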

[2] sudo sh -c "curl -L https://kubernetes-helm.storage.googleapis.com/helm-v2.0.0-linux-amd64.tar.gz | tar zxv --strip 1 -C /tmp; chmod +x /tmp/helm; mv /tmp/helm /usr/local/bin/helm"

config.rb:

# Kubernetes Details: Instances
$kube_version = "centos/7"
$kube_memory = 4096
$kube_vcpus = 2
$kube_count = 4
$git_commit = "6a7308d"
$subnet = "192.168.236"
$public_iface = "eth1"
$forwarded_ports = {}
# Ansible Declarations:
#$number_etcd = "kube[1:2]"
#$number_master = "kube[1:2]"
#$number_worker = "kube[1:3]"
$kube_masters = "kube1"
$kube_workers = "kube[2:4]"
$kube_control = "kube1"
# VirtualBox: leave as-is / OpenStack: change to the OS default username:
$ssh_user = "ubuntu"
$ssh_keypath = "~/.ssh/id_rsa"
$ssh_port = 22
# Ansible Details:
$ansible_limit = "all"
$ansible_playbook = "../halcyon-kubernetes/kube-deploy/kube-deploy.yml"
$ansible_inventory = ".vagrant/provisioners/ansible/inventory_override"
# Openstack Authentication Information:
$os_auth_url = "http://your.openstack.url:5000/v2.0"
$os_username = "user"
$os_password = "password"
$os_tenant = "tenant"
# Openstack Instance Information:
$os_flavor = "m1.small"
$os_image = "ubuntu-trusty-16.04"
$os_floatnet = "public"
$os_fixednet = ['vagrant-net']
$os_keypair = "your_ssh_keypair"
$os_secgroups = ["default"]
# Proxy Configuration (only use if deploying behind a proxy):
$proxy_enable = false
$proxy_http = "http://proxy:8080"
$proxy_https = "https://proxy:8080"
$proxy_no = "localhost,127.0.0.1"
Host setup for vagrant-libvirt on CentOS:

# So this is horrible - but it's what all the cool kids do:
sudo yum install -y https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.rpm
# Now let's install the deps for vagrant-libvirt:
sudo yum install -y libvirt libxslt-devel libxml2-devel libvirt-devel libguestfs-tools-c ruby-devel gcc git git-review gcc-c++
# and the libvirt plugin itself:
vagrant plugin install vagrant-libvirt
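# optionally confirm the plugin installed cleanly (extra check, not in the original steps):
vagrant plugin list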
# set up libvirt access: sudo to root, then create a polkit rule:
cat > /etc/polkit-1/rules.d/80-libvirt-manage.rules <<EOF
polkit.addRule(function(action, subject) {
  if (action.id == "org.libvirt.unix.manage" && subject.local && subject.active && subject.isInGroup("wheel")) {
    return polkit.Result.YES;
  }
});
EOF
usermod -aG libvirt <your username>
# Install ansible
sudo yum install -y epel-release
sudo yum install -y ansible
# start and enable libvirtd
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
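# log out and back in (or start a new shell) so the libvirt group membership takes effect,
# then run the vagrant up step from above; if libvirt is not your default Vagrant provider,
# you may need to select it explicitly (an assumption, depending on your Vagrant defaults):
vagrant up --provider=libvirt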