Clone the two required repos (or pull if using an older checkout):
git clone https://github.com/att-comdev/halcyon-vagrant-kubernetes
git clone https://github.com/portdirect/halcyon-kubernetes
Move into the halcyon-vagrant-kubernetes dir:
cd halcyon-vagrant-kubernetes
You will then need to replace or edit ./config.rb [1] to match the one in this gist.
Now you can run:
vagrant up
to create a kube cluster running under CentOS, with romana CNI networking, Ceph clients installed, and helm.
vagrant destroy
to make it all go away.
./get-k8s-creds.sh
to get the k8s credentials for the cluster and set up kubectl on your host to access it. If you have helm installed on your host [2], you can then run
helm init
on your local machine and should be able to work outside of the cluster if desired.
vagrant ssh kube1
to ssh into the master node.
Note that it will take a few minutes for everything to be operational; on my machine it typically takes 2-4 minutes after vagrant/ansible has finished for all services to come online. This is because it takes time for the images to be pulled and for CNI networking to come up; DNS is usually the last thing to become active.
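Since DNS is typically the last service to come up, one way to wait for the cluster to be ready is to poll for the kube-dns pod. This is just a sketch; it assumes a kube-dns pod in the kube-system namespace, as deployed by kubeadm in this setup:

```shell
# Hedged helper: poll the given command until its output shows a
# kube-dns pod in the Running state, then return.
wait_for_dns() {
  until "$@" 2>/dev/null | grep kube-dns | grep -q Running; do
    echo "waiting for kube-dns..."
    sleep 5
  done
}

# Usage against the real cluster:
# wait_for_dns kubectl get pods -n kube-system
```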
You can test that everything is working by running:
kubectl run -i -t $(uuidgen) --image=busybox --restart=Never
and then once inside the container:
nslookup kubernetes
To set the cluster up for developing kolla-k8s, you will most likely want to run the following command:
kubectl get nodes -L kubeadm.alpha.kubernetes.io/role --no-headers | awk '$NF ~ /^<none>/ { print $1}' | while read NODE ; do
kubectl label node $NODE --overwrite node-type=storage
kubectl label node $NODE --overwrite openstack-control-plane=enabled
done
This will mark all the workers as being available for both storage and API pods.
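The awk filter in the loop above keys off the role column: kubeadm labels the master with a role, while workers show `<none>`. You can check the selection logic offline against some mocked-up `kubectl get nodes` output (the node names here are just illustrative):

```shell
# Mocked `kubectl get nodes -L kubeadm.alpha.kubernetes.io/role --no-headers`
# output; only rows whose last column is <none> (the workers) are printed.
printf '%s\n' \
  'kube1   Ready   1h   master' \
  'kube2   Ready   1h   <none>' \
  'kube3   Ready   1h   <none>' |
awk '$NF ~ /^<none>/ { print $1 }'
# prints:
# kube2
# kube3
```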
To check that helm is working in the environment you just created, run the following smoke test:
helm init --client-only
helm repo update
helm install stable/mysql
helm ls
# and to check via kubectl
kubectl get all
The pods in this example will not provision and will be shown as Pending, as there is no dynamic PVC creation within the cluster yet.
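If you want the mysql pod to actually schedule, one workaround is to create a static PersistentVolume matching the chart's PVC request. This is only a sketch: the name and hostPath are hypothetical, and the 8Gi size assumes the stable/mysql chart's default request; check the actual request with kubectl get pvc and adjust to match.

```yaml
# Hypothetical static PV to satisfy the mysql chart's PVC.
# Apply with: kubectl create -f mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/mysql-data
```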
[1] Line 27 is changed from upstream to pull in the forked playbook; you can change the number of nodes in the cluster on line 6, but will then need to update line 17 accordingly.
[2] sudo sh -c "curl -L https://kubernetes-helm.storage.googleapis.com/helm-v2.0.0-linux-amd64.tar.gz | tar zxv --strip 1 -C /tmp; chmod +x /tmp/helm; mv /tmp/helm /usr/local/bin/helm"