We assume we have a set of master nodes (master-{1..N}) and a set of worker nodes (node-{1..M}). We also assume that the container runtime (Docker, rkt) is already installed. For now, also assume that networking is configured; in my mind it is an open issue how much networking should be driven by Kubernetes.
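The runtime assumption above could be checked on each host before starting; a minimal sketch (the helper name and the list of binaries are illustrative, not part of this proposal):

```shell
# Sketch: check that an assumed container runtime binary is on PATH.
# docker and rkt are the runtimes named above; adjust for your hosts.
check_runtimes() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: found"
    else
      echo "$bin: missing"
    fi
  done
}
check_runtimes docker rkt
```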
workstation$ ssh master-1
Start the kubelet. Here we simply start it in the background for simplicity. We could also support running the kubelet itself in a container or managed under the user's preferred init system. Notice that there are no command-line flags.
root@master-1# nohup kube kubelet &
Start the core master components locally. This will also install and run a standard set of addons.
root@master-1# kube cluster master init
root@master-1# exit
Now do the other masters.
workstation$ ssh master-2
root@master-2# nohup kube kubelet &
root@master-2# kube cluster master init --join master-1
root@master-2# exit
workstation$ ssh master-3
root@master-3# nohup kube kubelet &
root@master-3# kube cluster master init --join master-1
root@master-3# exit
Now do each worker node.
workstation$ ssh node-1
root@node-1# nohup kube kubelet &
root@node-1# kube cluster node init --join master-1
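With many workers, the per-node step above could be scripted. A dry-run sketch (hostnames and the helper name are examples; swap echo for direct ssh execution to actually run it):

```shell
# Sketch: emit the per-worker join commands for a list of nodes (dry run).
MASTER=master-1
emit_join_cmds() {
  for node in "$@"; do
    echo "ssh $node 'nohup kube kubelet & kube cluster node init --join $MASTER'"
  done
}
emit_join_cmds node-1 node-2 node-3
```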
Use the cluster
workstation$ ssh master-1
root@master-1# kube ctl ...
Create and download credentials
workstation$ ssh master-1
root@master-1# kube user create --name joe --level admin
root@master-1# cat joe.kubecred
{
  "cluster-name": "<randomly generated string>",
  "cluster-masters": ["master-1", "master-2", "master-3"],
  "username": "joe@<cluster-name>",
  "user-cert": "<...>",
  "user-key": "<...>"
}
root@master-1# exit
workstation$ scp master-1:joe.kubecred .
workstation$ kube cfg clusters import joe.kubecred
workstation$ kube cfg ...
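Before importing, the credential file could be sanity-checked for the expected fields. A sketch, with field names taken from the example above and sample values made up (the helper name is hypothetical):

```shell
# Sketch: verify a kubecred file carries the expected fields before import.
check_kubecred() {
  for field in cluster-name cluster-masters username user-cert user-key; do
    grep -q "\"$field\"" "$1" || { echo "missing $field"; return 1; }
  done
  echo "kubecred ok"
}

# Sample file with made-up values, matching the shape shown earlier.
cat > joe.kubecred <<'EOF'
{
  "cluster-name": "example-cluster",
  "cluster-masters": ["master-1", "master-2", "master-3"],
  "username": "joe@example-cluster",
  "user-cert": "...",
  "user-key": "..."
}
EOF
check_kubecred joe.kubecred
```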
But what about a dev cluster?!?
workstation$ kube dev &
workstation$ kube ctl --local ...
This is equivalent to
workstation$ kube kubelet &
workstation$ kube cluster master init --alsonode
Implementation notes:
- kube is a renamed and extended hyperkube.
- Setting up certificates is automated here. By default it is assumed that the
network is secure, so nodes are accepted automatically. We'd also define a
flow that is more explicit, with (a) an approval queue, (b) a token system, or
(c) prepopulated certs.
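The token flow in (b) might look like the following; the --token flag and the token format are hypothetical, loosely modeled on bootstrap tokens:

```shell
# Sketch: generate a short shared bootstrap token (format is made up).
gen_token() {
  id="$(LC_ALL=C tr -dc 'a-f0-9' < /dev/urandom | head -c 6)"
  secret="$(LC_ALL=C tr -dc 'a-f0-9' < /dev/urandom | head -c 16)"
  echo "$id.$secret"
}
TOKEN="$(gen_token)"
# The hypothetical flow: the master advertises the token, nodes present it.
echo "master: kube cluster master init --token $TOKEN"
echo "node:   kube cluster node init --join master-1 --token $TOKEN"
```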
- We may need a way to "finalize" the master set to avoid new masters joining after initialization without some explicit approval. This wouldn't mean that masters can't be replaced but rather the admin will have to take explicit action vs "auto join".
- Optionally, the Kubelet and the API server will start listening on a domain socket in addition to the network. Any access to that domain socket will allow full control. This is how user credentials are bootstrapped.
- It is assumed here that the API server (and any store) is pinned to specific nodes. This allows discovery to be bootstrapped. We can look at being more dynamic in the future using something like a gossip protocol but, for now, that isn't necessary. There are good security reasons to keep masters separate.
- The --alsonode flag lets a master node also function as a worker. This is appropriate for dev/small clusters.
- s/container engine/container runtime/ to be consistent with k8s terminology.