@jbeda
Last active July 25, 2016
We assume we have a set of master nodes (master-{1..N}) and a set of worker nodes (worker-{1..M}). We also assume that the container runtime (Docker, rkt) is already installed. For now, also assume that networking is configured; how much of the networking should be driven by Kubernetes itself is still an open issue in my mind.

workstation$ ssh master-1

Start the kubelet. Here we just start it directly in the background for simplicity. We could also support running the kubelet itself in a container or managing it under the user's preferred init system. Note that there are no command-line flags.

root@master-1# nohup kube kubelet &
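For the "preferred init system" option mentioned above, a systemd unit is the obvious shape. A minimal sketch, assuming the binary lives at /usr/bin/kube; the unit name and paths are illustrative, not part of this proposal (a temp dir stands in for /etc/systemd/system so the sketch runs anywhere):

```shell
# Sketch: manage the flagless kubelet under systemd instead of nohup.
# Unit name and binary path are assumptions. On a real host you would
# write to /etc/systemd/system instead of a temp dir.
UNIT_DIR="$(mktemp -d)"
cat > "$UNIT_DIR/kube-kubelet.service" <<'EOF'
[Unit]
Description=Kubernetes kubelet (no command-line flags)
After=network.target

[Service]
ExecStart=/usr/bin/kube kubelet
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# Then, on a real host:
#   systemctl daemon-reload && systemctl enable --now kube-kubelet
```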

Start the core master components locally. This will also install and run a standard set of addons.

root@master-1# kube cluster master init
root@master-1# exit

Now do other masters.

workstation$ ssh master-2
root@master-2# nohup kube kubelet &
root@master-2# kube cluster master init --join master-1
root@master-2# exit
workstation$ ssh master-3
root@master-3# nohup kube kubelet &
root@master-3# kube cluster master init --join master-1
root@master-3# exit

Now do each node.

workstation$ ssh node-1
root@node-1# nohup kube kubelet &
root@node-1# kube cluster node init --join master-1
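Because the per-node steps are identical, they script trivially. A hedged sketch that just generates the command for each worker (in practice each line would be run over ssh, e.g. `ssh "node-$i" "..."`; the worker count is a placeholder):

```shell
# Sketch: generate the uniform join command for each worker node.
# M (worker count) and the node naming are placeholders for this sketch.
M=3
cmds=$(for i in $(seq 1 "$M"); do
  echo "node-$i: nohup kube kubelet & kube cluster node init --join master-1"
done)
echo "$cmds"
```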

Use the cluster

workstation$ ssh master-1
root@master-1# kube ctl ...

Create and download credentials

workstation$ ssh master-1
root@master-1# kube user create --name joe --level admin
root@master-1# cat joe.kubecred
{
  "cluster-name": "<randomly generated string>",
  "cluster-masters": ["master-1", "master-2", "master-3"],
  "username": "joe@<cluster-name>",
  "user-cert": "<...>",
  "user-key": "<...>"
}
root@master-1# exit
workstation$ scp master-1:joe.kubecred .
workstation$ kube cfg clusters import joe.kubecred
workstation$ kube cfg ...
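Since the .kubecred file is plain JSON, it is easy to inspect before importing. A sketch using python3 to pull a field out; the sample file contents below are made up for illustration, following the field names from the example above:

```shell
# Sketch: inspect a downloaded .kubecred before importing it.
# The file written here is sample data matching the format sketched above;
# "demo-cluster" is a made-up value, not a real cluster name.
cat > joe.kubecred <<'EOF'
{
  "cluster-name": "demo-cluster",
  "cluster-masters": ["master-1", "master-2", "master-3"],
  "username": "joe@demo-cluster",
  "user-cert": "<...>",
  "user-key": "<...>"
}
EOF
username=$(python3 -c 'import json; print(json.load(open("joe.kubecred"))["username"])')
echo "$username"
```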

But what about a dev cluster?!?

workstation$ kube dev &
workstation$ kube ctl --local ...

This is equivalent to

workstation$ kube kubelet &
workstation$ kube cluster master init --alsonode

Implementation notes:

  • kube is a renamed and extended hyperkube
  • Setting up certificates is automated here. By default it is assumed that the network is secure and so nodes are accepted automatically. We'd also define a flow that is more explicit, with (a) an approval queue, (b) a token system, or (c) prepopulated certs.
    • We may need a way to "finalize" the master set to avoid new masters joining after initialization without some explicit approval. This wouldn't mean that masters can't be replaced, but rather that the admin would have to take explicit action vs. "auto join".
  • Optionally, the Kubelet and the API server will start listening on a domain socket in addition to the network. Any access to that domain socket will allow full control. This is how user credentials are bootstrapped.
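The domain-socket access above could look something like the following from a tool's point of view. A sketch only: the socket path and endpoint are assumptions, not part of this proposal (the command is constructed and printed rather than run, since no such server exists here):

```shell
# Sketch: what credential bootstrap over the local domain socket could
# look like. Socket path and URL path are hypothetical placeholders.
SOCK=/var/run/kube/apiserver.sock
cmd="curl --unix-socket $SOCK http://localhost/bootstrap/credentials"
echo "$cmd"
```

Anything that can open that socket (i.e. local root, by default) gets full control, which is what makes it usable for minting the first user credential.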
  • It is assumed here that the API server (and any store) is pinned to specific nodes. This allows discovery to be bootstrapped. We can look at being more dynamic in the future using something like a gossip protocol but, for now, that isn't necessary. There are good security reasons to keep masters separate.
  • The --alsonode flag lets a master node also function as a worker. This is appropriate for dev/small clusters.
@roberthbailey
s/container engine/container runtime/ to be consistent with k8s terminology

@roberthbailey
Finalizing the master seems incorrect; how would that interact with repairing / replacing a broken master node?

@roberthbailey
Does the domain socket imply local root access (which is why you trust it for bootstrapping creds)? It would be nice for managed environments (e.g. GKE) to either disable the domain socket or to ensure that access has a full audit trail.

@jbeda
jbeda commented Jul 9, 2016

@roberthbailey -- answered this stuff.

I'm thinking the master set is more "locked down" so that it'll take explicit admin action to add/change masters vs. autojoin that is acceptable during initial bootstrap. Modifying the master set is a privileged operation.

Totally cool to have the domain socket be optional.
