Multiple Tillers

Introduction

Did you know that you can have multiple helm tillers running in a single cluster?
This may actually be useful to us, both for security and for isolating what our cluster-ops team does from what our customers can do.

Background

Current Helm Access Control

Helm security amounts to "if you can talk to tiller, you can use that tiller." To be able to contact tiller, one needs to be able to create a pod in the namespace tiller resides in. Otherwise you will get an error message like:

helm list
Error: forwarding ports: error upgrading connection: unable to upgrade connection: User "tkind" cannot create pods/portforward in the namespace "kube-system".

All actions done by tiller (which does the actual work of any helm-initiated action) are performed through tiller's service account. For a single installation of tiller to be useful across a cluster, tiller is typically given cluster-wide admin privileges.
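
For reference, the permissive setup typically takes the form of a ClusterRoleBinding granting tiller's service account the built-in cluster-admin role. A minimal sketch of what that commonly looks like (the binding name is illustrative):

# Binds the tiller service account in kube-system to cluster-admin,
# giving tiller (and anyone who can reach it) full cluster access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system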

Therefore, by extension, if you can create a pod in tiller's namespace - that is, "if you can use helm" - you are in effect a cluster administrator through privilege escalation, even if you do not know it.

Future Helm Access Control

The helm team is aware of the security issues with current helm. That said, there is no great solution available yet.
Impersonating a user has its own security drawbacks, and other possible solutions are not close to being done. If you're interested, see helm issue 1918, which details the possible solutions and their caveats.

TL;DR: This isn't happening tomorrow

What can we do? Multiple Tillers!

Benefits

By having multiple tillers, an administrative team can deploy cluster-wide services through helm without sharing that same access with cluster users. We want to encourage the use of helm, but we do not want one user to accidentally delete another user's deployments.

Security delegated

We can install multiple tillers, each with dominion (enforced through RBAC) over a set of namespaces. By giving tiller access to only a subset of namespaces, access to that tiller does not mean access to the entire cluster.
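
As a sketch of how that scoping can be expressed (the namespace and binding names below are hypothetical), each namespace a given tiller should manage gets a RoleBinding granting tiller's service account the built-in, namespaced admin role:

# Grants user 1's tiller admin rights within the user-1-apps namespace;
# repeat one such RoleBinding per namespace this tiller should manage
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-user-1
  namespace: user-1-apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: user-1-tiller

Note that tiller also needs enough rights in its own namespace, since it stores release data there in ConfigMaps, so a similar binding in user-1-tiller itself is needed as well.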

An example could be:

Cluster-ops

  • tiller installed in kube-system
  • tiller given access to either all namespaces or a subset of namespaces
  • cluster-ops team members granted access to create pods in the kube-system namespace

User 1

  • tiller installed in user-1-tiller
  • tiller given admin access to a specific list of namespaces
  • user 1 (or user 1's members) granted access to create pods in the user-1-tiller namespace

User 2

  • tiller installed in user-2-tiller
  • tiller given admin access to a specific list of namespaces
  • user 2 (or user 2's members) granted access to create pods in the user-2-tiller namespace

Result

The result is a shared cluster in which users can leverage helm, but in which no user can trample over another user's creations.

How?

Per the helm documentation, a helm init call can be adjusted via the --service-account and --tiller-namespace arguments. The gist is this (a command sketch follows the list):

  • create a service account for tiller to use
    • should be limited in scope
    • let's presume the namespace is user-1-tiller and the service account in that namespace is called tiller
  • call helm init --service-account tiller --tiller-namespace user-1-tiller
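
A sketch of those steps as commands (using the illustrative names from above, and assuming RBAC like the RoleBinding sketched earlier is applied in between):

# Create the namespace and a dedicated service account for this tiller
kubectl create namespace user-1-tiller
kubectl create serviceaccount tiller --namespace user-1-tiller

# Apply the RBAC scoping this tiller to its namespaces, then install it
helm init --service-account tiller --tiller-namespace user-1-tiller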

As a user, add --tiller-namespace user-1-tiller to any helm call.
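
For example (the chart and target namespace here are placeholders; helm 2 also reads the TILLER_NAMESPACE environment variable, which saves repeating the flag):

helm --tiller-namespace user-1-tiller list
helm --tiller-namespace user-1-tiller install stable/nginx-ingress --namespace user-1-apps

# or, equivalently, for a whole shell session
export TILLER_NAMESPACE=user-1-tiller
helm list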

Security caveats

Because helm/tiller security is essentially "if you can connect to tiller, you can use helm," and connecting to tiller requires the ability to create a pod in tiller's namespace, there is nothing stopping a user from creating a dubious pod in the tiller namespace that runs with tiller's rights (which presumably are administrative over a list of namespaces).
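
To make the caveat concrete: any pod created in tiller's namespace can request tiller's service account and thereby inherit whatever RBAC grants tiller has. A minimal sketch (pod name and image are arbitrary):

# A user with pod-create rights in user-1-tiller can borrow tiller's rights
apiVersion: v1
kind: Pod
metadata:
  name: borrowed-rights
  namespace: user-1-tiller
spec:
  serviceAccountName: tiller   # runs with tiller's (namespace-admin) token
  containers:
    - name: shell
      image: alpine
      command: ["sleep", "3600"]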

But that's the compromise one must make.

While tiller itself is installed through the helm client, there is no helm chart that can deploy tiller (AFAIK).

Implications / Questions for K2/Kraken

  • How do we list the different tillers that need to be installed?
  • Is this something that K2/Kraken should even have?
    • Right now helm/tiller is installed with cluster-admin privileges. Perhaps this should change.
  • How easy is it to audit this?