Quick Kubernetes Helm Learnings on Chart Backward Compatibility

At Buffer we use kube2iam for AWS permissions. It's deployed, upgraded, and managed with the chart from the stable Helm chart repository.

It had been working great until last week, when an oversight on my part broke an upgrade. The situation was resolved within 30 minutes and no users were affected. Nonetheless, I'd love to share my learnings with the community.

To upgrade kube2iam I have been using this command:

helm upgrade kube2iam --install stable/kube2iam --namespace default -f ./values.yaml

Here values.yaml contains a set of overridden values. In our case, it looks like this:

extraArgs:
  base-role-arn: "arn:aws:iam::<YOUR AWS ACCOUNT ID>:role/"
host:
  interface: cni0
  iptables: true
rbac:
  create: true
resources:
  limits:
    cpu: 100m
    memory: 256Mi
  requests:
    cpu: 4m
    memory: 16Mi
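
As a side note, Helm can print back the overrides applied to a live release, which is handy for double-checking what's actually deployed:

helm get values kube2iam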

The command was supposed to upgrade kube2iam to the latest chart, which includes an upgraded kube2iam image. At least, that was what I hoped.

To my surprise, an error message was returned. It complained that some new labels weren't recognized. I then checked the Helm release status and noticed the deployment had failed.
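
For reference, these commands (Helm 2-era syntax) show a release's status and its revision history, including failed revisions:

helm status kube2iam
helm history kube2iam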

(Screenshot of the failed deployment status: http://hi.buffer.com/2e24ee0043e3)

Even though the deployment failed, the already-running pods continued to run normally, much like a failed upgrade of a desktop application shouldn't remove the existing installation. So no production service was affected. Nonetheless, it was something I needed to figure out as soon as possible; it didn't feel right leaving a failed deployment around for long, as there might be adverse effects.
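
To confirm the pods were unaffected, a label-based query works; the app=kube2iam label here is an assumption based on how the 1.x chart labels its pods:

kubectl get pods --namespace default -l app=kube2iam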

I quickly researched the issue and found the problematic labels had been added to the kube2iam Helm chart just months earlier. In the commit, we can see the chart maintainer was adopting the new recommended chart labels (which I believe originated from the Kubernetes project) and warned about this breaking change. Because a workload's label selector is immutable once created, changing the selector labels makes an in-place helm upgrade fail.
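
To illustrate, here's a sketch of the selector labels before and after the change (not the exact chart templates; the values are assumptions based on the release name):

# 1.x chart (old labels)
selector:
  matchLabels:
    app: kube2iam
    release: kube2iam

# 2.x chart (new recommended labels)
selector:
  matchLabels:
    app.kubernetes.io/name: kube2iam
    app.kubernetes.io/instance: kube2iam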

In the case of kube2iam, deleting and recreating the deployment wasn't an option, because many of our production services depend on it. I simply needed a way to upgrade the kube2iam version without moving to the new chart version.

Fortunately, I found a good way of doing this with the following command:

helm upgrade kube2iam --install stable/kube2iam --namespace default -f ./values.yaml --version 1.1.0

The --version flag tells Helm to install chart version 1.1.0 instead of the latest 2.0.1. This resolved the Helm upgrade issue.
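
If you're unsure which chart versions exist, Helm can list them all (Helm 2-era syntax; Helm 3 uses helm search repo instead):

helm search stable/kube2iam --versions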

As for the kube2iam image upgrade, I added a new image tag (0.10.7) to values.yaml as follows:

extraArgs:
  base-role-arn: "arn:aws:iam::<YOUR AWS ACCOUNT ID>:role/"
host:
  interface: cni0
  iptables: true
rbac:
  create: true
resources:
  limits:
    cpu: 100m
    memory: 256Mi
  requests:
    cpu: 4m
    memory: 16Mi
image:
  tag: 0.10.7

These two changes combined gave us a successful Helm deployment with the kube2iam image upgraded.
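
To verify the result, you can check the deployed chart version and the running image; kube2iam runs as a DaemonSet, and the resource name kube2iam here is an assumption based on the release name:

helm ls kube2iam
kubectl get daemonset kube2iam --namespace default -o jsonpath='{.spec.template.spec.containers[0].image}'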

(Screenshot of the successful deployment status: http://hi.buffer.com/b8a3646d0f81)


Key learning points

  • A Helm chart is not necessarily backward compatible
  • Because of the above, it's always good to use the --dry-run flag with the Helm command to preview what will change (see the example after this list)
  • In case of a backward compatibility issue, use --version to pin a chart version that works
  • Expect more breaking Helm charts due to the new label recommendation (app → app.kubernetes.io/name and release → app.kubernetes.io/instance)
  • A failed Helm deployment doesn't remove the existing one
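
For example, adding --dry-run (with --debug to print the rendered manifests) to the same upgrade command lets you inspect the changes without touching the live release:

helm upgrade kube2iam --install stable/kube2iam --namespace default -f ./values.yaml --dry-run --debug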

Happy Helming!
