A means to proceed on the proposal to converge helm repositories with docker registries (https://github.com/helm/community/pull/55)

Voting on whether to proceed:

There's a lot in the proposal. Based on the feedback and the timeline for Helm 3, I'd suggest we break it down into a list of features and vote on how to proceed.

  • Should we move forward with the idea?
  • Based on a high-level summary, when would the work need to be done:
    • In Helm 2, to provide some amount of forward compatibility?
    • In Helm 3, due to breaking changes?
    • Post Helm 3, as it can be incremental?

Each cloud/vendor that supports docker registries (distribution) has provided a cloud-specific implementation to handle auth, scale, storage and reliability, leveraging the cloud/vendor's underlying platform.

Voting yes means we move forward as a working group to provide a spec, and possibly a reference implementation, for accepting and serving (push/pull) helm charts in docker/distribution-based registries.

As Helm Charts are designed around the intention of deploying images, it's a natural evolution to store Helm Charts in registries, alongside the images they reference. By aligning charts within a registry, registry vendors can extend their auth infrastructure to provide a common means to authenticate charts and the referenced images, as well as provide a common UI for search, discovery and additional metadata.
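
A minimal sketch of how a chart archive could ride the existing distribution v2 push protocol, treating the .tgz as a blob. The media types and manifest shape would be for the working group to define; $UPLOAD_URL (the Location header returned by the first call) and $CHART_DIGEST (the sha256 of the archive) are placeholders:

# Start an upload session on the chart's repo (distribution v2 API)
curl -siX POST https://contoso.azurecr.io/v2/marketing/campaign1/chart/blobs/uploads/
# Complete the upload, pushing the chart archive as a content-addressed blob
curl -X PUT "$UPLOAD_URL?digest=sha256:$CHART_DIGEST" \
    --data-binary @marketing-campaign-1.1.tgz
# Finally, PUT a manifest tagged 1.1 that references the blob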

Comparing Helm Chart Repos and Image Registries

  • A single helm repo contains a collection of charts.
  • Each chart has a collection of versions.
  • There is no definition for a collection of charts repos as an entity.
  • A docker registry contains a collection of repos.
  • A repo has a single image with a collection of versions.

By consolidating Chart Repos and Image Registries, a single registry can store a collection of charts, with common terminology:

Tech             Registry               Repo                                   Tag/Version
Helm             Collection of charts   Single chart, collection of versions   SemVer
Image Registry   Collection of images   Single image, collection of versions   Tag (any string)
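
Reading a fully qualified chart reference with this terminology:

contoso.azurecr.io/marketing/campaign1/chart:1.1
  • Registry: contoso.azurecr.io
  • Repo: marketing/campaign1/chart
  • Tag/Version: 1.1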

Images & Charts aligned

By aligning charts and images, various collections can be stored in one registry, without having to register multiple "repos" via helm repo add ...

  • contoso.azurecr.io/inventory/cache:2
  • contoso.azurecr.io/inventory/chart:2
  • contoso.azurecr.io/marketing/campaign1/web:1.2
  • contoso.azurecr.io/marketing/campaign1/email:1.3
  • contoso.azurecr.io/marketing/campaign1/chart:1.1
  • contoso.azurecr.io/products/returns/batch-import:1
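
Each entry above is an ordinary repo in the distribution v2 API. As a sketch, listing the chart's versions, assuming a bearer token is already in $TOKEN:

curl -s https://contoso.azurecr.io/v2/marketing/campaign1/chart/tags/list \
    -H "Authorization: Bearer $TOKEN"
# {"name":"marketing/campaign1/chart","tags":["1.1"]}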

A Helm 3 client, using a helm chart enabled registry, would only need:

helm login contoso.azurecr.io -u $user -p $pwd
helm upgrade campaign1  contoso.azurecr.io/marketing/campaign1/chart:1.1

By voting yes, we agree Helm 3 charts would align with the image repo model. Alignment will require coordination with the Helm Client team.

Each cloud/vendor has implemented authentication for access to their registry, typically including:

  • Basic Auth
  • Token/Bearer Auth (with token refresh; see the flow sketched after this list)
  • Headless/Service Auth
  • Two-Factor Auth
  • Cert based Auth
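
For reference, the token/bearer flow docker/distribution uses today, which helm could adopt as-is; the realm, service, and scope values below are illustrative:

# 1. Anonymous probe; the registry challenges with a token realm
curl -i https://contoso.azurecr.io/v2/
# HTTP/1.1 401 Unauthorized
# WWW-Authenticate: Bearer realm="https://contoso.azurecr.io/oauth2/token",service="contoso.azurecr.io"

# 2. Exchange credentials at the realm for a short-lived bearer token
curl -su "$user:$pwd" \
    "https://contoso.azurecr.io/oauth2/token?service=contoso.azurecr.io&scope=repository:marketing/campaign1/chart:pull"

# 3. Present the token on subsequent registry calls
curl -s https://contoso.azurecr.io/v2/_catalog \
    -H "Authorization: Bearer $TOKEN"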

Each cloud/vendor has a means to login to their registry, integrating with the docker CLI.

To integrate with docker login, each vendor typically assigns a token to .docker/config.json.
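
For example, docker login (or az acr login) leaves an entry like the following in .docker/config.json; a trimmed sketch, where auth is base64("user:password") and identitytoken appears when the registry hands back a refresh token:

{
  "auths": {
    "contoso.azurecr.io": {
      "auth": "<base64 user:pwd>",
      "identitytoken": "<refresh token, for token-based registries>"
    }
  }
}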

This proposal enables helm operations to leverage an externally placed token, allowing a developer to log in once, then proceed with both docker and helm actions.

az acr login -n contoso
docker push contoso.azurecr.io/marketing/campaign1/web:1.2
helm push contoso.azurecr.io/marketing/campaign1/chart:1.1
helm upgrade marketing-campaign1 contoso.azurecr.io/marketing/campaign1/chart:1.1

Helm Login

For specific Helm operations, the user may wish to log in to the registry using the Helm client, avoiding additional cloud/vendor specific CLIs.

The user could log in directly using:

helm login [registry] -u $user -p $pwd

or

helm login [registry] -u [token]

To minimize the impact on Helm 2 users, a Capabilities API will be provided.
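
A minimal sketch of what a capabilities probe might look like. Both the endpoint and the response shape are purely hypothetical; defining them would be part of the working group's spec:

# Hypothetical: ask the registry whether it can serve charts
curl -s https://contoso.azurecr.io/v2/_capabilities \
    -H "Authorization: Bearer $TOKEN"
# {"capabilities":["images","charts"]}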

Voting yes provides a working group to design and recommend a login experience for the client and for cloud/vendor implementations.

To align with docker semantics, helm fetch is deprecated and replaced with helm pull.
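
Under the proposal, the pull syntax would mirror docker pull:

helm pull contoso.azurecr.io/marketing/campaign1/chart:1.1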

Voting yes becomes work on the Helm client team.

Proposal 5: helm upgrade & helm install natively support remote URLs

Deploying a helm chart from a remote URL is supported today; however, the repo must be specified as a parameter.

helm upgrade marketing-campaign1 chart \
    --repo=https://contoso.azurecr.io/marketing/campaign1

With this change, charts can be fully referenced by their registry, reducing the number of lines of code required:

Build system

az acr login -n contoso
helm package \
    ./charts/marketing-campaign \
    contoso.azurecr.io/marketing/campaign1/chart:1.0.0
helm push contoso.azurecr.io/marketing/campaign1/chart:1.0.0

Deployment

az acr login -n contoso
helm upgrade marketing-campaign1 contoso.azurecr.io/marketing/campaign1/chart:1.1

Voting yes becomes work on the Helm client team to manage the internal helm pull before applying an update.

With all the good and evil of "latest" and "stable" tags and versions, this proposal suggests the :version tag format in lieu of the -version.tgz filename format.
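
The difference, using the chart from the earlier examples:

# Today: the version is encoded in the archive filename
marketing-campaign-1.0.1.tgz
# Proposed: the version is a registry tag
contoso.azurecr.io/marketing/campaign1/chart:1.0.1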

Voting yes becomes work on the Helm client team, as well as the server team, to reconcile versions.

Adding additional channels to provide flighting of pending updates.

As customers complete v1 of their efforts, handing OS & framework patching off to operational teams while the consultants leave the building or the team moves on to another effort, companies will need a means to pre-flight patches; one hypothetical shape is sketched below.
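
A hypothetical use of channels as moving tags; the channel names and promotion flow here are illustrative, not part of the proposal text:

# Publish a pending patch to a pre-flight channel
helm push contoso.azurecr.io/marketing/campaign1/chart:canary
# After validation, promote the same chart to the stable channel
helm push contoso.azurecr.io/marketing/campaign1/chart:stable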

This effort can likely be done as incremental work on top of the Helm 3 changes; however, it is viewed as important.

Voting yes becomes a working group on how to integrate stable channels through the client and server APIs.

As Helm becomes commonly used in CI/CD systems, each step adds complexity, a point of failure, and a point of disconnect on the client.

  • A CI/CD system must perform, on every run, setup actions a developer performs only once when initializing their machine
  • The client must update the repo cache (index.yaml), which can easily fall out of sync with the server

A developer working in separate directories for chart creation and consumption, or a CI/CD solution that builds and deploys charts separately, may perform the following steps:

helm init --client-only 

az acr helm repo add -n contoso

helm package \
    ./charts/marketing-campaign \
    --version 1.0.1

az acr helm push \
    ./marketing-campaign-1.0.1.tgz \
    --force -o table

# Refresh the local cache (index.yaml)
helm repo update
# or
az acr helm repo add -n contoso # as we're seeing issues with helm repo update working consistently

helm fetch \
    contoso/marketing-campaign \
    --untar \
    --untardir charts/marketing-campaign \
    --version 1.0.1
# without executing the helm repo update, the local index.yaml may point to an outdated version. If the user doesn't specify --version 1.0.1, they could get the older 1.0.0. If they do specify 1.0.1, without helm repo update, they get an error

# not specifying --untar makes the chart unusable for deployment

helm upgrade marketing-campaign \
    ./charts/marketing-campaign \
    --reuse-values \
    --set helloworld.image=$RUN_REGISTRY/$APP_NAME:$RUN_ID

With Helm 2, the local store must be constantly updated, rather than acting as an optimized cache. As we transition to a server implementation, can the client become a passive cache?

Client store becomes a passive cache

The above CLI sequence could be reduced to one login and three Helm commands:

az acr login -n contoso
helm package \
    ./charts/marketing-campaign \
    contoso.azurecr.io/marketing/campaign1/chart:1.0.1

helm push contoso.azurecr.io/marketing/campaign1/chart:1.0.1

helm upgrade marketing-campaign1 \
  contoso.azurecr.io/marketing/campaign1/chart:1.1

With Tiller removed in Helm 3, helm upgrade must bring the chart down locally. However, updating the local cache and fetching and untarring the chart all become implementation details.

If the client already has the most recent version of the chart (chart:1.0.1), it uses the copy in the cache. This is achieved by using registry manifests to retrieve the current digest (sha256) for the given version.
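
A sketch of that check against the distribution v2 API, assuming a bearer token in $TOKEN; the Docker-Content-Digest response header carries the manifest's sha256:

curl -sI https://contoso.azurecr.io/v2/marketing/campaign1/chart/manifests/1.0.1 \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    -H "Authorization: Bearer $TOKEN"
# Docker-Content-Digest: sha256:<digest>
# If the digest matches the cached chart, skip the download entirely.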

Voting yes starts an investigation into what must change on the client, as well as the server implications. Part of that investigation is a recommendation on whether this can be added incrementally after Helm 3, or whether Helm 3 itself moves from an active to a passive cache.

Helm 2 relies on a local store, backed by index.yaml. As we propose transitioning to a passive cache, there's an opportunity to align search with server-side semantics.

Proposal:

  • Searching the local cache:

    helm search wordpress

  • Searching a specific registry:

    helm search -r contoso.azurecr.io wordpress

Securing Search Results

Each cloud/vendor has implemented authentication, and each has a search/index solution that can limit results based on a user's permission set.

The ability to list artifacts based on the user's permission set is no different for charts than for images.
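
The distribution token model already expresses per-repo permissions as scopes. A sketch of requesting a token bounded to one repo; a search backed by these scopes would return only the repos the token can read (the token endpoint path varies by vendor):

curl -su "$user:$pwd" \
    "https://contoso.azurecr.io/oauth2/token?service=contoso.azurecr.io&scope=repository:marketing/campaign1/chart:pull"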
