- Each application must have a high-level definition (analogous to the current kubefile)
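As a sketch, such a high-level definition could be a small YAML file like the one below (the schema and field names are hypothetical, shown only to illustrate the level of abstraction):

```yaml
# Hypothetical high-level application definition; all fields are illustrative
name: billing-api
image: registry.example.com/billing-api
port: 8080
replicas: 3
env:
  LOG_LEVEL: info
```

Everything Kubernetes-specific (Deployment, Service, probes, labels) would be derived from this by the middleware.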
- Based on it, a custom middleware would render the Kubernetes resource configuration(s) and commit them to a Git repository
- Helm may be used for templating only, or even replaced by another solution (it would be just another template engine)
- The less intelligence those templates have, the better: ideally only value interpolation and iteration over lists, avoiding if/else conditions as much as possible (and more complex logic even more so)
- Resource definitions may be described in "layers", with a common part and a specialized part per environment and/or cluster, using Kustomize
- Kustomize allows defining overlays (patches on top of a base definition), so things like envs, etc. may be defined as separate patch files, without changing the base
- Common values (be they cluster-wide, namespace-wide, etc.) also become very easy to set
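A minimal sketch of this layering with Kustomize (directory layout and names are illustrative): the base holds the common definition, and each environment overlay patches only what differs:

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml
---
# overlays/production/replicas-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api
spec:
  replicas: 5
```

The base never changes; `kustomize build overlays/production` produces the merged manifests for that environment.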
- Each cluster will run an agent (like Flux or Argo) watching the Git repo, on its own branch or path, and applying the resources once they are committed, a pattern known as GitOps
- This delegates to each cluster the responsibility of deploying its own resources and makes it much easier to replicate environments (as simple as branching out on Git and running Flux in a new cluster)
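With Flux v2, for example, pointing a cluster at its branch/path is itself just two resources (repo URL, branch, and names are illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: deploy-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://git.example.com/platform/deploys
  ref:
    branch: production
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: deploy-repo
  path: ./clusters/production
  prune: true
```

Replicating an environment then amounts to creating a branch (or path) and installing Flux with these two resources pointing at it.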
- A deploy, then, as far as Drone is concerned, would be just a commit to a Git repository, changing the image tag and possibly other values (labels, env, etc.). The rollout process would be tracked and viewed outside of the CI solution, with a different tool.
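If Kustomize is used, the commit Drone makes can be as small as bumping `newTag` in the overlay's image transformer (paths and names are illustrative):

```yaml
# overlays/production/kustomization.yaml
resources:
  - ../../base
images:
  - name: registry.example.com/billing-api
    newTag: "1.4.2"   # the only line a deploy needs to change
```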
The role of our deploy middleware in this whole process would basically be to generate all those Kubernetes resource files from the higher-level configuration provided by each application, and commit them to Git. It must know cluster-specific attributes.
Application permissions (such as IAM and Vault roles) should have their own Custom Resources, with Operator(s) managing them, so they can also be ensured on the cluster as resources. This way, they would be applied on the fly, along with the application's other resources (deployments, services, policies, etc.).
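A sketch of what such a Custom Resource could look like (the API group, kind, and fields are hypothetical, to be defined along with the Operator):

```yaml
# Hypothetical Custom Resource: an Operator watching it would ensure
# the corresponding Vault role exists and stays in sync
apiVersion: permissions.example.com/v1alpha1
kind: VaultRole
metadata:
  name: billing-api
  namespace: billing
spec:
  serviceAccount: billing-api
  policies:
    - billing-api-read
```

Being a plain resource, it lives in the same Git repository and is applied by the same GitOps agent as everything else.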
The Vault setup, including the injection of the sidecar (kubevault) and any necessary modifications to deployments (including the application's container), should be ensured by a custom injector (made possible by Kubernetes Admission Controllers). This way, each cluster can be responsible for controlling its own specific Vault client configuration (which may vary between clusters), allowing the deployment to be agnostic about it.
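Registering such an injector is done with a standard `MutatingWebhookConfiguration`; a minimal sketch follows (the service name, namespace, and label selector are illustrative, and the required `caBundle` is omitted):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vault-injector
webhooks:
  - name: vault-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: vault-injector
        namespace: vault
        path: /mutate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    namespaceSelector:
      matchLabels:
        vault-injection: enabled
```

The API server calls the webhook for matching Deployments, and the injector mutates the pod template to add the sidecar and the Vault client settings specific to that cluster.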
Taking this logic out of Helm charts, any application could have Vault secrets injected, without the need to use a specific chart or to incorporate all the rules from the app-deploy chart.