export PLATFORM="kind-platform"
export WORKER="kind-worker"
export MINIKUBE_HOME=/app/minikube_volume
Check for existing clusters:
minikube profile list
The above command will give an output similar to:
|---------------|-----------|---------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes |
|---------------|-----------|---------|--------------|------|---------|---------|-------|
| kind-platform | docker | docker | 192.168.49.2 | 8443 | v1.23.3 | Running | 1 |
| kind-worker | docker | docker | 192.168.58.2 | 8443 | v1.23.3 | Running | 1 |
|---------------|-----------|---------|--------------|------|---------|---------|-------|
Check that Kratix is installed on the platform cluster:
kubectl --context $PLATFORM get deployments --namespace kratix-platform-system
The above command will give an output similar to:
NAME READY UP-TO-DATE AVAILABLE AGE
kratix-platform-controller-manager 1/1 1 1 1h
minio 1/1 1 1 1h
Check that the worker cluster has Flux configured:
kubectl get kustomizations.kustomize.toolkit.fluxcd.io --context $WORKER --all-namespaces
The above command will give an output similar to:
NAMESPACE NAME AGE READY STATUS
flux-system kratix-workload-crds 4d2h True Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5
flux-system kratix-workload-resources 4d2h True Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5
Check that the worker cluster is registered with Kratix:
kubectl get clusters.platform.kratix.io --context $PLATFORM --all-namespaces
The above command will give an output similar to:
NAMESPACE NAME AGE
default worker-cluster 1h
Check that the Minio state store is registered:
kubectl --context $PLATFORM get bucketstatestores.platform.kratix.io
The above command will give an output similar to:
NAME AGE
minio-store 1h
To discover more about the state store:
kubectl --context $PLATFORM describe bucketstatestore minio-store
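The describe output includes the state store's spec. As a rough sketch of what to expect (field names here are assumptions based on Kratix's v1alpha1 BucketStateStore API; your values will differ):

```yaml
# Hypothetical BucketStateStore — field names assumed, values illustrative
apiVersion: platform.kratix.io/v1alpha1
kind: BucketStateStore
metadata:
  name: minio-store
spec:
  endpoint: minio.kratix-platform-system.svc.cluster.local
  bucketName: kratix
  insecure: true
  secretRef:
    name: minio-credentials
    namespace: default
```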
You will need:
- The redhat/ubi8 base image
- A lab environment without a conflicting version of the software already deployed (e.g., if you're working with CNPG and already have a controller deployed, be sure to delete it first)
- yq version 4: a YAML manipulation and inspection tool. Choose the right architecture from the releases page.
- worker-resource-builder: the Kratix binary for injecting the CNPG dependency definitions into your final promise.yaml file
- Create the folder structure and cd into cnpg-promise:

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
```
- Add base properties to the Promise definition in promise-template.yaml
- Name the Promise and add the spec keys:

```yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: <REPLACE>
spec:
  api: {}
  dependencies: []
  scheduling: {}
  workflows: {}
```
- Define the Promise API:

```yaml
api:
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: <REPLACE>
  spec:
    group: <REPLACE>
    names:
      kind: <REPLACE>
      plural: <REPLACE>
      singular: <REPLACE>
    scope: Namespaced
    versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                # instances:
                #   description: The number of instances in cluster
                #   type: integer
              # required:
              # - instances
              type: object
          type: object
      served: true
      storage: true
```
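As a concrete sketch, a filled-in api section for a Postgres Promise might look like the following. The group example.promise.io and the postgresql names are placeholder choices, not prescribed by Kratix; note that a CRD's metadata.name must be `<plural>.<group>`:

```yaml
api:
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: postgresqls.example.promise.io   # must be <plural>.<group>
  spec:
    group: example.promise.io              # placeholder group
    names:
      kind: postgresql
      plural: postgresqls
      singular: postgresql
    scope: Namespaced
    versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                instances:
                  description: The number of instances in the cluster
                  type: integer
              required:
              - instances
```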
- Define the Promise configure workflow (pipeline)
  - Decide on a name for your pipeline image and set an environment variable to make things easier:

export PIPELINE_NAME="<REPLACE/REPLACE:v0.0.0>"
- Copy the yq binary into pipeline/ (ensure you have the correct version!):

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
    └── yq
```
- Copy the CNPG request you'd like to use (from their docs) to provision PG instances:

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
    ├── yq
    └── cnpg-request-template.yaml
```
- Define what happens in the pipeline
  - Create a script that you will use in the Dockerfile (it will execute whenever there is a new request, to generate the workload):

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
    ├── execute-pipeline.bash
    ├── yq
    └── cnpg-request-template.yaml
```
- Write the script. Here's an example:

```sh
#!/usr/bin/env sh
set -xe

# Read current values from the provided resource request
export name="$(./yq eval '.metadata.name' /input/object.yaml)"
export instances="$(./yq eval '.spec.instances' /input/object.yaml)"

# Replace defaults with user-provided values
sed "s/TBDNAME/${name}/g" /tmp/transfer/cnpg-request-template.yaml > cnpg-request.yaml
sed "s/TBDINSTANCECOUNT/${instances}/g" cnpg-request.yaml > /output/cnpg-request.yaml
```
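To see what the two sed substitutions do, here is a self-contained sketch. The template body below is illustrative only (the real cnpg-request-template.yaml comes from the CNPG docs); the TBDNAME/TBDINSTANCECOUNT placeholders are the ones the pipeline script expects:

```shell
# Miniature stand-in for cnpg-request-template.yaml (illustrative content)
cat > /tmp/cnpg-request-template.yaml <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: TBDNAME
spec:
  instances: TBDINSTANCECOUNT
EOF

# Values the pipeline would normally read from /input/object.yaml with yq
name="example"
instances="3"

# The same substitutions execute-pipeline.bash performs
sed "s/TBDNAME/${name}/g" /tmp/cnpg-request-template.yaml \
  | sed "s/TBDINSTANCECOUNT/${instances}/g"
# prints the manifest with "name: example" and "instances: 3"
```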
- Set permissions:

chmod +x pipeline/execute-pipeline.bash
- Define the Dockerfile
  - Create the Dockerfile that will execute on the Pod for the workflow:

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
    ├── Dockerfile
    ├── execute-pipeline.bash
    ├── yq
    └── cnpg-request-template.yaml
```
- Write the Dockerfile. Here's an example:

```dockerfile
FROM redhat/ubi8
RUN [ "mkdir", "/tmp/transfer" ]
ADD cnpg-request-template.yaml /tmp/transfer/
ADD execute-pipeline.bash execute-pipeline.bash
ADD yq yq
CMD [ "sh", "-c", "./execute-pipeline.bash" ]
ENTRYPOINT []
```
- Test the pipeline locally (this wasn't covered in our session on Tuesday)
  - Within the pipeline/ directory, create new folders and files as follows:

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
    ├── Dockerfile
    ├── execute-pipeline.bash
    ├── yq
    ├── cnpg-request-template.yaml
    └── test/
        ├── build-and-test-pipeline.bash
        ├── output/
        └── input/
            └── object.yaml
```
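For test/input/object.yaml, use a document shaped like a request against your Promise API; it mimics what Kratix hands the pipeline at /input/object.yaml. A sketch (the group/version/kind are placeholders for whatever you chose in spec.api; the pipeline script only reads metadata.name and spec.instances):

```yaml
apiVersion: example.promise.io/v1alpha1   # placeholder group/version
kind: postgresql                          # placeholder kind
metadata:
  name: example
  namespace: default
spec:
  instances: 3
```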
- In build-and-test-pipeline.bash, define a test script that builds and runs the pipeline locally to validate that, given a specific input, you get the expected output:

```bash
#!/usr/bin/env bash
set -eu -o pipefail

pipelinename="<UPDATE TO MATCH $PIPELINE_NAME>"

# The script lives in pipeline/test/, so the build context is one level up
testdir=$(cd "$(dirname "$0")"; pwd)
inputDir="$testdir/input"
outputDir="$testdir/output"

docker build --tag "$pipelinename" "$testdir/.."

rm -f "$outputDir"/*

# Mount the test input/output dirs where execute-pipeline.bash expects them
docker run --rm \
  --volume "${outputDir}":/output \
  --volume "${inputDir}":/input \
  "$pipelinename"
```
- Make the script executable:

chmod +x build-and-test-pipeline.bash
- Run the script:

./build-and-test-pipeline.bash

You should now have a file in output/ which is a definition of the workload for Kubernetes. Verify that the file contains the workload definition you expected. If it doesn't, adjust your Dockerfile and pipeline script and re-run the test script until you get the expected output.

```
cnpg-promise
├── promise-template.yaml
├── resources/
└── pipeline/
    ├── Dockerfile
    ├── execute-pipeline.bash
    ├── yq
    ├── cnpg-request-template.yaml
    └── test/
        ├── build-and-test-pipeline.bash
        ├── output/
        │   └── cnpg-request.yaml
        └── input/
            └── object.yaml
```
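As a sketch of what to verify: if your template is a CNPG Cluster manifest and your test input requested the name example with 3 instances, test/output/cnpg-request.yaml would look something like this (illustrative, not the exact CNPG manifest from their docs):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example
spec:
  instances: 3
  storage:
    size: 1Gi   # illustrative — match whatever your template defines
```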
A note on testing pipeline images: you can quickly validate that a stage outputs exactly what you want without even touching Kubernetes. Treating images as independent pieces of software with their own development lifecycle (fully testable, easy to execute locally, released independently) lets platform teams move faster, sharing and reusing images across their Promises.
- Once you're happy that your pipeline is ready, make sure it is available:

docker build -f pipeline/Dockerfile -t $PIPELINE_NAME pipeline/
minikube image load $PIPELINE_NAME -p $PLATFORM
- Finally, be sure to update the promise-template.yaml Promise definition with a reference to the workflow:

```yaml
workflows:
  resource:
    configure:
    - apiVersion: platform.kratix.io/v1alpha1
      kind: Pipeline
      metadata:
        name: configure-resource
        namespace: default
      spec:
        containers:
        - image: <UPDATE TO MATCH $PIPELINE_NAME>
          name: REPLACE
```
- Define scheduling to create Resources in the correct destinations
  - Confirm the existing labels:

kubectl --context $PLATFORM get clusters --show-labels

You should see output like:

```
NAME          AGE   LABELS
kind-worker   1h    environment=dev
```
- Update the promise-template.yaml Promise definition:

```yaml
scheduling:
- target:
    matchLabels:
      environment: dev
```
- Define the Promise dependencies
  - Add the CNPG operator YAML manifest (from their docs) to the resources directory:

```
cnpg-promise
├── promise-template.yaml
├── resources/
│   └── cnpg-operator.yaml
└── pipeline/
    ├── Dockerfile
    ├── execute-pipeline.bash
    ├── yq
    ├── cnpg-request-template.yaml
    └── test/
        ├── build-and-test-pipeline.bash
        ├── output/
        │   └── cnpg-request.yaml
        └── input/
            └── object.yaml
```
- Ensure the Kratix worker-resource-builder binary is installed in your $PATH and is executable
- Run the worker-resource-builder binary to copy the contents of the resources/ folder into your final promise.yaml Promise definition:

worker-resource-builder -promise promise-template.yaml -resources-dir resources > promise.yaml
- Verify the contents of promise.yaml, which was generated by the worker-resource-builder. Ensure the Operator dependency is listed and everything else in the file is defined as it was in promise-template.yaml:

```
cnpg-promise
├── promise-template.yaml
├── promise.yaml
├── resources/
│   └── cnpg-operator.yaml
└── pipeline/
    ├── Dockerfile
    ├── execute-pipeline.bash
    ├── yq
    ├── cnpg-request-template.yaml
    └── test/
        ├── build-and-test-pipeline.bash
        ├── output/
        │   └── cnpg-request.yaml
        └── input/
            └── object.yaml
```
- Install the Promise
  - Install the Promise (note: you need to use create rather than apply in this case because of the size of the Promise):

kubectl --context $PLATFORM create -f promise.yaml
- Check the Operator dependency is installed on the worker cluster:

kubectl --context $WORKER get pods -A --watch
- Make a request for a Promise Resource
  - Create a request for a Resource for the Promise:

```
cnpg-promise
├── promise-template.yaml
├── promise.yaml
├── resource-request.yaml
├── resources/
│   └── cnpg-operator.yaml
└── pipeline/
    ├── Dockerfile
    ├── execute-pipeline.bash
    ├── yq
    ├── cnpg-request-template.yaml
    └── test/
        ├── build-and-test-pipeline.bash
        ├── output/
        │   └── cnpg-request.yaml
        └── input/
            └── object.yaml
```
- Fill out the request:

```yaml
apiVersion: REPLACE
kind: REPLACE
metadata:
  name: example
  namespace: default
spec:
  # Properties from spec.api in the Promise
```
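A filled-out request might look like this, assuming placeholder API names (use whatever group/version/kind you defined in spec.api; the name test-request matches the checks later in these notes):

```yaml
apiVersion: example.promise.io/v1alpha1   # placeholder — your Promise API group/version
kind: postgresql                          # placeholder — your Promise API kind
metadata:
  name: test-request
  namespace: default
spec:
  instances: 3
```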
- Submit the request to the platform cluster for the Promise Resource:

kubectl --context $PLATFORM apply -f resource-request.yaml
- Verify the pipeline ran:

kubectl --context $PLATFORM get pods -A --watch
- Verify the request succeeded:

kubectl --context $WORKER get pods -A --watch
- Verify you can connect to Postgres with the cnpg kubectl plugin:

kubectl cnpg psql test-request -n default
- ✅ We defined the Promise successfully.
- ✅ It installed successfully.
- ✅ We successfully created and submitted a request for a Resource from the Promise.
- 🚩 But then the instance never got created on the worker cluster.
- The yq binary version worked on the local machine but wasn't compatible with the remote base image. Using a different release for the pipeline fixed the issue.
- Minikube has a bug where it fails silently if you try to load a new version of an image that is in use by anything in the system. You should be able to delete the pipeline Pods that use the outdated version of the pipeline image, but even that didn't work -- the pipeline image wasn't getting updated. Instead, we needed to forcibly remove the image:

minikube image rm $PIPELINE_NAME -p $PLATFORM
- We did not start by testing the pipeline locally, which we should have done. The instructions above now include a test script.
The Promise exists:
kubectl --context $PLATFORM get promises.platform.kratix.io
The above command will give an output similar to:
NAME AGE
cnpg 1h
The request exists:
kubectl --context $PLATFORM get postgresql
The above command will give an output similar to:
NAME AGE
test-request 1h
The pipeline completed:
kubectl --context $PLATFORM get pods -w
The above command will give an output similar to:
NAME READY STATUS RESTARTS AGE
postgresql-pipeline-<UID> 1/1 Completed 0 1m
Flux successfully applied the documents output by the pipeline:
kubectl get kustomizations.kustomize.toolkit.fluxcd.io --context $WORKER --all-namespaces --watch
The above command will give an output similar to:
NAMESPACE NAME AGE READY STATUS
flux-system kratix-workload-crds 4d2h True Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5
flux-system kratix-workload-resources 4d2h True Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5
Kratix Works exist for the request (Kratix internals to translate pipeline documents to K8s workloads):
kubectl --context $PLATFORM get works
The above command will give an output similar to:
NAME AGE
cnpg-default-test-request 1h
Kratix WorkPlacements exist for the request (Kratix internals that assign Works to registered clusters):
kubectl --context $PLATFORM get workplacements
The above command will give an output similar to:
NAME AGE
postgresql-default-test-request.worker-cluster 1h
The Kratix Work for the request is as expected (i.e. .spec.workload.manifests contains content):
kubectl --context $PLATFORM get work cnpg-default-test-request -o yaml
The above command should include the YAML created by the pipeline for the PG resource.