Promise Building workshop

Promise Writing Session

Verify system setup

Set environment variables to make it easier to run the commands throughout this guide:

export PLATFORM="kind-platform"
export WORKER="kind-worker"

Ensure multi-cluster Minikube setup with Kratix installed

export MINIKUBE_HOME=/app/minikube_volume

Check for existing clusters:

minikube profile list

The above command will give an output similar to:

|---------------|-----------|---------|--------------|------|---------|---------|-------|
|    Profile    | VM Driver | Runtime |      IP      | Port | Version | Status  | Nodes |
|---------------|-----------|---------|--------------|------|---------|---------|-------|
| kind-platform | docker    | docker  | 192.168.49.2 | 8443 | v1.23.3 | Running |     1 |
| kind-worker   | docker    | docker  | 192.168.58.2 | 8443 | v1.23.3 | Running |     1 |
|---------------|-----------|---------|--------------|------|---------|---------|-------|

Check that Kratix is installed on the platform cluster:

kubectl --context $PLATFORM get deployments --namespace kratix-platform-system

The above command will give an output similar to:

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
kratix-platform-controller-manager   1/1     1            1           1h
minio                                1/1     1            1           1h

The worker cluster is registered and ready

Check the worker cluster has Flux configured:

kubectl get kustomizations.kustomize.toolkit.fluxcd.io --context $WORKER --all-namespaces

The above command will give an output similar to:

NAMESPACE     NAME                        AGE    READY   STATUS
flux-system   kratix-workload-crds        4d2h   True    Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5
flux-system   kratix-workload-resources   4d2h   True    Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5

The worker cluster is registered with Kratix:

kubectl get clusters.platform.kratix.io --context $PLATFORM --all-namespaces

The above command will give an output similar to:

NAMESPACE   NAME               AGE
default     worker-cluster     1h

The BucketStateStore and MinIO are ready

kubectl --context $PLATFORM get bucketstatestores.platform.kratix.io

The above command will give an output similar to:

NAME          AGE
minio-store   1h

To discover more about the state store:

kubectl --context $PLATFORM describe bucketstatestore minio-store

Ensure images are uploaded and available

  • redhat/ubi8 base image

  • CNPG operator and PostgreSQL (PG) images

  • Ensure the lab environment doesn't already have a conflicting version of the software deployed (e.g. if you're working with CNPG and a controller is already deployed, be sure to delete it first).
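
If the lab is offline or restricted, the images can be pre-pulled and loaded into the relevant Minikube profile ahead of time. A minimal sketch (the CNPG image name and tag below are illustrative; which profile needs which image depends on where it runs, with the pipeline image on the platform cluster and the operator/PostgreSQL images on the worker):

  docker pull redhat/ubi8
  minikube image load redhat/ubi8 -p $PLATFORM
  docker pull ghcr.io/cloudnative-pg/cloudnative-pg:1.20.1
  minikube image load ghcr.io/cloudnative-pg/cloudnative-pg:1.20.1 -p $WORKER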

Install additional binaries for Promise building

Choose the build for your architecture from each project's releases page.

  • yq version 4: a YAML manipulation and inspection tool
  • worker-resource-builder: the Kratix binary for injecting the CNPG dependency definitions into your final promise.yaml file.
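
For example, on a Linux amd64 machine (asset names and URLs vary by release and architecture, so treat these as illustrative and check each project's releases page):

  # yq v4 from the mikefarah/yq releases page
  curl -sSLo yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
  chmod +x yq && sudo mv yq /usr/local/bin/

  # worker-resource-builder: download the asset for your OS/arch from the Kratix
  # releases page, then make it executable and put it on your PATH
  chmod +x worker-resource-builder && sudo mv worker-resource-builder /usr/local/bin/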

Steps

  1. Create folder structure and cd into cnpg-promise

    📂 cnpg-promise 🆕
    ├── promise-template.yaml 🆕
    ├── 📂  resources/ 🆕
    └── 📂  pipeline/ 🆕
  2. Add base properties to the Promise definition in promise-template.yaml

  3. Name the Promise and add the spec keys

    apiVersion: platform.kratix.io/v1alpha1
    kind: Promise
    metadata:
      name: <REPLACE>
    spec:
      api: {}
      dependencies: []
      scheduling: {}
      workflows: {}
  4. Define the Promise API

    api:
      apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      metadata:
        name: <REPLACE>
      spec:
        group: <REPLACE>
        names:
          kind: <REPLACE>
          plural: <REPLACE>
          singular: <REPLACE>
        scope: Namespaced
        versions:
          - name: v1alpha1
            schema:
              openAPIV3Schema:
                properties:
                  spec:
                    properties:
                      # instances:
                      # description: The number of instances in cluster
                      # type: integer
                    # required:
                    # - instances
                    type: object
                type: object
            served: true
            storage: true
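
     A possible way to fill in the placeholders for this CNPG Promise, with the instances property uncommented so the pipeline can use it later (the group and kind names are illustrative and any consistent set works, but metadata.name must be <plural>.<group>):

       api:
         apiVersion: apiextensions.k8s.io/v1
         kind: CustomResourceDefinition
         metadata:
           name: postgresqls.workshop.example.com
         spec:
           group: workshop.example.com
           names:
             kind: Postgresql
             plural: postgresqls
             singular: postgresql
           scope: Namespaced
           versions:
             - name: v1alpha1
               schema:
                 openAPIV3Schema:
                   properties:
                     spec:
                       properties:
                         instances:
                           description: The number of instances in cluster
                           type: integer
                       required:
                         - instances
                       type: object
                   type: object
               served: true
               storage: true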
  5. Define the Promise configure workflow (pipeline)

    1. Decide a name for your pipeline image and set an environment variable to make things easier.

      export PIPELINE_NAME="<REPLACE/REPLACE:v0.0.0>"
    2. Copy the yq binary into pipeline/ (ensure you have the correct version!)

          📂 cnpg-promise
          ├── promise-template.yaml
          ├── 📂  resources/
          └── 📂  pipeline/
              └── yq 🆕
      
    3. Copy the CNPG request you'd like to use (from their docs) to provision PG instances; an example template follows the tree below

        📂 cnpg-promise
        ├── promise-template.yaml
        ├── 📂  resources/
        └── 📂  pipeline/
            ├── yq
            └── cnpg-request-template.yaml 🆕
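
       A minimal sketch of cnpg-request-template.yaml, based on the cluster-example from the CNPG docs, using TBDNAME and TBDINSTANCECOUNT placeholders that the pipeline script (defined in the next step) substitutes with values from the request:

         apiVersion: postgresql.cnpg.io/v1
         kind: Cluster
         metadata:
           name: TBDNAME
         spec:
           instances: TBDINSTANCECOUNT
           storage:
             size: 1Gi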
      
    4. Define what happens in the pipeline

      1. Create the script that the Dockerfile will run (it executes whenever a new request arrives, to generate the workload)

          📂 cnpg-promise
          ├── promise-template.yaml
          ├── 📂  resources/
          └── 📂  pipeline/
              ├── execute-pipeline.bash 🆕
              ├── yq
              └── cnpg-request-template.yaml
        
      2. Write the script. Here's an example:

          #!/usr/bin/env sh
        
          set -xe
        
          # Read current values from the provided resource request
          export name="$(./yq eval '.metadata.name' /input/object.yaml)"
          export instances="$(./yq eval '.spec.instances' /input/object.yaml)"
        
          # Replace defaults with user provided values
          sed "s/TBDNAME/${name}/g" /tmp/transfer/cnpg-request-template.yaml > cnpg-request.yaml
          sed "s/TBDINSTANCECOUNT/${instances}/g" cnpg-request.yaml > /output/cnpg-request.yaml
      3. Set permissions

        chmod +x pipeline/execute-pipeline.bash
    5. Define the Dockerfile

      1. Create the Dockerfile that will execute on the Pod for the workflow

          📂 cnpg-promise
          ├── promise-template.yaml
          ├── 📂 resources/
          └── 📂 pipeline/
              ├── Dockerfile 🆕
              ├── execute-pipeline.bash
              ├── yq
              └── cnpg-request-template.yaml
      2. Write the Dockerfile. Here's an example:

          FROM redhat/ubi8

          # Staging directory the pipeline script reads the request template from
          RUN [ "mkdir", "/tmp/transfer" ]

          ADD cnpg-request-template.yaml /tmp/transfer/
          ADD execute-pipeline.bash execute-pipeline.bash
          ADD yq yq

          # Run the pipeline script when the workflow Pod starts
          CMD [ "sh", "-c", "./execute-pipeline.bash" ]
          ENTRYPOINT []
    6. 🆕 Test the pipeline locally (not covered in our session on Tuesday)

      1. Within the pipeline/ directory, create new folders and files as follows (an example object.yaml follows the tree):

          📂 cnpg-promise
          ├── promise-template.yaml
          ├── 📂 resources/
          └── 📂 pipeline/
              ├── Dockerfile
              ├── execute-pipeline.bash
              ├── yq
              ├── cnpg-request-template.yaml
              └── 📂 test/ 🆕
                  ├── build-and-test-pipeline.bash 🆕
                  ├── 📂 output/ 🆕
                  └── 📂 input/ 🆕
                      └── object.yaml 🆕
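
         input/object.yaml stands in for the resource request that Kratix would hand to the pipeline. A sketch that matches the illustrative API names used earlier (adjust apiVersion and kind to whatever you defined in your Promise API):

           apiVersion: workshop.example.com/v1alpha1
           kind: Postgresql
           metadata:
             name: test-request
             namespace: default
           spec:
             instances: 3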
        
      2. In build-and-test-pipeline.bash, define a test script that builds and runs the pipeline locally to validate that, given a specific input, you get the expected output.

          #!/usr/bin/env bash

          set -eu -o pipefail

          # Directory this script lives in (pipeline/test/)
          testdir=$(cd "$(dirname "$0")"; pwd)
          pipelinename="<UPDATE TO MATCH $PIPELINE_NAME>"

          # Build the pipeline image from the pipeline/ directory
          docker build --tag $pipelinename $testdir/..

          inputDir="$testdir/input"
          outputDir="$testdir/output"

          # Clear output from any previous run
          rm -f $outputDir/*

          # Run the pipeline locally, mounting the test input and output where
          # the pipeline script expects them
          docker run --rm \
            --volume ${inputDir}:/input \
            --volume ${outputDir}:/output \
            $pipelinename
      3. Make the script executable

        chmod +x build-and-test-pipeline.bash
      4. Run the script

        ./build-and-test-pipeline.bash

        You should now have a file in output/ which is a definition of the workload for Kubernetes.

        Verify that the file contains the workload definition you expected. If it doesn't, adjust your Dockerfile and pipeline script and run the test script until you get the expected output.

        📂 cnpg-promise
        ├── promise-template.yaml
        ├── 📂 resources/
        └── 📂 pipeline/
            ├── Dockerfile
            ├── execute-pipeline.bash
            ├── yq
            ├── cnpg-request-template.yaml
            └── 📂 test/
                ├── build-and-test-pipeline.bash
                ├── 📂 output/
                │   └── cnpg-request.yaml 🆕
                └── 📂 input/
                    └── object.yaml
        

        Testing pipeline images this way lets you quickly validate that a stage outputs exactly what you expect, without touching Kubernetes at all. Treating images as independent pieces of software with their own development lifecycle (fully testable, easy to run locally, released independently) lets platform teams move faster and share and reuse images across their Promises.

    7. Once you're happy that your pipeline is ready, make sure the image is available to the cluster.

        docker build -f pipeline/Dockerfile -t $PIPELINE_NAME pipeline/
        minikube image load $PIPELINE_NAME -p $PLATFORM
    8. Finally, be sure to update the promise-template.yaml Promise definition with a reference to the workflow

      workflows:
        resource:
          configure:
            - apiVersion: platform.kratix.io/v1alpha1
              kind: Pipeline
              metadata:
                name: configure-resource
                namespace: default
              spec:
                containers:
                  - image: <UPDATE TO MATCH $PIPELINE_NAME>
                    name: REPLACE
  6. Define scheduling to create Resources in the correct destinations

    1. Confirm existing labels

      kubectl --context $PLATFORM get clusters --show-labels

      You should see output like:

      NAME         AGE   LABELS
      kind-worker  1h    environment=dev
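
      If your registered cluster doesn't yet carry the label you want to schedule on, it can be added directly to the Kratix cluster resource (the cluster name below matches the output above; substitute yours):

        kubectl --context $PLATFORM label clusters.platform.kratix.io kind-worker environment=dev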
    2. Update the promise-template.yaml Promise definition

      scheduling:
        - target:
            matchLabels:
              environment: dev
  7. Define the Promise dependencies

    1. Add the CNPG operator YAML manifest (from their docs) to the resources directory; an example fetch command follows the tree below

        📂 cnpg-promise
        ├── promise-template.yaml
        ├── 📂  resources/
        │   └── cnpg-operator.yaml 🆕
        └── 📂 pipeline/
            ├── Dockerfile
            ├── execute-pipeline.bash
            ├── yq
            ├── cnpg-request-template.yaml
            └── 📂 test/
                ├── build-and-test-pipeline.bash
                ├── 📂 output/
                │   └── cnpg-request.yaml
                └── 📂 input/
                    └── object.yaml
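
      One way to fetch the manifest into resources/ (the URL and version below are illustrative; take the current one from the CNPG installation docs):

        curl -sSL -o resources/cnpg-operator.yaml \
          https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.20/releases/cnpg-1.20.1.yaml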
      
    2. Ensure the Kratix worker-resource-builder binary is installed in your $PATH and is executable

    3. Run the worker-resource-builder binary to copy the contents of the resources/ folder into your final promise.yaml Promise definition

        worker-resource-builder -promise promise-template.yaml -resources-dir resources > promise.yaml
    4. Verify the contents of promise.yaml, which was generated by the worker-resource-builder. Ensure the Operator dependency is listed and everything else in the file is defined as it was in the promise-template.yaml

        📂 cnpg-promise
        ├── promise-template.yaml
        ├── promise.yaml 🆕
        ├── 📂  resources/
        │   └── cnpg-operator.yaml
        └── 📂 pipeline/
            ├── Dockerfile
            ├── execute-pipeline.bash
            ├── yq
            ├── cnpg-request-template.yaml
            └── 📂 test/
                ├── build-and-test-pipeline.bash
                ├── 📂 output/
                │   └── cnpg-request.yaml
                └── 📂 input/
                    └── object.yaml
      
  8. Install the Promise

    1. Install the Promise (note: use create rather than apply here; the Promise is too large for the last-applied-configuration annotation that apply adds)

        kubectl --context $PLATFORM create -f promise.yaml
    2. Check the Operator dependency is installed on the worker cluster

        kubectl --context $WORKER get pods -A --watch
  9. Make a request for a Promise Resource

    1. Create a request for a Resource for the Promise

      📂 cnpg-promise
      ├── promise-template.yaml
      ├── promise.yaml
      ├── resource-request.yaml 🆕
      ├── 📂  resources/
      │   └── cnpg-operator.yaml
      └── 📂 pipeline/
          ├── Dockerfile
          ├── execute-pipeline.bash
          ├── yq
          ├── cnpg-request-template.yaml
          └── 📂 test/
              ├── build-and-test-pipeline.bash
              ├── 📂 output/
              │   └── cnpg-request.yaml
              └── 📂 input/
                  └── object.yaml
      
    2. Fill out the request

      apiVersion: REPLACE
      kind: REPLACE
      metadata:
        name: example
        namespace: default
      spec:
        # Properties from spec.api in Promise
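
      A filled-in sketch, assuming the illustrative group and kind from the Promise API step; naming it test-request lines up with the verification commands in the debug notes below:

        apiVersion: workshop.example.com/v1alpha1
        kind: Postgresql
        metadata:
          name: test-request
          namespace: default
        spec:
          instances: 3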
    3. Submit the request to the platform cluster for the Promise Resource

        kubectl --context $PLATFORM apply -f resource-request.yaml
    4. Verify the pipeline ran

        kubectl --context $PLATFORM get pods -A --watch
    5. Verify request succeeded

        kubectl --context $WORKER get pods -A --watch
    6. Verify you can connect to Postgres with the cnpg kubectl plugin

        kubectl cnpg psql test-request -n default

Debug notes

Issues we ran into

✅ We defined the Promise successfully.
✅ It installed successfully.
✅ We successfully created and submitted a request for a Resource from the Promise.
😩 But then the instance never got created on the worker cluster.

Discoveries

  • The yq binary release worked on the local machine but wasn't compatible with the base image used by the pipeline. Using a different yq release for the pipeline fixed the issue.

  • Minikube has a bug where it fails silently if you try to load a new version of an image that is in use by anything in the system. You should be able to delete the pipeline pods that use the outdated version of the pipeline image, but even that didn't work -- the pipeline image wasn't getting updated. Instead, we needed to forcibly remove the image:

    minikube image rm $PIPELINE_NAME -p $PLATFORM
  • We did not start with testing the pipeline locally, which we should have done. The instructions above now include a test script.

General sanity checks we covered during investigation

The Promise exists:

kubectl --context $PLATFORM get promises.platform.kratix.io

The above command will give an output similar to:

NAME      AGE
cnpg      1h

The request exists:

kubectl --context $PLATFORM get postgresql

The above command will give an output similar to:

NAME              AGE
test-request      1h

The pipeline completed:

kubectl --context $PLATFORM get pods -w

The above command will give an output similar to:

NAME                        READY   STATUS      RESTARTS   AGE
postgresql-pipeline-<UID>   1/1     Completed   0          1m

Flux successfully applied the documents output by the pipeline:

kubectl get kustomizations.kustomize.toolkit.fluxcd.io --context $WORKER --all-namespaces --watch

The above command will give an output similar to:

NAMESPACE     NAME                        AGE    READY   STATUS
flux-system   kratix-workload-crds        4d2h   True    Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5
flux-system   kratix-workload-resources   4d2h   True    Applied revision: c906c5e46becfe35302b092ef405aadac696cf12095d0b038d9b44f3855a44c5

Kratix Works exist for the request (Kratix internals that translate pipeline documents into K8s workloads):

kubectl --context $PLATFORM get works

The above command will give an output similar to:

NAME                        AGE
cnpg-default-test-request   1h

Kratix WorkPlacements exist for the request (Kratix internals that schedule each Work onto a registered cluster):

kubectl --context $PLATFORM get workplacements

The above command will give an output similar to:

NAME                                              AGE
postgresql-default-test-request.worker-cluster    1h

The Kratix Work for the request is as expected (i.e. .spec.workload.manifests contains content):

kubectl --context $PLATFORM get work cnpg-default-test-request -o yaml

The above command should include the YAML created by the pipeline for the PG resource.
