@ojhughes
Last active January 25, 2024 21:51
Bootstrapping a Spring project for deployment on Kubernetes with a few commands.

This is my current "dev workflow" for quickly bootstrapping a Spring app on a Kubernetes cluster. The general approach is polyglot, but you will need to use something like https://buildpacks.io instead of Jib to support non-JVM projects.

This example uses Skaffold and kapp, so you will need to install them from https://skaffold.dev/docs/install/ and https://k14s.io/. You will also need kubectl installed and a Kubernetes cluster targeted. You can use Minikube, Kind or K3s for a lightweight, local Kubernetes environment.
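A quick sanity check before starting can save time. This small helper is not part of the original workflow (the tool names are taken from the links above); it fails fast if any required CLI is missing from PATH:

```shell
# Fail fast if a required CLI is missing from PATH.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

missing=0
for tool in kubectl skaffold kapp; do
  require "$tool" || missing=1
done
```

If anything is reported missing, install it before continuing; running `kubectl config current-context` afterwards confirms a cluster is actually targeted.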

export APPNAME=bootstrap-k8s
export DOCKER_REGISTRY=ojhughes

Initialise a Spring Boot application from start.spring.io

curl https://start.spring.io/starter.tgz \
  -d dependencies=web,actuator \
  -d language=groovy \
  -d type=gradle-project \
  -d name=${APPNAME} \
  -d packageName=me.ohughes.${APPNAME} \
  -d baseDir=${APPNAME} | tar -xzvf -

cd ${APPNAME}

This awk snippet adds "id 'com.google.cloud.tools.jib' version '1.8.0'" to the plugins block of build.gradle. The \x27 hex escapes require a recent awk (such as gawk), and awk writes to stdout rather than editing in place, so redirect to a temporary file and move it back (or just edit build.gradle manually):

awk '/^plugins/{print;print "\011id \x27com.google.cloud.tools.jib\x27 version \x271.8.0\x27";next}{print $0}' build.gradle > build.gradle.new && mv build.gradle.new build.gradle
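To see what the edit does without touching a real project, here is a quick demo in a throwaway directory. It uses '\'' quoting instead of \x27 hex escapes so it also runs on awks without hex-escape support, and the build.gradle contents are a minimal stand-in rather than real start.spring.io output:

```shell
# Run the edit against a minimal stand-in build.gradle in a temp directory
cd "$(mktemp -d)"
printf 'plugins {\n\tid "groovy"\n}\n' > build.gradle

# '\'' closes the shell quote, emits a literal single quote, then reopens the quote
awk '/^plugins/{print;print "\tid '\''com.google.cloud.tools.jib'\'' version '\''1.8.0'\''";next}{print}' \
  build.gradle > build.gradle.new && mv build.gradle.new build.gradle

cat build.gradle
```

The Jib plugin line is inserted immediately after the opening `plugins {` line, leaving the rest of the file untouched.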

Create directory for all Kubernetes manifest files

mkdir k8s && cd k8s

Create a deployment using kubectl create and the built-in generators. We will use --dry-run -o yaml > deployment.yaml to save the manifest instead of creating the deployment imperatively (on kubectl 1.18 and later the flag is --dry-run=client).

kubectl create deployment ${APPNAME} --image=${DOCKER_REGISTRY}/${APPNAME} --dry-run -oyaml  > deployment.yaml

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: bootstrap-k8s
  name: bootstrap-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bootstrap-k8s
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bootstrap-k8s
    spec:
      containers:
      - image: ojhughes/bootstrap-k8s
        name: bootstrap-k8s
        resources: {}
status: {}
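The generated manifest leaves resources and probes empty. Since the app pulls in Actuator, a readiness probe against its health endpoint is a natural addition; here is a sketch of what could be merged into the container entry (the path, port and resource numbers are assumptions based on Spring Boot defaults, not generated output):

```yaml
# Sketch: merge into the container entry in deployment.yaml
readinessProbe:
  httpGet:
    path: /actuator/health   # provided by the actuator dependency
    port: 8080               # Spring Boot's default server port
  initialDelaySeconds: 10
  periodSeconds: 5
resources:
  requests:
    cpu: 250m
    memory: 512Mi
```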

Expose the service using the same --dry-run -o yaml pattern we used when creating the deployment. This generates a valid Service definition for the deployment. We will keep the default type, ClusterIP, which means we will need a forwarded port to access the service. ClusterIP has its advantages: it works across all k8s platforms and doesn't use up a public IP or a cloud load balancer. For production usage, an Ingress resource can be used to forward traffic to the ClusterIP.

kubectl expose -f deployment.yaml --port=80 --target-port=8080 --dry-run -oyaml > service.yaml

service.yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: bootstrap-k8s
  name: bootstrap-k8s
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: bootstrap-k8s
status:
  loadBalancer: {}
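For the production case mentioned above, a minimal Ingress routing external traffic to the ClusterIP service could look like the sketch below (the host is a placeholder, and an ingress controller must be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bootstrap-k8s
spec:
  rules:
  - host: bootstrap-k8s.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bootstrap-k8s   # the Service generated above
            port:
              number: 80
```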

Now that we have our k8s manifest files prepared, we can initialise our Skaffold manifest. Skaffold isn't required, but it makes life a little easier for development by providing a local workflow which will:

  • Build your image (using Jib in this case, but buildpacks or a Dockerfile could also be used)
  • Create a tag for the image based on the Git commit SHA (other strategies can be used, such as timestamp or custom)
  • Push the image to a registry
  • Rewrite the Kubernetes deployment manifest to reference the tag previously generated
  • Deploy to Kubernetes (this is optional; we will also show how other tools such as kapp can be used for deployment)
  • If using kubectl or helm for deployment with Skaffold, rebuild on change, watch logs, forward ports and set up a remote debugger

cd ..
skaffold init --XXenableJibInit

skaffold.yaml

apiVersion: skaffold/v1
kind: Config
metadata:
  name: bootstrap-k-s
build:
  artifacts:
  - image: ojhughes/bootstrap-k8s
    jib: {}
deploy:
  kubectl:
    manifests:
    - k8s/deployment.yaml
    - k8s/service.yaml

Now you can build your app image, tag it in a useful way, push it to Docker Hub, deploy it to Kubernetes and forward a port from your service, all with a single command:

skaffold dev --port-forward

skaffold output

Listing files to watch...
 - ojhughes/bootstrap-k8s
Generating tags...
 - ojhughes/bootstrap-k8s -> ojhughes/bootstrap-k8s:8a89608
Checking cache...
 - ojhughes/bootstrap-k8s: Found. Tagging
Tags used in deployment:
 - ojhughes/bootstrap-k8s -> ojhughes/bootstrap-k8s:8a89608@sha256:eb1c0e80ae3982a61bbb82ce7b65050d64d6c25b6ccbfd49e632efab394c2232
Starting deploy...
 - deployment.apps/bootstrap-k8s created
 - service/bootstrap-k8s created
Port forwarding service/bootstrap-k8s in namespace todo, remote port 80 -> local port 4503
Watching for changes...

Skaffold will delete the deployment once you Ctrl+C out of the watcher. You can use skaffold run to create the resources without watching.

My preferred tool for deploying to Kubernetes is not kubectl but kapp (see https://k14s.io), because kapp does a much better job of managing dependencies between resources. You can use annotations within your manifests to describe the ordering of resource creation, plus a number of other features (see the docs).

Here is how to take advantage of Skaffold's workflow for building, tagging, pushing and rewriting manifests in conjunction with kapp. Unfortunately this approach means you cannot use Skaffold's deployment features such as watching and port forwarding, but for me it is worth it to use kapp.

kapp deploy -a $(yq r skaffold.yaml metadata.name) -f <(skaffold render)
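Two shell features do the plumbing in that one-liner: `$(yq r skaffold.yaml metadata.name)` reads the app name out of skaffold.yaml (`yq r` is yq v3 syntax), and `<(skaffold render)` is bash process substitution, which hands kapp the rendered manifests through a file-like path (e.g. /dev/fd/63) without writing a temp file. A generic demo of the process-substitution pattern, with no k8s tools involved:

```shell
#!/usr/bin/env bash
# <(cmd) expands to a path whose contents are cmd's stdout, so any tool
# expecting a file argument can consume another command's output directly.
sort <(printf 'b\na\nc\n')
```

The same shape lets `-f <file>` style flags read from pipelines, which is exactly how kapp consumes skaffold render's output above.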

Hopefully in the future it will be possible to run kapp natively within a Skaffold pipeline, see GoogleContainerTools/skaffold#2277

@ojhughes (Author):

I think you are missing a cd k8s before running the kubectl ... --dry-run -oyaml commands

Thanks, this is fixed now
