Kubernetes Practical Intro for Data Scientists

Today we're going to show you how to deploy an application on Kubernetes. You'll only need Docker Desktop installed locally with Kubernetes enabled.

Testing Our Application with Docker

To deploy an app on Kubernetes, we first need to make sure it works locally with Docker. We'll use the example FastAPI app here:

# clone repo
git clone
cd fast-bad-ml

# build and run app with docker 
docker build -t ml-app .
docker run -p 5000:5000 -it ml-app

Ensure you can reach the app at: http://localhost:5000/ and make a prediction at http://localhost:5000/predict?feature_1=0&feature_2=1&feature_3=2.

This is a simple ML application that takes 3 features and returns a binary prediction.
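To make the behavior concrete, here is a hedged sketch of the kind of logic that could sit behind the /predict endpoint. The function name and decision rule below are made up for illustration and are not taken from the fast-bad-ml repo:

```python
# Hypothetical stand-in for the model behind /predict: maps three
# numeric features to a binary label. The real fast-bad-ml model differs.
def predict_label(feature_1: float, feature_2: float, feature_3: float) -> int:
    # Toy decision rule: class 1 when the feature sum is positive
    return int(feature_1 + feature_2 + feature_3 > 0)

# The example query string above (?feature_1=0&feature_2=1&feature_3=2)
# corresponds to this call:
print(predict_label(0, 1, 2))  # → 1
```

In the real app, FastAPI parses `feature_1`, `feature_2`, and `feature_3` from the query string and returns the prediction as JSON.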

Verify Kubernetes is Working With Docker Desktop

When you enable Kubernetes in Docker Desktop, it installs the Kubernetes CLI, kubectl, and configures it for your local cluster. First, verify that kubectl is working:

kubectl version

Now we can explore the cluster with a few basic commands.

# ensure you're using the docker-desktop cluster
kubectl config use-context docker-desktop

# check the "nodes" for your cluster (for docker desktop it's just 1)
kubectl get nodes

# check the namespaces (logical separation of resources)
kubectl get ns 

# check the pods running in a given namespace
kubectl get pods -n kube-system

Cluster Setup

We have one quick setup step for our cluster: installing the NGINX ingress controller. It will route traffic to our applications and make URLs available outside the cluster.

kubectl apply -f

Verify installation by going to http://localhost:80 in a browser. You should see Not Found.

Write the Object Definitions for our Application

To create Kubernetes resources for your containerized application, you need to write Kubernetes object definitions. Typically this is done in YAML (YAML Ain't Markup Language) or JSON. We will use YAML to define the resources we went over today.

Deployment will define the pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-app-deployment
  labels:
    app: ml-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ml-app
  template:
    metadata:
      labels:
        app: ml-app
    spec:
      containers:
        - name: ml-app
          image: ml-app
          imagePullPolicy: Never
          ports:
            - containerPort: 5000

Service will provide a layer of abstraction over all our pod replicas:

apiVersion: v1
kind: Service
metadata:
  name: ml-app-service
spec:
  selector:
    app: ml-app
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000

Ingress will let users access our application from outside the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ml-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: ml-app-service
                port:
                  number: 5000

We can apply these to our cluster directly from the fast-bad-ml directory with:

kubectl apply -f deployment.yaml  
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

To verify it's working, go to http://localhost:80. To make a prediction, use http://localhost:80/predict?feature_1=0&feature_2=1&feature_3=2.
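One payoff of the Deployment abstraction is that scaling out is just an edit to the replicas field in deployment.yaml (3 below is an arbitrary example count), followed by re-running kubectl apply -f deployment.yaml. The Service then spreads traffic across all matching pods:

```yaml
# In deployment.yaml: run more copies of the pod.
# 3 is an arbitrary example count, not a recommendation.
spec:
  replicas: 3
```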


We just created an app, made a Docker image for it, and deployed that app to Kubernetes on our local machine.

To do this in the cloud, you can use managed Kubernetes services like EKS (on AWS) or GKE (on GCP) and apply these same Kubernetes objects there to have them run at scale.
