Example deploying iperf3 network analysis server and client in Kubernetes cluster pods to test network throughput
#!/usr/bin/env bash
#####################################################################
# REFERENCES
# - https://github.com/esnet/iperf
# - https://cloud.google.com/artifact-registry/docs/docker/store-docker-container-images
# - https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
# - https://kubernetes.io/docs/concepts/workloads/controllers/job/
#####################################################################
export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_USER=$(gcloud config get-value core/account) # set current user
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
export IDNS=${PROJECT_ID}.svc.id.goog # workload identity domain
export GCP_REGION="us-central1" # CHANGEME (OPT)
export GCP_ZONE="us-central1-a" # CHANGEME (OPT)
export NETWORK_NAME="default"
# enable apis
gcloud services enable compute.googleapis.com \
  container.googleapis.com \
  storage.googleapis.com \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com
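# (optional) confirm the required services are enabled
gcloud services list --enabled | grep -E "compute|container|storage|artifactregistry|cloudbuild"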
# configure gcloud sdk
gcloud config set compute/region $GCP_REGION
gcloud config set compute/zone $GCP_ZONE
###########################################
# DOCKER IMAGE
###########################################
# build entrypoint start script
cat > entrypoint.sh << EOF
#!/usr/bin/env bash
set -e

echo "IPERF_MODE=\$IPERF_MODE"
echo "IPERF_PORT=\$IPERF_PORT"
echo "IPERF_TARGET=\$IPERF_TARGET"

if [[ "\$IPERF_MODE" = "client" ]] && [[ -n "\$IPERF_TARGET" ]]; then
  echo "Running iperf in client mode using port \$IPERF_PORT ..."
  echo "Targeting IP: \$IPERF_TARGET ..."
  sleep 5
  iperf3 -c \$IPERF_TARGET -p \$IPERF_PORT
  sleep 5
elif [[ "\$IPERF_MODE" = "server" ]]; then
  echo "Running iperf in server mode using port \$IPERF_PORT ..."
  iperf3 -s -p \$IPERF_PORT
else
  echo "Missing one or more env params (IPERF_MODE, IPERF_PORT, IPERF_TARGET)"
  exit 1
fi
EOF
chmod +x entrypoint.sh
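# (optional) syntax-check the generated entrypoint before baking it into the image
bash -n entrypoint.sh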
# build iperf image
export ALPINE_IMG_TAG="3.17.2"
export IPERF_PORT="5201"
cat > Dockerfile << EOF
FROM alpine:$ALPINE_IMG_TAG

# build-time defaults; override with --build-arg, or at run time via env vars
ARG port=5201
ARG mode=server
ENV IPERF_PORT=\$port
ENV IPERF_MODE=\$mode
ENV IPERF_TARGET=""

RUN apk add --no-cache \\
      iperf3 \\
      bash

COPY ./entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
EOF
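# (optional) local smoke test of the image before pushing; assumes a local Docker
# daemon, and "iperf-local" / "iperf-smoke" are illustrative names only
docker build -t iperf-local .
docker run --rm -d --name iperf-smoke iperf-local                # server on 5201
docker run --rm --network container:iperf-smoke \
  -e IPERF_MODE=client -e IPERF_TARGET=127.0.0.1 iperf-local     # client against it
docker rm -f iperf-smoke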
#############################################################
# ARTIFACT REGISTRY
# - WARNING: building on an arm-based Mac (Apple Silicon) without cross-compiling
#   produces an image that won't run on amd64 nodes; run the build / tag / push
#   commands from a temp bastion or Cloud Shell, or use buildx with --platform (below)
#############################################################
export REPO_NAME="demo-repo"
export IMAGE_NAME="iperf"
export TAG_NAME="1.1"
export IMAGE_PATH=$GCP_REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME/$IMAGE_NAME:$TAG_NAME
export IMAGE_PLATFORM="linux/amd64" # or linux/arm64, linux/arm/v7, linux/arm/v6
gcloud artifacts repositories create $REPO_NAME \
  --repository-format=docker \
  --location=$GCP_REGION \
  --description="Docker repository"
# configure auth
gcloud auth configure-docker ${GCP_REGION}-docker.pkg.dev
# check available platforms
docker buildx inspect --bootstrap
# build and tag image
docker buildx build -t $IMAGE_PATH --platform=$IMAGE_PLATFORM .
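# NOTE: with the default "docker" buildx driver the built image is loaded into the
# local daemon automatically; if your active builder uses the docker-container
# driver, add --load (single platform) or --push so the push step below can find it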
# push image to artifact registry
docker push $IMAGE_PATH
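# (optional) verify the image landed in Artifact Registry
gcloud artifacts docker images list $GCP_REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME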
###########################################
# K8S CLUSTER
###########################################
export CLUSTER_NAME="central"
export IPERF_PORT="5201"
export IPERF_SVC_NAME="iperf-server"
# (optional) create GKE cluster
gcloud container clusters create $CLUSTER_NAME \
  --region $GCP_REGION \
  --num-nodes 1
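# fetch cluster credentials in case kubectl is not already pointed at this cluster
# (cluster create normally updates kubeconfig automatically)
gcloud container clusters get-credentials $CLUSTER_NAME --region $GCP_REGION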
# create server pod manifest and apply it
cat << EOF | tee server.yaml | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: $IPERF_SVC_NAME
  name: $IPERF_SVC_NAME
spec:
  containers:
  - name: $IPERF_SVC_NAME
    image: $IMAGE_PATH
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "768Mi"
        cpu: "1000m"
    ports:
    - containerPort: $IPERF_PORT
    env:
    - name: IPERF_PORT
      value: "$IPERF_PORT"
  restartPolicy: Always
EOF
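# wait for the server pod to be ready before exposing and testing it
kubectl wait --for=condition=Ready pod/$IPERF_SVC_NAME --timeout=120s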
# expose server
kubectl expose pod/$IPERF_SVC_NAME --port $IPERF_PORT
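# (optional) confirm the ClusterIP service exists and note its address
kubectl get service $IPERF_SVC_NAME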
# create client job manifest and apply it
cat << EOF | tee client.yaml | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: iperf-client-test
spec:
  backoffLimit: 1
  ttlSecondsAfterFinished: 30
  template:
    spec:
      containers:
      - name: iperf-client-test
        image: $IMAGE_PATH
        env:
        - name: IPERF_PORT
          value: "$IPERF_PORT"
        - name: IPERF_MODE
          value: "client"
        - name: IPERF_TARGET
          value: "$IPERF_SVC_NAME"
        resources: {}
      restartPolicy: Never
EOF
# inspect output in cloud logging console
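# (optional) or follow the job from the CLI; ttlSecondsAfterFinished=30 means the
# job and its pod are cleaned up ~30s after completion, so fetch logs promptly
kubectl wait --for=condition=complete job/iperf-client-test --timeout=180s
kubectl logs job/iperf-client-test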
mikesparr commented Mar 28, 2023

iperf3 network test between pods

Sometimes, to rule out infrastructure issues versus application issues, it is necessary to install testing tools on a cluster. This example illustrates how to build a custom Dockerfile and entrypoint.sh script that run iperf3 to test the network bandwidth / throughput of pod-to-pod communication on a cluster.

Potential enterprise problems

  • security policy restricts using unknown/untrusted public Docker images

Solution

  • create your own entrypoint and Dockerfile and push to your own container registry
  • create your own server and client pods for total control / optional configuration

Result

Builds a tiny image (< 10 MB)


Runs the k8s Job and logs the results

