
@AaradhyaSaxena
Last active March 30, 2023 05:01
Load/Performance Testing

Locust

Events

If you want to run some setup code as part of your test, it is often enough to put it at the module level of your locustfile, but sometimes you need to do things at particular times in the run. For this need, Locust provides event hooks.

test_start and test_stop

If you need to run some code at the start or stop of a load test, you should use the test_start and test_stop events. You can set up listeners for these events at the module level of your locustfile:

from locust import events

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    print("A new test is starting")

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    print("A new test is ending")
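Under the hood, each Locust event is a hook object: add_listener registers a callback (and is usable as a decorator because it returns the callback), and the event later fires every callback with keyword arguments. A minimal stand-in, not Locust's actual EventHook implementation, sketching the pattern:

```python
class EventHook:
    """Simplified stand-in for locust.event.EventHook (illustration only)."""

    def __init__(self):
        self._handlers = []

    def add_listener(self, handler):
        # Return the handler so this method works as a decorator.
        self._handlers.append(handler)
        return handler

    def fire(self, **kwargs):
        # Call every registered listener with the same keyword arguments.
        for handler in self._handlers:
            handler(**kwargs)


test_start = EventHook()

@test_start.add_listener
def on_test_start(environment, **kwargs):
    print(f"A new test is starting against {environment}")

# In Locust, the runner fires this when the test begins; here we fire it by hand:
test_start.fire(environment="http://localhost:8089")
```

This is why listeners accept `**kwargs`: the runner may pass extra keyword arguments, and listeners that ignore them stay forward-compatible.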

Apache Benchmark

Benchmarking

ab -n 100 -c 10 https://www.apache.org/
  • -n is the number of requests to perform for the benchmarking session.
  • -c is the concurrency: the number of requests to issue at a time.

Plotting the result

Add the -g option followed by a file name (here out.data) in which ab will save the per-request timing data:

ab -n 100 -c 10 -g out.data https://www.apache.org/

Plotting

gnuplot

Since we are working in a terminal and assuming no graphics are available, we can use gnuplot's dumb terminal, which renders the plot as ASCII art directly in the terminal.

gnuplot> set terminal dumb
gnuplot> plot "out.data" using 9  w l

Here column 9 corresponds to the ttime (total request time) field: gnuplot splits each line on whitespace, and the human-readable timestamp in the first field occupies five columns, shifting the remaining data columns.
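If gnuplot is not available, the same out.data file (tab-separated, with a header row of starttime, seconds, ctime, dtime, ttime, and wait columns) can be summarized directly in Python. A sketch using a synthetic two-row sample in place of a real ab run:

```python
import csv
import io

# Synthetic stand-in for the out.data file written by `ab -g out.data ...`.
sample = (
    "starttime\tseconds\tctime\tdtime\tttime\twait\n"
    "Tue Jun 06 14:23:12 2017\t1496759592\t2\t38\t40\t36\n"
    "Tue Jun 06 14:23:12 2017\t1496759592\t3\t45\t48\t42\n"
)

def summarize(data):
    """Return (min, mean, max) of ttime, the total request time in ms."""
    rows = list(csv.DictReader(io.StringIO(data), delimiter="\t"))
    ttimes = [int(row["ttime"]) for row in rows]
    return min(ttimes), sum(ttimes) / len(ttimes), max(ttimes)

print(summarize(sample))  # -> (40, 44.0, 48)
```

To use this on a real run, read the file with `open("out.data")` instead of the synthetic string.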

Distributed load testing using Google Kubernetes Engine

This tutorial shows how to deploy a distributed load testing framework that uses multiple containers to create traffic for a simple REST-based API. It load-tests a web application deployed to App Engine that exposes REST-style endpoints to respond to incoming HTTP POST requests.

Objectives

  • Define environment variables to control deployment configuration.
  • Create a GKE cluster.
  • Perform load testing.
  • Optionally scale up the number of users or extend the pattern to other use cases.

Architecture

This architecture involves two main components:

  1. The Locust Docker container image.
  2. The container orchestration and management mechanism.

The Locust Docker container image contains the Locust software. The Dockerfile, which you get when you clone the GitHub repository that accompanies this tutorial, uses a base Python image and includes scripts to start the Locust service and execute the tasks.

GKE provides container orchestration and management. With GKE, you can specify the number of container nodes that provide the foundation for your load testing framework. You can also organize your load testing workers into Pods, and specify how many Pods you want GKE to keep running.

To deploy the load testing tasks, you do the following:

  1. Deploy a load testing master.
  2. Deploy a group of load testing workers. With these load testing workers, you can create a substantial amount of traffic for testing purposes.

The master Pod serves the web interface used to operate and monitor load testing. The worker Pods generate the REST request traffic for the application undergoing test, and send metrics to the master.

(Figure: dl-architecture, showing the Locust master and worker Pods on GKE driving traffic at the application under test)

About the load testing master

The Locust master is the entry point for executing the load testing tasks. The Locust master configuration specifies several elements, including the default ports used by the container:

  • 8089 for the web interface
  • 5557 and 5558 for communicating with workers

This information is later used to configure the Locust workers.

You deploy a Service to ensure that the necessary ports are accessible to other Pods within the cluster through hostname:port. These ports are also referenceable through a descriptive port name.

This Service allows the Locust workers to easily discover and reliably communicate with the master, even if the master fails and is replaced with a new Pod by the Deployment.

A second Service is deployed with the annotation required for Internal TCP/UDP Load Balancing, which makes the Locust web application Service accessible to clients outside your cluster that use the same VPC network and are located in the same Google Cloud region as your cluster.

After you deploy the Locust master, you can open the web interface using the private IP address provisioned by Internal TCP/UDP Load Balancing. After you deploy the Locust workers, you can start the simulation and look at aggregate statistics through the Locust web interface.


Initialize common variables

Set the environment variables that require customization

export GKE_CLUSTER="centralised-search"
export AR_REPO="distributed-load-testing-using-kubernetes"

export REGION="asia-south1"
export ZONE="asia-south1-a"
export SAMPLE_APP_LOCATION="asia-south1"
export GKE_NODE_TYPE=e2-standard-4
export GKE_SCOPE="https://www.googleapis.com/auth/cloud-platform"
export PROJECT=$(gcloud config get-value project)
export SAMPLE_APP_TARGET=${PROJECT}.appspot.com

GKE Cluster

Connect to the GKE cluster:

gcloud container clusters get-credentials ${GKE_CLUSTER} \
   --region ${REGION} \
   --project ${PROJECT}

Set up the environment

Clone the sample repository from GitHub:

git clone https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes

Change your working directory to the cloned repository:

cd distributed-load-testing-using-kubernetes

Build the container image

Create an Artifact Registry repository:

gcloud artifacts repositories create ${AR_REPO} \
    --repository-format=docker  \
    --location=${REGION} \
    --description="Distributed load testing with GKE and Locust"

Build the container image and store it in your Artifact Registry repository:

export LOCUST_IMAGE_NAME=locust-tasks
export LOCUST_IMAGE_TAG=latest
gcloud builds submit \
    --tag ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO}/${LOCUST_IMAGE_NAME}:${LOCUST_IMAGE_TAG} \
    docker-image

The accompanying Locust Docker image embeds a test task that calls the /login and /metrics endpoints in the sample application.

docker-image/locust-tasks/tasks.py

import uuid
from datetime import datetime

from locust import FastHttpUser, TaskSet, task


class MetricsTaskSet(TaskSet):
    _deviceid = None

    def on_start(self):
        self._deviceid = str(uuid.uuid4())

    @task(1)
    def login(self):
        self.client.post(
            '/login', {"deviceid": self._deviceid})

    @task(999)
    def post_metrics(self):
        self.client.post(
            "/metrics", {"deviceid": self._deviceid, "timestamp": datetime.now()})


class MetricsLocust(FastHttpUser):
    tasks = {MetricsTaskSet}
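The @task(1) and @task(999) weights above mean each simulated user issues roughly one /login request per 999 /metrics requests. Locust's weighted task selection can be sketched with random.choices; this is a simplification of its actual scheduler, using hypothetical task names standing in for the two task methods:

```python
import random
from collections import Counter

# Hypothetical task names standing in for login and post_metrics.
tasks = ["login", "post_metrics"]
weights = [1, 999]

random.seed(0)  # deterministic for illustration
picks = Counter(random.choices(tasks, weights=weights, k=100_000))
print(picks["post_metrics"] / picks["login"])  # ratio of metrics to login picks, close to 999
```

Skewing the weights this heavily keeps the login traffic realistic: a device registers once, then reports metrics continuously.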

Verify that the Docker image is in your Artifact Registry repository:

gcloud artifacts docker images list ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO} | \
    grep ${LOCUST_IMAGE_NAME}

Deploy the Locust master and worker Pods

Substitute the environment variable values for target host, project, and image parameters in the locust-master-controller.yaml and locust-worker-controller.yaml files, and create the Locust master and worker Deployments along with the master Service:

envsubst < kubernetes-config/locust-master-controller.yaml.tpl | kubectl apply -f -
envsubst < kubernetes-config/locust-worker-controller.yaml.tpl | kubectl apply -f -
envsubst < kubernetes-config/locust-master-service.yaml.tpl | kubectl apply -f -

Verify the Locust Deployments:

kubectl get pods -o wide

Verify the Services:

kubectl get services

Run a watch loop until an Internal TCP/UDP Load Balancing private IP address (reported by GKE in the EXTERNAL-IP column) is provisioned for the Locust master web application Service:

kubectl get svc locust-master-web --watch

Press Ctrl+C to exit the watch loop once an EXTERNAL-IP address is provisioned.

Connect to Locust web front end

Obtain the internal load balancer IP address of the web host service:

export INTERNAL_LB_IP=$(kubectl get svc locust-master-web  \
                               -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && \
                               echo $INTERNAL_LB_IP

Depending on your network configuration, there are two ways that you can connect to the Locust web application through the provisioned IP address:

  1. Network routing. If your network is configured to allow routing from your workstation to your project VPC network, you can directly access the Internal TCP/UDP Load Balancing IP address from your workstation.

  2. Proxy & SSH tunnel. If there is not a network route between your workstation and your VPC network, you can route traffic to the Internal TCP/UDP Load Balancing IP address by creating a Compute Engine instance with an nginx proxy and an SSH tunnel between your workstation and the instance.
