If you want to run some setup code as part of your test, it is often enough to put it at the module level of your locustfile, but sometimes you need to do things at particular times in the run. For this need, Locust provides event hooks.
test_start and test_stop
If you need to run some code at the start or stop of a load test, you should use the test_start and test_stop events. You can set up listeners for these events at the module level of your locustfile:
from locust import events

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    print("A new test is starting")

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    print("A new test is ending")
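The add_listener decorators above follow a simple registration pattern: a hook stores callbacks and invokes them all when the event fires. A minimal sketch of that pattern (EventHook here is a hypothetical stand-in for illustration, not Locust's actual class):

```python
# Illustrative sketch of the listener-registration pattern behind Locust's
# event hooks; EventHook is a stand-in, not Locust's implementation.
class EventHook:
    def __init__(self):
        self._listeners = []

    def add_listener(self, func):
        # Used as a decorator: register func and return it unchanged.
        self._listeners.append(func)
        return func

    def fire(self, **kwargs):
        # Invoke every registered listener with the same keyword arguments.
        for listener in self._listeners:
            listener(**kwargs)

test_start = EventHook()
calls = []

@test_start.add_listener
def on_test_start(environment, **kwargs):
    calls.append(environment)

test_start.fire(environment="https://example.com")
print(calls)  # ['https://example.com']
```

Because add_listener returns the function unchanged, the decorated function can still be called directly, and multiple listeners can be registered on the same hook.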
-n is the number of requests to perform for the benchmarking session.
-c is the concurrency: the number of requests to perform at a time.
Plotting the result
The -g option in the following command is followed by the file name (here out.data) in which the ab output data will be saved.
ab -n 100 -c 10 -g out.data https://www.apache.org/
Plotting
gnuplot
Since we are working in a terminal and assuming that graphics are not available, we can choose the dumb terminal type, which renders the plot in ASCII directly in the terminal.
gnuplot> set terminal dumb
gnuplot> plot "out.data" using 9 w l
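The out.data file is tab-separated with a header row (starttime, seconds, ctime, dtime, ttime, wait); because the starttime timestamp contains spaces, the total-time column ttime lands in gnuplot's whitespace-split field 9. As an alternative to gnuplot, a small hedged Python sketch can summarize the same column (the inline sample string stands in for the contents of out.data):

```python
# Sketch: summarize the ttime column of ab's -g output.
# The "sample" string is a stand-in for reading out.data from disk.
import csv
import io
import statistics

sample = (
    "starttime\tseconds\tctime\tdtime\tttime\twait\n"
    "Tue Jul 01 12:00:00 2014\t1404216000\t10\t120\t130\t5\n"
    "Tue Jul 01 12:00:01 2014\t1404216001\t12\t150\t162\t6\n"
)

ttimes = []
reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
for row in reader:
    ttimes.append(int(row["ttime"]))  # total request time in ms

print("mean total time (ms):", statistics.mean(ttimes))
print("max total time (ms):", max(ttimes))
```

To use it on a real run, replace the sample string with `open("out.data")` as the DictReader source.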
Distributed load testing using Google Kubernetes Engine #
This tutorial shows how to deploy a distributed load testing framework that uses multiple containers to create traffic for a simple REST-based API. It load-tests a web application deployed to App Engine that exposes REST-style endpoints to respond to incoming HTTP POST requests.
Objectives
Define environment variables to control deployment configuration.
Create a GKE cluster.
Perform load testing.
Optionally scale up the number of users or extend the pattern to other use cases.
Architecture
This architecture involves two main components:
The Locust Docker container image.
The container orchestration and management mechanism.
The Locust Docker container image contains the Locust software. The Dockerfile, which you get when you clone the GitHub repository that accompanies this tutorial, uses a base Python image and includes scripts to start the Locust service and execute the tasks.
GKE provides container orchestration and management.
With GKE, you can specify the number of container nodes that provide the foundation for your load testing framework. You can also organize your load testing workers into Pods, and specify how many Pods you want GKE to keep running.
To deploy the load testing tasks, you do the following:
Deploy a load testing master.
Deploy a group of load testing workers. With these load testing workers, you can create a substantial amount of traffic for testing purposes.
The master Pod serves the web interface used to operate and monitor load testing. The worker Pods generate the REST request traffic for the application undergoing test, and send metrics to the master.
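As a sketch, the worker Deployment might declare its Pod count like this (names, labels, and the image placeholder are illustrative; the tutorial's locust-worker-controller.yaml is the authoritative manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-worker
spec:
  replicas: 5            # number of worker Pods GKE keeps running
  selector:
    matchLabels:
      app: locust-worker
  template:
    metadata:
      labels:
        app: locust-worker
    spec:
      containers:
        - name: locust-worker
          image: LOCUST_IMAGE   # placeholder; substituted per the tutorial
```

Scaling the amount of generated traffic is then a matter of changing replicas (or running kubectl scale) rather than provisioning new machines.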
About the load testing master
The Locust master is the entry point for executing the load testing tasks. The Locust master configuration specifies several elements, including the default ports used by the container:
8089 for the web interface
5557 and 5558 for communicating with workers
This information is later used to configure the Locust workers.
You deploy a Service to ensure that the necessary ports are accessible to other Pods within the cluster through hostname:port. These ports are also referenceable through a descriptive port name.
This Service allows the Locust workers to easily discover and reliably communicate with the master, even if the master fails and is replaced with a new Pod by the Deployment.
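As a sketch, such a Service might look like the following (names and labels are illustrative; the tutorial's repository contains the authoritative manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: locust-master
spec:
  selector:
    app: locust-master
  ports:
    - name: loc-master-web   # descriptive port names, referenceable by workers
      port: 8089
      targetPort: 8089
    - name: loc-master-p1
      port: 5557
      targetPort: 5557
    - name: loc-master-p2
      port: 5558
      targetPort: 5558
```

Workers can then reach the master at a stable hostname:port regardless of which Pod currently backs the Service.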
A second Service is deployed with the annotation needed to create an internal TCP/UDP load balancer. This load balancer makes the Locust web application Service accessible to clients outside of your cluster that use the same VPC network and are located in the same Google Cloud region as your cluster.
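A sketch of that second Service (illustrative names; on older GKE versions the annotation key is cloud.google.com/load-balancer-type instead):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: locust-master-web
  annotations:
    # Asks GKE for an internal (VPC-only) TCP/UDP load balancer.
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: locust-master
  ports:
    - name: loc-master-web
      port: 8089
      targetPort: 8089
```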
After you deploy the Locust master, you can open the web interface using the private IP address provisioned by Internal TCP/UDP Load Balancing. After you deploy the Locust workers, you can start the simulation and look at aggregate statistics through the Locust web interface.
Verify that the Docker image is in your Artifact Registry repository:
gcloud artifacts docker images list ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO} | \
grep ${LOCUST_IMAGE_NAME}
Deploy the Locust master and worker Pods
Substitute the environment variable values for target host, project, and image parameters in the locust-master-controller.yaml and locust-worker-controller.yaml files, and create the Locust master and worker Deployments:
Run a watch loop until an internal TCP/UDP load balancing private IP address (shown by GKE as the external IP address) is provisioned for the Locust master web application Service:
kubectl get svc locust-master-web --watch
Press Ctrl+C to exit the watch loop once an EXTERNAL-IP address is provisioned.
Connect to Locust web front end
Obtain the internal load balancer IP address of the web host service:
Depending on your network configuration, there are two ways that you can connect to the Locust web application through the provisioned IP address:
Network routing. If your network is configured to allow routing from your workstation to your project VPC network, you can directly access the Internal TCP/UDP Load Balancing IP address from your workstation.
Proxy & SSH tunnel. If there is no network route between your workstation and your VPC network, you can route traffic to the Internal TCP/UDP Load Balancing IP address by creating a Compute Engine instance with an nginx proxy and an SSH tunnel between your workstation and the instance.