@Lewiscowles1986
Last active March 26, 2019 18:47
Replay against system to test stability

Utility to replay HTTP requests on a service

This utility replays a large number of HTTP requests (10,000, serially) against a service — enough to express the success rate meaningfully to two decimal places.
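The two-decimal-places claim follows from the run size: with 10,000 requests, a single response shifts the success percentage by exactly 0.01. A quick check of that arithmetic:

```shell
# One response out of 10,000 is 1/10000 * 100 percent of the total.
awk 'BEGIN { printf "each request = %.2f%% of the total\n", 100 / 10000 }'
```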

Usage

replay-http {url} [{headers_path} {payload_path} {http_method}]

Parameters

url

Expects a URL with Scheme, such as https://www.google.co.uk

headers_path

Expects a path to a file containing header entries. The file's contents are interpolated into the curl invocation directly, without processing.

Example

/path/to/headers.fragment

Example (file-contents)
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \

payload_path

Expects a path to a raw request body payload.

Example

/path/to/body.extension

Example (file-contents)

{"data":{"date":"2019-03-26T13:51:00"}}
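Before a 10,000-request run it can be worth confirming the body file actually parses as JSON. One way (the `/tmp` path is illustrative) is Python's `json.tool` module, which exits non-zero on invalid input:

```shell
# Write the example body to a temporary file (path is illustrative),
# then ask json.tool to parse it; a parse failure exits non-zero.
printf '%s' '{"data":{"date":"2019-03-26T13:51:00"}}' > /tmp/body.json
python3 -m json.tool /tmp/body.json
```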

http_method

Expects a string to pass as an HTTP method. Examples:

  • GET
  • POST
  • PUT
  • HEAD
  • PATCH
  • DELETE

Output format

I'm using this to pipe output to a CSV file which I load in LibreOffice.

By default it reports the 200 and 400 responses; I am using it to test a backend-for-frontend.

By copying and adapting the print statements in the awk END block, you could easily track more than the current:

  • total number of requests
  • number of 200 responses
  • number of 400 responses
  • percentage of successful responses (higher is better)
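The arithmetic behind those summary rows can be sanity-checked on a small fake stream of status codes. This sketch mirrors the script's END block, but computes the numbers directly instead of emitting spreadsheet formulas:

```shell
# Summarise a fake stream of status codes: count totals, 200s, and 400s,
# and compute the success percentage as (200s / (total / 100)).
printf '200\n200\n400\n200\n' | awk '
    { n++; if ($1 == 200) ok++; if ($1 == 400) bad++ }
    END { printf "total,%d\n200,%d\n400,%d\nSuccess,%.2f\n", n, ok, bad, ok / (n / 100) }'
```

With three 200s out of four requests this prints a success rate of 75.00.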

The idea is to get a sense of distribution. Given the same idempotent payload with no unique constraints, a top-class web service will return at most one non-2XX response across the run.

This script does not attempt to create unique data per request. To instrument that, reading the body would need to move inside the loop, calling a script per iteration for each area that needs unique data. That script would need to track the values it has already provided, to ensure it only outputs data it has not output before.
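A minimal sketch of such a generator (the function name, state-file path, and `id` field are hypothetical, not part of the script): it keeps a counter in a state file so each call emits a value it has not emitted before.

```shell
# Hypothetical per-request body generator: a counter persisted in a state
# file guarantees each call produces a payload value not output before.
next_body() {
    local state="${1:-/tmp/replay-seq}"
    local seq
    seq=$(cat "$state" 2>/dev/null)   # empty on first call
    seq=$((seq + 1))
    echo "$seq" > "$state"
    printf '{"data":{"id":%d,"date":"2019-03-26T13:51:00"}}\n' "$seq"
}
```

Inside the replay loop, the payload would then be rebuilt per iteration, e.g. `PAYLOAD="-d $(next_body /tmp/replay-seq)"`.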

Disclaimer

This is not a replacement for tests. It's a polyfill for lacking instrumentation. It's only designed to be better than nothing.

#!/bin/bash
if [[ $# -lt 1 ]]; then
    echo "You need to pass a url!"
    echo "Usage:"
    echo "$0 {url} [{headers_path} {payload_path} {http_method}]"
    exit 1
fi
URL="$1"
PAYLOAD=''
HEADERS=''
METHOD='GET'
# SET REQUEST HEADERS
if [[ -n "$2" ]]; then
    HEADERS="$(cat "$2")"
fi
# SET REQUEST BODY (implies POST unless a method is given)
if [[ -n "$3" ]]; then
    METHOD='POST'
    PAYLOAD="-d $(cat "$3")"
fi
# SET REQUEST METHOD
if [[ -n "$4" ]]; then
    METHOD="$4"
fi
for i in {1..10000}
do
    # HEADERS and PAYLOAD are deliberately unquoted so they word-split
    # into separate curl arguments
    curl -s -o /dev/null -w '%{http_code}\n' \
        -X "$METHOD" \
        -H 'Pragma: no-cache' \
        $HEADERS $PAYLOAD "$URL"
done | awk '{ sum += $1; n++; print ","$1; } END { if(n > 0) print "total,=count(B1:B"n")"; print "200,\"=countif(B1:B"n", \"=200\")\""; print "400,\"=countif(B1:B"n", \"=400\")\""; print "Success,=B"n+1"/("n"/100)"; }'