This utility serially replays enough HTTP requests against a service to report a success rate that is meaningful to two decimal places.
replay-http {url} [{headers_path} {payload_path} {http_method}]
Expects a URL with a scheme, such as https://www.google.co.uk
Expects a path to a file containing header entries. The contents are interpolated into the curl command directly (no newlines), so keep curl's -H syntax intact.
/path/to/headers.fragment
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
Expects a path to a raw request body payload.
/path/to/body.extension
{"data":{"date":"2019-03-26T13:51:00"}}
Expects a string to pass as the HTTP method. Examples:
GET
POST
PUT
HEAD
PATCH
DELETE
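The replay itself can be sketched as a plain serial loop emitting one CSV row per request. This is an assumption about the script's structure, not its actual code; `do_request` is a stub standing in for the real curl call:

```shell
# Minimal serial replay loop. do_request stands in for something like:
#   curl -s -o /dev/null -w '%{http_code}' $headers -X "$method" --data @"$payload_path" "$url"
do_request() {
  echo 200   # stub: the real call prints the HTTP status code
}

total=5      # the real run would use a statistically significant count
rows=""
i=0
while [ "$i" -lt "$total" ]; do
  i=$((i + 1))
  status=$(do_request)
  rows="$rows$i,$status
"
done
printf '%s' "$rows"   # one CSV row per request, e.g. "1,200"
```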
I'm using this to pipe output to a CSV file, which I then load in LibreOffice.
By default it reports all 200 and 400 responses; I'm using it to test a backend-for-frontend.
By copying and pasting the END print statements, you could easily add more metrics than the current ones:
- total number of requests
- number of 200 responses
- number of 400 responses
- percentage of successful requests (higher is better)
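Assuming the CSV carries the HTTP status code in its second comma-separated field (the column layout here is an assumption), the figures above can be computed with a short awk program over the results file:

```shell
# Summarise a results CSV with awk. Adjust $2 if your columns differ.
cat > results.csv <<'EOF'
1,200
2,200
3,400
4,200
EOF

summary=$(awk -F, '
  { total++ }
  $2 == 200 { ok++ }
  $2 == 400 { bad++ }
  END {
    printf "total=%d\n", total
    printf "200s=%d\n", ok
    printf "400s=%d\n", bad
    printf "success=%.2f%%\n", (ok / total) * 100
  }
' results.csv)
echo "$summary"
rm -f results.csv
```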
The idea is to get a sense of the distribution. Given the same idempotent payload with no unique constraints, a top-class web service should return at most one non-2XX response.
This script does not attempt to create unique data, or data per request. To instrument that, the payload path would need to move into the loop, with a script executed per request for each area that needs unique data. That script would need to track the state of previously provided values to ensure it only outputs data that has not already been output.
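The per-request generator described above might look like the following sketch. Everything here is hypothetical (function name, counter file, payload shape); it only illustrates tracking state so no two calls emit the same value:

```shell
# Hypothetical payload generator: keeps a counter in a state file so
# each call emits a value it has never emitted before.
next_payload() {
  counter_file=${1:-.replay-counter}
  n=$(cat "$counter_file" 2>/dev/null || echo 0)
  n=$((n + 1))
  echo "$n" > "$counter_file"
  printf '{"data":{"id":%d,"date":"2019-03-26T13:51:00"}}\n' "$n"
}

# Inside the replay loop you would call this once per request:
p1=$(next_payload)
p2=$(next_payload)
echo "$p1"
echo "$p2"
rm -f .replay-counter
```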
This is not a replacement for tests. It's a polyfill for missing instrumentation, designed only to be better than nothing.