VM latency
/* vmlatency.c
 *
 * Measure the max latency of a running process that does not result from
 * syscalls. Basically this software should provide a hint about how much
 * time the kernel leaves the process without a chance to run.
 *
 * Copyright (C) 2014 Salvatore Sanfilippo.
 * This software is released under the BSD two-clause license.
 *
 * compile with: cc vmlatency.c -o vmlatency
 */
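/* Run it with the test duration in seconds as its only argument, e.g.
 * "./vmlatency 100" for a 100 second test (the results quoted in the
 * comments below come from 100 second runs). */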
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <stdint.h>
/* Return the UNIX time in microseconds */
long long ustime(void) {
    struct timeval tv;
    long long ust;

    gettimeofday(&tv, NULL);
    ust = ((long long)tv.tv_sec)*1000000;
    ust += tv.tv_usec;
    return ust;
}
/* This is just some computation the compiler can't optimize out.
 * Should run in less than 100-200 microseconds even using very
 * slow hardware. */
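/* Note: the loop below is essentially an RC4-style swap-and-lookup over a
 * 256 byte permutation; every iteration depends on the previous one, which
 * is what keeps the compiler from optimizing the work away. */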
uint64_t compute_something_fast(void) {
    uint8_t s[256], i, j, t;
    int count = 1000, k;
    uint64_t output = 0;

    for (k = 0; k < 256; k++) s[k] = k;
    i = 0;
    j = 0;
    while(count--) {
        i++;
        j = j + s[i];
        t = s[i];
        s[i] = s[j];
        s[j] = t;
        output += s[(s[i]+s[j])&255];
    }
    return output;
}
int main(int argc, char **argv) {
    long long test_end, run_time, max_latency = 0, runs = 0;

    if (argc != 2) {
        fprintf(stderr,"Usage: %s <test-seconds>\n", argv[0]);
        exit(1);
    }
    run_time = atoi(argv[1]) * 1000000;
    test_end = ustime() + run_time;
    while(1) {
        long long start, end, latency;

        start = ustime();
        compute_something_fast();
        end = ustime();
        latency = end-start;
        runs++;
        if (latency <= 0) continue;

        /* Reporting */
        if (latency > max_latency) {
            max_latency = latency;
            printf("Max latency so far: %lld microseconds.\n", max_latency);
        }
        if (end > test_end) {
            printf("\n%lld total runs (avg %lld microseconds per run).\n",
                runs, run_time/runs);
            printf("Worst run took %.02fx the average.\n",
                (double) max_latency / (run_time/runs));
            exit(0);
        }
    }
}
antirez commented Feb 10, 2014

Example results.

Linode VM: 10 milliseconds max latency (100 second run).

Old desktop server (Dell): 0.7 milliseconds max latency (100 second run, while installing stuff on a slow disk).

e-oz commented Feb 10, 2014

Interesting. Can you (or somebody else) please post results for Google Cloud, for those of us who don't know C or how to compile it? :)
