can query w/ grafana
The workload produces artifacts; the total value is ops / duration, with all operation types and all durations summed (a small sketch of this calculation follows the workload list below). Workloads:
- ycsb
- kv
- ledger
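A minimal sketch of that calculation; the per-operation-type numbers below are made up and illustrative only:

# illustrative only: total value = summed ops / summed duration, per the note above
per_op_type = {"read": (120000, 60.0), "write": (30000, 60.0)}  # (ops, duration in s), made-up values
total_ops = sum(ops for ops, _ in per_op_type.values())
total_duration = sum(dur for _, dur in per_op_type.values())
total_value = total_ops / total_duration
print(total_value)  # ops per second across all operation types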
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": {
          "type": "grafana",
          "uid": "-- Grafana --"
        },
        "enable": true,
store rebalancer
phase 1
  while qps > max threshold and a target hot range exists
    transfer lease to replica with lowest qps for hot range
phase 2
  while qps > max threshold
    get new replica set, using the allocator add target and remove target N times
    call relocate range and transfer the lease to the store with lowest qps
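A minimal Python sketch of the two-phase loop above; the store/range/allocator objects and method names are hypothetical stand-ins, not CockroachDB's actual types:

# hypothetical sketch of the two-phase store rebalancer described above;
# Store/Range/allocator names are illustrative, not real CockroachDB APIs
def rebalance_store(store, allocator, max_qps, n_attempts=3):
    # phase 1: shed leases while the store is over the QPS threshold
    while store.qps > max_qps:
        hot_range = store.hottest_leaseholder_range()
        if hot_range is None:
            break
        target = min(hot_range.replicas, key=lambda r: r.store_qps)
        hot_range.transfer_lease(target)
    # phase 2: if lease transfers were not enough, move replicas
    while store.qps > max_qps:
        hot_range = store.hottest_leaseholder_range()
        if hot_range is None:
            break
        replica_set = list(hot_range.replicas)
        for _ in range(n_attempts):
            add = allocator.add_target(replica_set)
            remove = allocator.remove_target(replica_set)
            replica_set = [r for r in replica_set if r != remove] + [add]
        hot_range.relocate(replica_set)
        hot_range.transfer_lease(min(replica_set, key=lambda r: r.store_qps))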
nightly benchmark testing sketch
Currently we have roachperf, which is not well suited to arbitrary collection: the index is stored in a JavaScript file, with per-run details in files on disk. We should instead have a simple library that allows exporting to a better visualization platform (a rough export sketch follows the list below).
- on ad hoc roachprod clusters, add --scrape to export prom stats to influxdb
- on ad hoc roachprod clusters, add --scrape-ttl to set the scrape export TTL (important for short-lived roachprod clusters)
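A rough sketch of what the export side of such a library could look like, assuming an InfluxDB 1.x line-protocol write endpoint; the host, database, and measurement names are made up:

# hypothetical export helper; host/db/measurement names are illustrative
import time
import requests

def export_result(host, db, workload, ops, duration_secs):
    # ops / duration is the "total value" described at the top of these notes
    value = ops / duration_secs
    line = "benchmark,workload=%s ops_per_sec=%f %d" % (workload, value, time.time_ns())
    resp = requests.post("http://%s:8086/write" % host,
                         params={"db": db, "precision": "ns"},
                         data=line)
    resp.raise_for_status()

# example: export_result("influx.example.com", "roachperf", "kv", 123456, 600)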
follower-to-leaseholder request statistics
problem
The leaseholder replica currently has no information on the number of reads served by a follower replica, aside from the follower read requests' contribution to the follower's store QPS. The leaseholder is the decision maker for replica placement for the range.
The writes that the leaseholder observes should be symmetric to the writes the followers apply, since every write is replicated to each replica; follower reads have no such symmetry.
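A toy illustration of that asymmetry; the classes and counters below are made up, not CockroachDB code:

# made-up illustration: writes show up in every replica's stats, while
# follower reads only show up on the store that served them
class ReplicaStats:
    def __init__(self):
        self.write_qps = 0.0
        self.read_qps = 0.0

def record_write(replicas, qps):
    # replication applies the write on every replica, leaseholder included
    for r in replicas:
        r.write_qps += qps

def record_follower_read(serving_replica, qps):
    # only the serving follower's stats change; the leaseholder sees nothing
    serving_replica.read_qps += qps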
0 runs so far, 0 failures, over 5s
0 runs so far, 0 failures, over 10s
16 runs so far, 0 failures, over 15s
16 runs so far, 0 failures, over 20s
32 runs so far, 0 failures, over 25s
32 runs so far, 0 failures, over 30s
import random

def check(candidates, watermark):
    # split candidates into targets and excluded: anything above an absolute
    # floor of 20 and above the mean by the watermark factor is excluded
    mean = sum(candidates) / len(candidates)
    targets, excluded = [], []
    for c in candidates:
        if c > 20 and c > mean * watermark:
            excluded.append(c)
            continue
        targets.append(c)
    return targets, excluded

# example invocation with arbitrary candidates and watermark
print(check([random.randint(0, 100) for _ in range(10)], 1.2))
import yaml
import sys
FILENAME = "prometheus.yml"

def format_file(filename, scrape_endpoints):
    with open(filename) as f:
        contents = yaml.load(f, Loader=yaml.FullLoader)
    for scrape_config in contents["scrape_configs"]:
        # assumed completion (the original snippet cuts off here): point each
        # scrape job at the supplied endpoints, then write the file back out
        scrape_config["static_configs"] = [{"targets": scrape_endpoints}]
    with open(filename, "w") as f:
        yaml.dump(contents, f)

if __name__ == "__main__":
    format_file(FILENAME, sys.argv[1:])
# This is the main configuration file for Spigot.
# As you can see, there's tons to configure. Some options may impact gameplay, so use
# with caution, and make sure you know what each option does before configuring.
# For a reference for any variable inside this file, check out the Spigot wiki at
# http://www.spigotmc.org/wiki/spigot-configuration/
#
# If you need help with the configuration or have any questions related to Spigot,
# join us at the IRC or drop by our forums and leave a post.
#
# IRC: #spigot @ irc.spi.gt ( http://www.spigotmc.org/pages/irc/ )