@ppanyukov
Last active October 7, 2019 09:58
Attempt at generating raw Prometheus TSDB blocks.
---
# Sample configuration for metric generation.
# This is used to create a random but repeatable
# set of metrics to populate Prometheus TSDB
# for further benchmarks etc.
#
# WORK IN PROGRESS
# TODO:
#   - where do we configure the sample interval?
#   - what about labels?
#   - basically we want:
#       - some templates for metrics and values (counter/gauge)
#       - emulation of running on a k8s cluster, so that we
#         generate metrics for, say, a 1000-node cluster running 100 services
#   - have profiles for things like:
#       - high cardinality
#       - just loads of data
# metrics is the list of metrics to generate.
metrics:
  - # name is a required parameter, must be a string.
    name: foo_sample_counter
    # type can be either "counter" or "gauge".
    # counters:
    #   - always increase until reset to min value, e.g. http_requests_total
    #   - go up from minValue to maxValue, then reset back to minValue
    # gauges:
    #   - go up and down, e.g. http_requests_active
    #   - fluctuate between minValue and maxValue
    #
    type: counter
    # minValue is the minimum value the metric can have.
    minValue: 0.0
    # maxValue is the maximum value the metric can have.
    # For counters, if the metric value becomes greater than maxValue,
    # it is reset to the initial minValue.
    maxValue: 10000.0
    # changeBaseValue is the base value by which the counter or gauge changes.
    # changeRandSeed influences the actual random next value.
    # For counters, the actual change is a random value in [0, +changeBaseValue].
    # For gauges, the actual change is a random value in [-changeBaseValue, +changeBaseValue].
    #
    changeBaseValue: 10.0
    changeRandSeed: 157
package foo

// Import paths assume the late-2019 prometheus/prometheus layout
// (after tsdb was merged into the main repo).
import (
	"context"
	"time"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
	"github.com/pkg/errors"
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/pkg/timestamp"
	"github.com/prometheus/prometheus/tsdb"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
)

func generateTSDB2(logger log.Logger, dir string) error {
	// Generate 4h worth of metrics.
	maxt := time.Now()
	mint := maxt.Add(-4 * time.Hour)
	step := 15 * time.Second

	// Just one metric, just with a name.
	series := labels.Labels{
		{Name: "__name__", Value: "poo_random_count"},
	}

	// The code from here on is some voodoo.
	//
	// The idea is:
	// Step 1:
	//   - create a new head and start appending samples to it.
	//   - is it in memory? looks like it.
	// Step 2:
	//   - use the compactor to write the head to disk.

	// Step 1.
	// TODO(ppanyukov): what is the chunkRange arg?
	head, err := tsdb.NewHead(nil, logger, nil, 1)
	if err != nil {
		return errors.Wrap(err, "NewHead")
	}
	defer func() {
		if err := head.Close(); err != nil {
			panic(err)
		}
	}()

	app := head.Appender()
	count := 0
	for t := mint; !t.After(maxt); t = t.Add(step) {
		if _, err := app.Add(series, timestamp.FromTime(t), float64(t.Unix())); err != nil {
			return errors.Wrap(err, "appender.Add")
		}
		count++
	}
	level.Info(logger).Log("metric_count", count)

	if err := app.Commit(); err != nil {
		return errors.Wrap(err, "appender.Commit")
	}

	// Step 2. Flush the head to disk.
	//
	// copypasta from: github.com/prometheus/prometheus/tsdb/db.go:322
	//
	// Add +1 millisecond to block maxt because block intervals are half-open: [b.MinTime, b.MaxTime).
	// Because of this, block intervals are always +1 larger than the total samples they include.
	{
		intMint := timestamp.FromTime(mint)
		intMaxt := timestamp.FromTime(maxt)
		compactor, err := tsdb.NewLeveledCompactor(context.Background(), nil, logger, tsdb.DefaultOptions.BlockRanges, chunkenc.NewPool())
		if err != nil {
			return errors.Wrap(err, "create leveled compactor")
		}
		if _, err := compactor.Write(dir, head, intMint, intMaxt+1, nil); err != nil {
			return errors.Wrap(err, "compactor.Write")
		}
	}

	return nil
}