M3 FOSDEM demo

M3 Demo

This repository contains a docker-compose file which can be used to set up a demo of the M3 stack. It runs the following containers:

  1. M3DB
  2. M3Coordinator
  3. Prometheus
  4. Grafana

Setup

Container Setup and Database Initialization

Start all the containers. Note that if you'd like to reuse the underlying storage between runs (i.e. so that data and the topology / namespaces are retained), remove the --renew-anon-volumes argument from the command below.

docker-compose up --renew-anon-volumes
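
The coordinator can take a few seconds before it starts accepting requests. As a minimal readiness check (assuming the API port 7201 is published on localhost as in the compose file below), poll it until it answers; any HTTP response means the process is up, even though the placement and namespace endpoints will report nothing until they are initialized.

until curl -s -o /dev/null localhost:7201/api/v1/namespace; do echo "waiting for m3coordinator01..."; sleep 2; done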

Create the initial placement.

curl -X POST localhost:7201/api/v1/placement/init -d '{
    "num_shards": 64,
    "replication_factor": 1,
    "instances": [
        {
            "id": "m3db_seed",
            "isolation_group": "embedded",
            "zone": "embedded",
            "weight": 1024,
            "endpoint": "m3db_seed:9000",
            "hostname": "m3db_seed",
            "port": 9000
        }
    ]
}' | jq .
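
If the call succeeds, you can fetch the placement back to confirm it was created:

curl localhost:7201/api/v1/placement | jq .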

Create the unaggregated namespace to store the raw data.

curl -X POST localhost:7201/api/v1/namespace -d '{
  "name": "raw_metrics",
  "options": {
    "bootstrapEnabled": true,
    "flushEnabled": true,
    "writesToCommitLog": true,
    "cleanupEnabled": true,
    "snapshotEnabled": true,
    "repairEnabled": false,
    "retentionOptions": {
      "retentionPeriodDuration": "48h",
      "blockSizeDuration": "2h",
      "bufferFutureDuration": "5m",
      "bufferPastDuration": "5m",
      "blockDataExpiry": true,
      "blockDataExpiryAfterNotAccessPeriodDuration": "5m"
    },
    "indexOptions": {
      "enabled": true,
      "blockSizeDuration": "2h"
    }
  }
}' | jq .

Create the aggregated namespace to store the aggregated data.

curl -X POST localhost:7201/api/v1/namespace -d '{
  "name": "metrics_10s_168h",
  "options": {
    "bootstrapEnabled": true,
    "flushEnabled": true,
    "writesToCommitLog": true,
    "cleanupEnabled": true,
    "snapshotEnabled": true,
    "repairEnabled": false,
    "retentionOptions": {
      "retentionPeriodDuration": "168h",
      "blockSizeDuration": "24h",
      "bufferFutureDuration": "5m",
      "bufferPastDuration": "5m",
      "blockDataExpiry": true,
      "blockDataExpiryAfterNotAccessPeriodDuration": "5m"
    },
    "indexOptions": {
      "enabled": true,
      "blockSizeDuration": "24h"
    }
  }
}' | jq .
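
To confirm both namespaces were registered, list them back from the coordinator:

curl localhost:7201/api/v1/namespace | jq .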

Graphite

Navigate to http://localhost:3000 to open Grafana.

Add a new Graphite data source with the URL: http://m3coordinator01:7201/api/v1/graphite

Write data to the carbon ingestion port.

(export now=$(date +%s) && echo "stats.sum.bar 10 $now" | nc 0.0.0.0 7204)

(export now=$(date +%s) && echo "stats.mean.bar 10 $now" | nc 0.0.0.0 7204)
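
Each of the commands above writes a single data point. If you'd like a continuous series to look at in Grafana, a small helper loop like this (not part of the original demo) writes a random value for both metrics every ten seconds:

while true; do now=$(date +%s); echo "stats.sum.bar $((RANDOM % 100)) $now" | nc 0.0.0.0 7204; echo "stats.mean.bar $((RANDOM % 100)) $now" | nc 0.0.0.0 7204; sleep 10; done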

Create a new panel to visualize the results with the following query: transformNull(stats.*.bar)
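
You can also sanity-check the Graphite data outside Grafana. Assuming the coordinator serves the standard Graphite render API beneath the data source URL above (i.e. at /api/v1/graphite/render, accepting the usual target/from/format parameters), a query like this should return the recent points as JSON:

curl 'http://localhost:7201/api/v1/graphite/render?target=stats.*.bar&from=-10min&format=json' | jq .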

Prometheus

Add a new Prometheus data source with the URL: http://m3coordinator01:7201

Data is already being scraped by Prometheus and written to M3 via remote write, so add a new panel and visualize it with this sample query, which shows the rate at which M3DB is writing to its commit log: sum(rate(commitlog_writes_success{}[30s]))
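
If you want to check the same data without Grafana, the coordinator also exposes a Prometheus-compatible read API. Assuming the standard /api/v1/query_range endpoint, a sketch like this queries the last five minutes:

end=$(date +%s); start=$((end - 300)); curl -G 'http://localhost:7201/api/v1/query_range' --data-urlencode 'query=sum(rate(commitlog_writes_success[30s]))' --data-urlencode "start=${start}" --data-urlencode "end=${end}" --data-urlencode 'step=15' | jq .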

version: "3.5"
services:
m3db_seed:
networks:
- backend
image: quay.io/m3/m3dbnode:latest
volumes:
- "./m3dbnode.yml:/etc/m3dbnode/m3dbnode.yml"
environment:
- M3DB_HOST_ID=m3db_seed
m3coordinator01:
expose:
- "7201"
- "7204"
ports:
- "0.0.0.0:7201:7201"
- "0.0.0.0:7204:7204"
networks:
- backend
image: quay.io/m3/m3coordinator:latest
volumes:
- "./m3coordinator.yml:/etc/m3coordinator/m3coordinator.yml"
prometheus01:
networks:
- backend
image: prom/prometheus:latest
volumes:
- "./prometheus.yml/:/etc/prometheus/prometheus.yml"
grafana:
expose:
- "3000"
ports:
- "0.0.0.0:3000:3000"
networks:
- backend
image: grafana/grafana:latest
networks:
backend:

m3coordinator.yml

listenAddress:
  type: "config"
  value: "0.0.0.0:7201"

metrics:
  scope:
    prefix: "coordinator"
  prometheus:
    handlerPath: /metrics
    listenAddress: 0.0.0.0:7203 # until https://github.com/m3db/m3/issues/682 is resolved
  sanitization: prometheus
  samplingRate: 1.0
  extended: none

clusters:
  - namespaces:
      - namespace: raw_metrics
        type: unaggregated
        retention: 48h # matches the retention of the raw_metrics namespace created above
      - namespace: metrics_10s_168h
        type: aggregated
        retention: 168h
        resolution: 10s
    client:
      config:
        service:
          env: default_env
          zone: embedded
          service: m3db
          cacheDir: /var/lib/m3kv
          etcdClusters:
            - zone: embedded
              endpoints:
                - m3db_seed:2379
      writeConsistencyLevel: majority
      readConsistencyLevel: unstrict_majority
      writeTimeout: 10s
      fetchTimeout: 15s
      connectTimeout: 20s
      writeRetry:
        initialBackoff: 500ms
        backoffFactor: 3
        maxRetries: 2
        jitter: true
      fetchRetry:
        initialBackoff: 500ms
        backoffFactor: 2
        maxRetries: 3
        jitter: true
      backgroundHealthCheckFailLimit: 4
      backgroundHealthCheckFailThrottleFactor: 0.5

carbon:
  ingester:
    listenAddress: "0.0.0.0:7204"
    rules:
      - pattern: stats.sum.*
        aggregation:
          type: sum
        policies:
          - resolution: 10s
            retention: 168h
      - pattern: .*
        aggregation:
          type: mean
        policies:
          - resolution: 10s
            retention: 168h
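
The carbon rules above map directly onto the test writes from the Graphite section: stats.sum.bar matches the stats.sum.* pattern and is aggregated with a sum, while stats.mean.bar only matches the catch-all .* pattern and is aggregated with a mean; both end up stored at 10s resolution for 168h, i.e. in the metrics_10s_168h namespace. For example:

echo "stats.sum.bar 10 $(date +%s)" | nc 0.0.0.0 7204   # matches stats.sum.* -> sum at 10s:168h
echo "stats.mean.bar 10 $(date +%s)" | nc 0.0.0.0 7204  # falls through to .* -> mean at 10s:168h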

m3dbnode.yml

coordinator:
  listenAddress:
    type: "config"
    value: "0.0.0.0:7201"

  metrics:
    scope:
      prefix: "coordinator"
    prometheus:
      handlerPath: /metrics
      listenAddress: 0.0.0.0:7203 # until https://github.com/m3db/m3/issues/682 is resolved
    sanitization: prometheus
    samplingRate: 1.0
    extended: none

db:
  logging:
    level: info

  metrics:
    prometheus:
      handlerPath: /metrics
    sanitization: prometheus
    samplingRate: 1.0
    extended: detailed

  listenAddress: 0.0.0.0:9000
  clusterListenAddress: 0.0.0.0:9001
  httpNodeListenAddress: 0.0.0.0:9002
  httpClusterListenAddress: 0.0.0.0:9003
  debugListenAddress: 0.0.0.0:9004

  hostID:
    resolver: environment
    envVarName: M3DB_HOST_ID

  client:
    writeConsistencyLevel: majority
    readConsistencyLevel: unstrict_majority
    writeTimeout: 10s
    fetchTimeout: 15s
    connectTimeout: 20s
    writeRetry:
      initialBackoff: 500ms
      backoffFactor: 3
      maxRetries: 2
      jitter: true
    fetchRetry:
      initialBackoff: 500ms
      backoffFactor: 2
      maxRetries: 3
      jitter: true
    backgroundHealthCheckFailLimit: 4
    backgroundHealthCheckFailThrottleFactor: 0.5

  gcPercentage: 100

  writeNewSeriesAsync: true
  writeNewSeriesLimitPerSecond: 1048576
  writeNewSeriesBackoffDuration: 2ms

  bootstrap:
    bootstrappers:
      - filesystem
      - peers
      - commitlog
      - uninitialized_topology
    fs:
      numProcessorsPerCPU: 0.125

  cache:
    series:
      policy: lru

  commitlog:
    flushMaxBytes: 524288
    flushEvery: 1s
    queue:
      calculationType: fixed
      size: 2097152
    blockSize: 10m

  fs:
    filePathPrefix: /var/lib/m3db
    writeBufferSize: 65536
    dataReadBufferSize: 65536
    infoReadBufferSize: 128
    seekReadBufferSize: 4096
    throughputLimitMbps: 100.0
    throughputCheckEvery: 128

  repair:
    enabled: false
    interval: 2h
    offset: 30m
    jitter: 1h
    throttle: 2m
    checkInterval: 1m

  pooling:
    blockAllocSize: 16
    type: simple
    seriesPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    blockPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    encoderPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    closersPool:
      size: 104857
      lowWatermark: 0.7
      highWatermark: 1.0
    contextPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    segmentReaderPool:
      size: 16384
      lowWatermark: 0.7
      highWatermark: 1.0
    iteratorPool:
      size: 2048
      lowWatermark: 0.7
      highWatermark: 1.0
    fetchBlockMetadataResultsPool:
      size: 65536
      capacity: 32
      lowWatermark: 0.7
      highWatermark: 1.0
    fetchBlocksMetadataResultsPool:
      size: 32
      capacity: 4096
      lowWatermark: 0.7
      highWatermark: 1.0
    hostBlockMetadataSlicePool:
      size: 131072
      capacity: 3
      lowWatermark: 0.7
      highWatermark: 1.0
    blockMetadataPool:
      size: 65536
      lowWatermark: 0.7
      highWatermark: 1.0
    blockMetadataSlicePool:
      size: 65536
      capacity: 32
      lowWatermark: 0.7
      highWatermark: 1.0
    blocksMetadataPool:
      size: 65536
      lowWatermark: 0.7
      highWatermark: 1.0
    blocksMetadataSlicePool:
      size: 32
      capacity: 4096
      lowWatermark: 0.7
      highWatermark: 1.0
    identifierPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    bytesPool:
      buckets:
        - capacity: 16
          size: 524288
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 32
          size: 262144
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 64
          size: 131072
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 128
          size: 65536
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 256
          size: 65536
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 1440
          size: 16384
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 4096
          size: 8192
          lowWatermark: 0.7
          highWatermark: 1.0

  config:
    service:
      env: default_env
      zone: embedded
      service: m3db
      cacheDir: /var/lib/m3kv
      etcdClusters:
        - zone: embedded
          endpoints:
            - m3db_seed:2379
    seedNodes:
      initialCluster:
        - hostID: m3db_seed
          endpoint: http://m3db_seed:2380

prometheus.yml

global:
  external_labels:
    role: "remote"
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'coordinator'
    static_configs:
      - targets: ['m3coordinator01:7203']

  - job_name: 'dbnode'
    static_configs:
      - targets: ['m3db_seed:7203']

remote_write:
  - url: http://m3coordinator01:7201/api/v1/prom/remote/write