
Chris Blum (mulbc)

  • IBM
  • Berlin, Germany
mulbc / all.yml
Last active October 11, 2023 16:38
EBPF IO latency histogram with IO size buckets
kind: Namespace
apiVersion: v1
metadata:
  name: ebpf-exporter
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: ebpf-exporter-robot
  namespace: ebpf-exporter
mulbc / state.json
Last active October 4, 2021 09:32 — forked from dobbythebot/state.json
{
  "ocs": {
    "flashSize": 2.5,
    "usableCapacity": 10,
    "deploymentType": "internal",
    "nvmeTuning": false,
    "cephFSActive": true,
    "nooBaaActive": true,
    "rgwActive": false,
    "dedicatedMachines": []
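The fields above come from the ODF/OCS sizing tool's saved state. Assuming default 3-way Ceph replication (an assumption, not stated in the file), the raw capacity and device count implied by these numbers can be sketched with the standard library:

```python
import json
import math

# Closed-up copy of the truncated sizer state above, for parsing
state = json.loads("""
{
  "ocs": {
    "flashSize": 2.5,
    "usableCapacity": 10,
    "deploymentType": "internal",
    "nvmeTuning": false,
    "cephFSActive": true,
    "nooBaaActive": true,
    "rgwActive": false,
    "dedicatedMachines": []
  }
}
""")

ocs = state["ocs"]
REPLICA = 3  # assumption: default 3-way Ceph replication
raw_tib = ocs["usableCapacity"] * REPLICA        # raw capacity backing the usable 10 TiB
devices = math.ceil(raw_tib / ocs["flashSize"])  # flash devices of 2.5 TiB each
```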
mulbc / vmware-odf-machineset
Last active May 18, 2022 10:46
Creates a machineset for ODF based on an existing machineset (only for VMWARE IPI)
#!/bin/bash
MACHINESET=$(oc get -n openshift-machine-api machinesets -o name | grep -v ocs | head -n1)
oc get -n openshift-machine-api "$MACHINESET" -o json | jq '
del( .metadata.uid, .metadata.managedFields, .metadata.selfLink, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.generation, .status) |
(.metadata.name, .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"], .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"]) |= sub("worker";"ocs") |
(.spec.template.spec.providerSpec.value.numCPUs) |= 16 |
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-gatherer
  namespace: default
  labels:
    k8s-app: disk-gatherer
spec:
  selector:
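The jq filter above strips server-assigned metadata from an existing worker machineset and renames `worker` to `ocs` so the copy can be re-created. The same transformation, sketched in plain Python for illustration (the input object is a minimal stand-in, not a real MachineSet, which carries many more fields):

```python
import copy

LABEL = "machine.openshift.io/cluster-api-machineset"

def clone_machineset(ms):
    """Mirror the jq program: drop server-assigned fields, rename worker->ocs, bump CPUs."""
    out = copy.deepcopy(ms)
    for field in ("uid", "managedFields", "selfLink",
                  "resourceVersion", "creationTimestamp", "generation"):
        out["metadata"].pop(field, None)
    out.pop("status", None)
    rename = lambda s: s.replace("worker", "ocs")
    out["metadata"]["name"] = rename(out["metadata"]["name"])
    sel = out["spec"]["selector"]["matchLabels"]
    sel[LABEL] = rename(sel[LABEL])
    tmpl_labels = out["spec"]["template"]["metadata"]["labels"]
    tmpl_labels[LABEL] = rename(tmpl_labels[LABEL])
    out["spec"]["template"]["spec"]["providerSpec"]["value"]["numCPUs"] = 16
    return out

# Minimal stand-in for a worker machineset
worker = {
    "metadata": {"name": "cluster-worker-0", "uid": "abc123"},
    "spec": {
        "selector": {"matchLabels": {LABEL: "cluster-worker-0"}},
        "template": {
            "metadata": {"labels": {LABEL: "cluster-worker-0"}},
            "spec": {"providerSpec": {"value": {"numCPUs": 4}}},
        },
    },
    "status": {"replicas": 3},
}
ocs_ms = clone_machineset(worker)
```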
mulbc / keybase.md
Created September 15, 2019 16:06

Keybase proof

I hereby claim:

  • I am mulbc on github.
  • I am chrisnblum (https://keybase.io/chrisnblum) on keybase.
  • I have a public key whose fingerprint is 536F 2CCA CC87 1421 DC69 84A1 FE67 A71B 6BDE BDA8

To claim this, I am signing this object:

mulbc / cosbench.xml
Last active January 3, 2020 01:27
Cosbench versus Gosbench test config files
<?xml version="1.0" encoding="UTF-8" ?>
<workload name="7-RGW-64K-1M-32M" description="7 RGW 64K-1M-32M test" config="">
  <storage type="s3" config="accesskey=S3user1;secretkey=S3user1key;timeout=999999;endpoint=http://192.168.170.20:8080" />
  <workflow>
    <!-- ************************* 64K ********************************* -->
    <workstage name="init">
      <work type="init" workers="8" config="cprefix=7rgw64k;containers=r(1,70)"></work>
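A quick way to inspect such a workload file with the Python standard library (the XML embedded below is a closed-up excerpt of the truncated preview above, with the declaration dropped since `ElementTree` rejects encoding declarations in unicode strings):

```python
import xml.etree.ElementTree as ET

WORKLOAD = """
<workload name="7-RGW-64K-1M-32M" description="7 RGW 64K-1M-32M test" config="">
  <storage type="s3" config="accesskey=S3user1;secretkey=S3user1key;timeout=999999;endpoint=http://192.168.170.20:8080" />
  <workflow>
    <workstage name="init">
      <work type="init" workers="8" config="cprefix=7rgw64k;containers=r(1,70)"></work>
    </workstage>
  </workflow>
</workload>
"""

root = ET.fromstring(WORKLOAD)
storage_type = root.find("storage").get("type")
# List each workstage with its worker count
stages = [(ws.get("name"), ws.find("work").get("workers"))
          for ws in root.iter("workstage")]
```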
mulbc / Gosbench Dashboard.json
Created September 4, 2019 09:05
Gosbench Dashboard
{
  "__inputs": [
    {
      "name": "DS_LOCAL",
      "label": "Local",
      "description": "",
      "type": "datasource",
      "pluginId": "prometheus",
      "pluginName": "Prometheus"
    }
mulbc / cosb_grafa_link.py
Created September 4, 2019 08:58
Cosbench misc parsers
mulbc / radosgw-exporter.service
Created September 4, 2019 08:50
RGW textfile collector
[Unit]
Description=Ceph RGW Prometheus Exporter
After=docker.service

[Service]
EnvironmentFile=-/etc/environment
ExecStart=/usr/local/bin/python3 /usr/bin/rgw_exporter.py
Restart=always
RestartSec=90s
TimeoutStartSec=300
#!/usr/bin/env python2
import rados
import sys
import time

# Connect to the local Ceph cluster via librados and iterate over its pools
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
for pool in cluster.list_pools():
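The preview cuts off inside the pool loop. Given the unit file's name ("RGW textfile collector"), the script presumably writes per-pool metrics in the Prometheus text exposition format for node_exporter's textfile collector. A sketch of such a formatter, without the `rados` dependency (the metric name and label are illustrative assumptions):

```python
def prom_line(name, labels, value):
    """Render one sample in the Prometheus text exposition format."""
    label_str = ",".join('%s="%s"' % (k, v) for k, v in sorted(labels.items()))
    return "%s{%s} %s" % (name, label_str, value)

# e.g. one line per pool name returned by cluster.list_pools()
# (pool names and object counts below are illustrative)
lines = [prom_line("ceph_pool_objects", {"pool": p}, n)
         for p, n in [("rbd", 120), ("default.rgw.buckets.data", 4500)]]
```

node_exporter would then pick these lines up from a `.prom` file in its textfile collector directory.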