
Dan van der Ster dvanders

  • Clyso
  • Vancouver, Canada
diff --git a/src/osd/OSD.cc b/src/osd/OSD.cc
index 0562eed..1a2d397 100644
--- a/src/osd/OSD.cc
+++ b/src/osd/OSD.cc
@@ -1809,6 +1809,15 @@ int OSD::init()
   dout(2) << "boot" << dendl;
+  // initialize the daily loadavg with current 15min loadavg
+  double loadavgs[3];
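The hunk seeds a daily load-average estimate from the current 15-minute loadavg, presumably via the C getloadavg(3) call filling `loadavgs[3]`. A minimal Python sketch of the same idea (the variable name `daily_loadavg` is an assumption, not the patch's identifier):

```python
import os

# os.getloadavg() wraps getloadavg(3) and returns the
# 1-, 5- and 15-minute load averages as floats.
one, five, fifteen = os.getloadavg()

# Seed the daily estimate with the 15-minute value,
# as the patch comment describes.
daily_loadavg = fifteen
```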

Keybase proof

I hereby claim:

  • I am dvanders on github.
  • I am dvanders (https://keybase.io/dvanders) on keybase.
  • I have a public key ASAJy2L7fN70X5n0vp7FyhL1m4jydu-WyRLpDc-ijtpNdAo

To claim this, I am signing this object:

@dvanders
dvanders / haproxy.cfg
Created July 11, 2017 15:51
CERN haproxy.cfg
# This file managed by Puppet
global
chroot /var/lib/haproxy
group haproxy
log 127.0.0.1 local0
maxconn 2048
pidfile /var/run/haproxy.pid
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK
stats socket /var/lib/haproxy/stats level admin
tune.ssl.default-dh-param 2048

[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
fsid = xxx
debug filestore = 1
debug mon = 1
debug osd = 1
@dvanders
dvanders / btrfs-smr-balance.py
Last active May 5, 2020 15:08
btrfs-smr-balance.py
#!/usr/bin/env python3
# The goal of this is to gradually balance a btrfs filesystem which contains DM-SMR drives.
# Such drives are described in detail at https://www.usenix.org/node/188434
# A normal drive should be able to balance a single 1GB chunk in under 30s.
# Such a stripe would normally be written directly to the shingled blocks, but if
# it was cached, it would take roughly 100s to clean.
# So our heuristic here is:
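The heuristic itself is truncated in this excerpt. One plausible pacing rule built from the timings quoted above (the thresholds and the `backoff_for()` helper are illustrative assumptions, not the gist's actual code): if the last chunk balanced quickly, the drive wrote it straight to the shingled region and we can continue; if it ran long, the chunk likely landed in the DM-SMR media cache, so give the drive idle time to destage before the next chunk.

```python
FAST_CHUNK_SECS = 30.0   # direct write to the shingled blocks
SLOW_CHUNK_SECS = 100.0  # chunk hit the media cache and must be cleaned

def backoff_for(last_chunk_secs: float) -> float:
    """Seconds to sleep before balancing the next 1GB chunk."""
    if last_chunk_secs <= FAST_CHUNK_SECS:
        return 0.0  # drive is keeping up, keep going
    # Otherwise back off proportionally to how far past the fast
    # threshold the last chunk ran, capped at one cleaning cycle.
    return min(last_chunk_secs - FAST_CHUNK_SECS, SLOW_CHUNK_SECS)

print(backoff_for(20.0))   # → 0.0
print(backoff_for(120.0))  # → 90.0
```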
@dvanders
dvanders / ceph-iosched
Created February 2, 2021 10:42
ceph-iosched
#!/bin/sh
# prefer cfq (el7), then bfq (el8), then do nothing
if grep -q cfq /sys/block/sd*/queue/scheduler; then
# set both SSDs (rotational=0) and spinning disks (rotational=1) to the cfq scheduler
for DISK in /sys/block/sd*; do grep -q 0 ${DISK}/queue/rotational && echo cfq > ${DISK}/queue/scheduler; done
for DISK in /sys/block/sd*; do grep -q 1 ${DISK}/queue/rotational && echo cfq > ${DISK}/queue/scheduler; done
# tune cfq not to penalize writes when reading heavily
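The script's stated preference order is cfq (el7), then bfq (el8), then do nothing, but only the cfq branch survives in this excerpt. A sketch of that selection logic as a pure function (the function name and return convention are assumptions):

```python
def pick_scheduler(available: list) -> "str | None":
    """Return the preferred I/O scheduler from those a disk offers,
    mirroring the script's cfq-then-bfq-then-nothing order."""
    for preferred in ("cfq", "bfq"):
        if preferred in available:
            return preferred
    return None  # neither available: "do nothing"

print(pick_scheduler(["noop", "deadline", "cfq"]))     # el7-style → cfq
print(pick_scheduler(["mq-deadline", "bfq", "none"]))  # el8-style → bfq
print(pick_scheduler(["mq-deadline", "none"]))         # neither → None
```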
@dvanders
dvanders / gist:ebe124cf9bdf0af9621cc2c6c6d450bf
Last active November 22, 2023 19:28
efi software md raid1 for kickstart
ignoredisk --only-use=sda,sdb
clearpart --all --initlabel --drives=sda,sdb
# for /boot
partition raid.01 --size=1024 --ondisk=sda
partition raid.02 --size=1024 --ondisk=sdb
# for /boot/efi
partition raid.11 --size=256 --ondisk=sda
partition raid.12 --size=256 --ondisk=sdb
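The excerpt ends before the raid.* partitions are assembled. In kickstart syntax they would typically be joined into md RAID1 devices with `raid` directives along these lines (a sketch; the device names and filesystem choices are assumptions, not the gist's actual lines):

```
# assemble the raid.* partitions into md RAID1 devices
raid /boot --level=RAID1 --device=boot --fstype=xfs raid.01 raid.02
raid /boot/efi --level=RAID1 --device=efi --fstype=efi raid.11 raid.12
```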