@paha
paha / cloudSettings
Last active July 6, 2020 21:16
Visual Studio Code Settings Sync Gist
{"lastUpload":"2020-06-01T19:46:55.579Z","extensionVersion":"v3.4.3"}
paha / gist:59ee91c7277aff023b121daf5aaf242a
Created November 1, 2017 18:01
test for OSD monitoring
2017-11-01 17:52:17.546621 I | cephosd: osd.0 is marked 'DOWN'
2017-11-01 17:52:17.546645 W | cephosd: waiting for the osd.0 to exceed the grace period
2017-11-01 17:52:27.547395 I | exec: Running command: ceph osd dump --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/893333188
2017-11-01 17:52:27.745852 I | cephosd: OSD HEALTH This OSD proc --> &{parent:0xc4204c08c0 cmd:0xc42034adc0 monitor:true retries:0 totalRetries:136 retrySecondsExponentBase:2 waitForExit:0x1112730}.
2017-11-01 17:52:27.745882 I | cephosd: osd.0 is marked 'DOWN'
2017-11-01 17:52:27.745940 I | cephosd: stopping osd.0, it has been down for longer than the grace period (down since 2017-11-01 17:50:42.806257887 +0000 UTC m=+15360.703079473)
2017-11-01 17:52:27.746242 I | proc: stopping child process 9059
2017-11-01 17:52:27.746467 I | proc: child process 9059 stopped successfully
2017-11-01 17:52:27.748366 I | cephosd: stopped osd.0
2017-11-01 17:52:27.75835
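The log above shows the operator's pattern: mark osd.0 DOWN, wait out a grace period, then stop the child process once the down time exceeds it. A minimal shell sketch of just the timing check, using the timestamps from the log lines above (the grace-period length here is an assumption, not rook's actual value):

```shell
# Sketch of the grace-period check implied by the operator log.
# GRACE_SECONDS is an assumed value; rook's real setting may differ.
GRACE_SECONDS=60
DOWN_SINCE="2017-11-01 17:50:42"   # from the "down since" log line
NOW="2017-11-01 17:52:27"          # timestamp of the "stopping osd.0" line

down_for=$(( $(date -u -d "$NOW" +%s) - $(date -u -d "$DOWN_SINCE" +%s) ))
if [ "$down_for" -gt "$GRACE_SECONDS" ]; then
  echo "osd.0 exceeded grace period (down ${down_for}s)"
fi
```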
root@rookeval-3423604513-3f34k:/tests# fio test-rook.fio
seq-read: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
rand-read: (g=1): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
seq-write: (g=2): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
rand-write: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.2.10
Starting 4 processes
seq-read: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [_(3),w(1)] [83.6% done] [0KB/10728KB/0KB /s] [0/2682/0 iops] [eta 00m:34s]
seq-read: (groupid=0, jobs=1): err= 0: pid=733: Fri Sep 15 20:13:44 2017
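All the fio runs in this gist share the same four-job layout: 4K blocks, libaio, iodepth 4, and a 1024MB file per the "Laying out IO file(s)" line, with each job in its own group (g=0 through g=3). A plausible reconstruction of test-rook.fio consistent with that output — the actual file in the gist may differ:

```ini
[global]
ioengine=libaio
iodepth=4
bs=4k
size=1024m
direct=1        ; assumption: direct I/O is typical for storage benchmarks

[seq-read]
rw=read
stonewall       ; stonewall puts each job in its own group, matching g=0..3

[rand-read]
rw=randread
stonewall

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall
```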
fio test-io1.fio
seq-read: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
rand-read: (g=1): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
seq-write: (g=2): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
rand-write: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.2.10
Starting 4 processes
seq-read: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [_(3),w(1)] [100.0% done] [0KB/21452KB/0KB /s] [0/5363/0 iops] [eta 00m:00s]
seq-read: (groupid=0, jobs=1): err= 0: pid=815: Fri Sep 15 20:46:33 2017
fio-2.2.10
Starting 4 processes
seq-read: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [_(3),w(1)] [60.3% done] [0KB/532KB/0KB /s] [0/133/0 iops] [eta 01m:59s]
seq-read: (groupid=0, jobs=1): err= 0: pid=843: Tue Sep 12 00:23:44 2017
read : io=1024.0MB, bw=56339KB/s, iops=14084, runt= 18612msec
slat (usec): min=1, max=1981, avg= 5.07, stdev= 4.03
clat (usec): min=21, max=70398, avg=278.19, stdev=475.55
lat (usec): min=181, max=70403, avg=283.34, stdev=475.56
clat percentiles (usec):
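A quick consistency check on the result above: with a fixed 4K block size, bandwidth is simply iops times block size, so 14084 iops should yield about 14084 × 4 = 56336 KB/s, which matches the reported 56339KB/s within rounding:

```shell
# Cross-check the fio read line: bw (KB/s) ≈ iops * block size (KB)
iops=14084
bs_kb=4
bw_kb=$(( iops * bs_kb ))
echo "computed bw: ${bw_kb} KB/s"
```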
fio-2.2.10
Starting 4 processes
seq-read: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [_(3),w(1)] [57.2% done] [0KB/4088KB/0KB /s] [0/1022/0 iops] [eta 03m:00s]
seq-read: (groupid=0, jobs=1): err= 0: pid=816: Tue Sep 12 00:19:03 2017
read : io=248876KB, bw=4147.7KB/s, iops=1036, runt= 60004msec
slat (usec): min=2, max=55, avg= 6.51, stdev= 2.88
clat (usec): min=229, max=119783, avg=3850.04, stdev=819.55
lat (usec): min=234, max=119789, avg=3856.63, stdev=819.65
clat percentiles (usec):
fio-2.2.10
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)] [90.3% done] [0KB/12248KB/0KB /s] [0/3062/0 iops] [eta 00m:26s]
seq-read: (groupid=0, jobs=1): err= 0: pid=790: Tue Sep 12 00:14:02 2017
read : io=746648KB, bw=12444KB/s, iops=3110, runt= 60002msec
slat (usec): min=1, max=628, avg= 5.97, stdev= 3.07
clat (usec): min=250, max=211369, avg=1278.93, stdev=1005.40
lat (usec): min=255, max=211374, avg=1284.98, stdev=1005.42
clat percentiles (usec):
| 1.00th=[ 454], 5.00th=[ 796], 10.00th=[ 1176], 20.00th=[ 1240],
$ kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
clusterrole "rook-operator" created
serviceaccount "rook-operator" created
clusterrolebinding "rook-operator" created
deployment "rook-operator" created
$ kubectl -n rook exec -it rook-tools -- rookctl object create
succeeded starting creation of object store
$ kubectl -n rook exec -it rook-tools -- rookctl object user create test-user "rgw test user"
User Created
User ID: test-user
Display Name: rgw test user
Email:
Access Key: XXX
Secret Key: XXX
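With the access and secret keys printed by rookctl (masked as XXX above), any S3 client can talk to the RGW endpoint. A hypothetical ~/.s3cfg sketch for s3cmd — the host names below assume an in-cluster rook object-store service and are not taken from the output:

```ini
[default]
access_key = XXX                 ; substitute the Access Key printed by rookctl
secret_key = XXX                 ; substitute the Secret Key
host_base = rook-ceph-rgw.rook   ; assumed in-cluster RGW service name
host_bucket = rook-ceph-rgw.rook
use_https = False
```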
paha / filesystems.sh
Last active September 12, 2017 03:35
$ kubectl create -f test-pvc.yaml
persistentvolumeclaim "rookeval-claim" created
$ kubectl create -f test-deployment.yaml
deployment "rookeval" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rook-operator-3796250946-d42hx 1/1 Running 0 5m
rookeval-4248393606-24x05 1/1 Running 0 30s
$ kubectl exec -it rookeval-4248393606-24x05 -- df -Th --exclude-type=tmpfs
overlay 7.7G 3.0G 4.8G 39% /
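The test-pvc.yaml behind the rookeval-claim above is not shown in the capture; a minimal sketch consistent with rook's block-storage examples of that era (the storage class name and requested size are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rookeval-claim
spec:
  storageClassName: rook-block   # assumed: rook's example block storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi               # assumed size
```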