@tinnefeld
Created April 25, 2012 23:43
Running clusterperf to determine the performance penalty introduced by server statistics (two full runs below: baseline, then with statistics)
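To quantify the penalty, the `basic.*` metric lines from the two runs can be diffed programmatically. The sketch below is a hypothetical helper (not part of clusterperf itself); it assumes each metric line has the shape `basic.read100 9.6 us <description>`, as in the output that follows, and reports the relative change per metric.

```python
# Hypothetical sketch: compare "basic.*" metric lines from two clusterperf
# runs to estimate the overhead of server statistics. Assumes lines look
# like "basic.read100 9.6 us read single 100B object with 30B key".

def parse_metrics(lines):
    """Map metric name -> (value, unit) for lines starting with 'basic.'."""
    metrics = {}
    for line in lines:
        parts = line.split()
        if parts and parts[0].startswith("basic."):
            metrics[parts[0]] = (float(parts[1]), parts[2])
    return metrics

def penalty(baseline, with_stats):
    """Relative change in percent for each metric present in both runs."""
    return {
        name: 100.0 * (with_stats[name][0] - base_val) / base_val
        for name, (base_val, _unit) in baseline.items()
        if name in with_stats
    }

# Example using the read100 latency from the two runs in this gist:
baseline = parse_metrics([
    "basic.read100     9.6 us     read single 100B object with 30B key",
])
stats = parse_metrics([
    "basic.read100     9.7 us     read single 100B object with 30B key",
])
print(penalty(baseline, stats))  # read100 latency up roughly 1.04%
```

On these two runs the differences are within run-to-run noise for most metrics, which is the point of the comparison.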
Master client reading from all masters
Slave id 1 reading from all masters
Slave id 2 reading from all masters
Slave id 3 reading from all masters
Slave id 4 reading from all masters
Slave id 5 reading from all masters
Slave id 6 reading from all masters
Slave id 7 reading from all masters
Slave id 8 reading from all masters
Slave id 9 reading from all masters
Slave id 10 reading from all masters
Slave id 11 reading from all masters
Slave id 12 reading from all masters
Slave id 13 reading from all masters
Slave id 14 reading from all masters
Slave id 15 reading from all masters
Slave id 16 reading from all masters
Slave id 17 reading from all masters
Slave id 18 reading from all masters
Slave id 19 reading from all masters
Slave id 20 reading from all masters
Slave id 21 reading from all masters
Slave id 22 reading from all masters
Slave id 23 reading from all masters
Slave id 24 reading from all masters
Slave id 25 reading from all masters
Slave id 26 reading from all masters
Slave id 27 reading from all masters
Slave id 28 reading from all masters
Slave id 29 reading from all masters
Slave id 30 reading from all masters
Slave id 31 reading from all masters
Slave id 32 reading from all masters
Slave id 33 reading from all masters
Slave id 34 reading from all masters
Slave id 35 reading from all masters
Slave id 36 reading from all masters
Slave id 37 reading from all masters
Slave id 38 reading from all masters
Slave id 39 reading from all masters
Slave id 40 reading from all masters
Slave id 41 reading from all masters
Slave id 42 reading from all masters
Slave id 43 reading from all masters
Slave id 44 reading from all masters
Slave id 45 reading from all masters
Slave id 46 reading from all masters
Slave id 47 reading from all masters
Slave id 48 reading from all masters
Slave id 49 reading from all masters
Slave id 50 reading from all masters
Slave id 51 reading from all masters
Slave id 52 reading from all masters
Slave id 53 reading from all masters
Slave id 54 reading from all masters
Slave id 55 reading from all masters
Slave id 56 reading from all masters
Slave id 57 reading from all masters
Slave id 58 reading from all masters
Slave id 59 reading from all masters
basic.read100 9.6 us read single 100B object with 30B key
basic.readBw100 9.9 MB/s bandwidth reading 100B object with 30B key
basic.read1K 11.4 us read single 1KB object with 30B key
basic.readBw1K 83.5 MB/s bandwidth reading 1KB object with 30B key
basic.read10K 14.6 us read single 10KB object with 30B key
basic.readBw10K 652.5 MB/s bandwidth reading 10KB object with 30B key
basic.read100K 51.5 us read single 100KB object with 30B key
basic.readBw100K 1.8 GB/s bandwidth reading 100KB object with 30B key
basic.read1M 437.6 us read single 1MB object with 30B key
basic.readBw1M 2.1 GB/s bandwidth reading 1MB object with 30B key
basic.write100 35.8 us write single 100B object with 30B key
basic.writeBw100 2.7 MB/s bandwidth writing 100B object with 30B key
basic.write1K 40.7 us write single 1KB object with 30B key
basic.writeBw1K 23.4 MB/s bandwidth writing 1KB object with 30B key
basic.write10K 71.5 us write single 10KB object with 30B key
basic.writeBw10K 133.4 MB/s bandwidth writing 10KB object with 30B key
basic.write100K 407.2 us write single 100KB object with 30B key
basic.writeBw100K 234.2 MB/s bandwidth writing 100KB object with 30B key
basic.write1M 3.7 ms write single 1MB object with 30B key
basic.writeBw1M 255.2 MB/s bandwidth writing 1MB object with 30B key
broadcast 424.9 us broadcast message to 9 slaves
netBandwidth 141.3 GB/s many clients reading from different servers
netBandwidth.max 1.7 GB/s fastest client
netBandwidth.min 769.1 MB/s slowest client
readNotFound 22.8 us read object that doesn't exist
# RAMCloud write performance for 100 B object with 30 B key
# during interleaved asynchronous writes of various sizes
# Generated by 'clusterperf.py writeAsyncSync'
#
# firstWriteIsSync firstObjectSize firstWriteLatency(us) syncWriteLatency(us)
#----------------------------------------------------------------------------
0 100 22.9 47.6
0 1000 23.5 45.9
0 10000 32.4 53.1
0 100000 164.6 258.8
0 1000000 1542.7 2234.4
1 100 40.0 38.2
1 1000 43.4 37.3
1 10000 67.3 45.7
1 100000 334.4 124.0
1 1000000 2940.1 845.7
# RAMCloud read performance for 100 B objects
# with keys of various lengths.
# Generated by 'clusterperf.py readVaryingKeyLength'
#
# Key Length Latency (us) Bandwidth (MB/s)
#----------------------------------------------------------------------------
1 9.3 0.1
5 9.2 0.5
10 9.1 1.0
15 9.2 1.6
20 9.2 2.1
25 9.3 2.6
30 9.3 3.1
35 9.3 3.6
40 9.4 4.1
45 9.5 4.5
50 9.4 5.1
55 9.5 5.5
60 9.5 6.0
65 9.5 6.5
70 9.6 7.0
75 9.6 7.4
80 9.7 7.9
85 9.7 8.3
90 9.8 8.8
95 9.9 9.2
100 9.9 9.7
200 11.0 17.4
300 11.6 24.7
400 12.4 30.8
500 13.1 36.5
600 13.6 42.2
700 14.0 47.6
800 14.6 52.3
900 15.2 56.4
1000 15.8 60.3
2000 20.7 92.3
3000 25.5 112.2
4000 30.6 124.8
5000 35.7 133.7
6000 40.5 141.2
7000 45.6 146.5
8000 50.0 152.7
9000 55.6 154.5
10000 59.8 159.5
20000 110.9 171.9
30000 160.4 178.4
40000 211.6 180.3
50000 260.6 183.0
60000 309.7 184.7
# RAMCloud write performance for 100 B objects
# with keys of various lengths.
# Generated by 'clusterperf.py writeVaryingKeyLength'
#
# Key Length Latency (us) Bandwidth (MB/s)
#----------------------------------------------------------------------------
1 38.2 0.0
5 34.7 0.1
10 35.0 0.3
15 34.9 0.4
20 35.1 0.5
25 35.2 0.7
30 35.0 0.8
35 35.2 0.9
40 35.2 1.1
45 35.1 1.2
50 35.3 1.3
55 35.5 1.5
60 36.7 1.6
65 36.5 1.7
70 36.2 1.8
75 36.2 2.0
80 36.1 2.1
85 36.5 2.2
90 36.5 2.4
95 36.7 2.5
100 36.9 2.6
200 40.1 4.8
300 38.5 7.4
400 39.5 9.7
500 41.0 11.6
600 45.0 12.7
700 43.2 15.4
800 47.1 16.2
900 45.1 19.0
1000 49.6 19.2
2000 59.5 32.0
3000 72.9 39.3
4000 86.5 44.1
5000 98.2 48.6
6000 110.0 52.0
7000 124.6 53.6
8000 136.8 55.8
9000 153.1 56.1
10000 163.7 58.3
20000 291.3 65.5
30000 416.1 68.8
40000 531.8 71.7
50000 662.6 72.0
60000 787.5 72.7
# RAMCloud read performance as a function of load (1 or more
# clients all reading a single 100-byte object with 30-byte key
# repeatedly).
# Generated by 'clusterperf.py readLoaded'
#
# numClients readLatency(us) throughput(total kreads/sec)
#----------------------------------------------------------
1 9.7 103
2 10.3 194
3 10.8 278
4 11.8 340
5 14.7 339
6 17.2 349
7 20.3 344
8 23.2 345
9 27.1 332
10 29.2 343
11 32.3 340
12 35.0 343
13 38.2 340
14 42.5 329
15 43.7 343
16 46.8 342
17 49.2 346
18 53.3 337
19 55.0 345
20 60.7 329
# RAMCloud read performance when 1 or more clients read
# 100-byte objects with 30-byte keys chosen at random from
# 10 servers.
# Generated by 'clusterperf.py readRandom'
#
# numClients throughput(total kreads/sec) slowest(ms) reads > 10us
#--------------------------------------------------------------------
1 83 6.85 38.9%
2 181 1.49 34.5%
3 261 7.68 36.3%
4 379 0.68 37.3%
5 468 0.65 39.1%
6 563 0.64 42.6%
7 660 0.63 45.6%
8 741 0.79 47.2%
9 823 0.86 49.5%
10 907 0.66 52.0%
11 995 0.73 56.4%
12 1069 0.67 60.0%
13 1106 3.32 63.2%
14 1195 2.03 66.1%
15 1254 2.95 68.8%
16 1352 0.88 71.6%
17 1401 3.46 73.7%
18 1473 2.39 75.2%
19 1567 1.03 77.3%
20 1622 1.95 78.5%
21 1635 3.32 80.0%
22 1761 0.87 81.5%
23 1816 0.76 82.3%
24 1839 2.13 83.4%
25 1876 1.98 84.4%
26 1962 0.95 85.4%
27 2001 1.23 86.2%
28 2011 3.93 87.0%
29 2140 0.62 87.8%
30 2089 3.91 88.8%
31 2181 2.44 89.2%
32 2257 1.24 89.7%
33 2310 0.67 90.2%
34 2345 2.20 90.7%
35 2372 2.28 91.2%
36 2406 2.30 91.4%
37 2419 2.10 91.6%
38 2477 2.05 92.1%
39 2500 3.42 92.1%
40 2576 1.95 93.1%
41 2570 2.36 93.0%
42 2570 2.36 93.1%
43 2585 2.38 93.2%
44 2623 2.32 93.6%
45 2664 2.33 93.9%
46 2631 3.67 93.8%
47 2688 2.46 94.2%
48 2670 4.41 94.5%
49 2722 2.45 94.7%
50 2733 2.55 94.7%
Master client reading from all masters
Slave id 1 reading from all masters
Slave id 2 reading from all masters
Slave id 3 reading from all masters
Slave id 4 reading from all masters
Slave id 5 reading from all masters
Slave id 6 reading from all masters
Slave id 7 reading from all masters
Slave id 8 reading from all masters
Slave id 9 reading from all masters
Slave id 10 reading from all masters
Slave id 11 reading from all masters
Slave id 12 reading from all masters
Slave id 13 reading from all masters
Slave id 14 reading from all masters
Slave id 15 reading from all masters
Slave id 16 reading from all masters
Slave id 17 reading from all masters
Slave id 18 reading from all masters
Slave id 19 reading from all masters
Slave id 20 reading from all masters
Slave id 21 reading from all masters
Slave id 22 reading from all masters
Slave id 23 reading from all masters
Slave id 24 reading from all masters
Slave id 25 reading from all masters
Slave id 26 reading from all masters
Slave id 27 reading from all masters
Slave id 28 reading from all masters
Slave id 29 reading from all masters
Slave id 30 reading from all masters
Slave id 31 reading from all masters
Slave id 32 reading from all masters
Slave id 33 reading from all masters
Slave id 34 reading from all masters
Slave id 35 reading from all masters
Slave id 36 reading from all masters
Slave id 37 reading from all masters
Slave id 38 reading from all masters
Slave id 39 reading from all masters
Slave id 40 reading from all masters
Slave id 41 reading from all masters
Slave id 42 reading from all masters
Slave id 43 reading from all masters
Slave id 44 reading from all masters
Slave id 45 reading from all masters
Slave id 46 reading from all masters
Slave id 47 reading from all masters
Slave id 48 reading from all masters
Slave id 49 reading from all masters
Slave id 50 reading from all masters
Slave id 51 reading from all masters
Slave id 52 reading from all masters
Slave id 53 reading from all masters
Slave id 54 reading from all masters
Slave id 55 reading from all masters
Slave id 56 reading from all masters
Slave id 57 reading from all masters
Slave id 58 reading from all masters
Slave id 59 reading from all masters
basic.read100 9.7 us read single 100B object with 30B key
basic.readBw100 9.8 MB/s bandwidth reading 100B object with 30B key
basic.read1K 11.4 us read single 1KB object with 30B key
basic.readBw1K 83.7 MB/s bandwidth reading 1KB object with 30B key
basic.read10K 14.7 us read single 10KB object with 30B key
basic.readBw10K 650.3 MB/s bandwidth reading 10KB object with 30B key
basic.read100K 51.6 us read single 100KB object with 30B key
basic.readBw100K 1.8 GB/s bandwidth reading 100KB object with 30B key
basic.read1M 435.0 us read single 1MB object with 30B key
basic.readBw1M 2.1 GB/s bandwidth reading 1MB object with 30B key
basic.write100 35.2 us write single 100B object with 30B key
basic.writeBw100 2.7 MB/s bandwidth writing 100B object with 30B key
basic.write1K 38.8 us write single 1KB object with 30B key
basic.writeBw1K 24.6 MB/s bandwidth writing 1KB object with 30B key
basic.write10K 72.7 us write single 10KB object with 30B key
basic.writeBw10K 131.2 MB/s bandwidth writing 10KB object with 30B key
basic.write100K 413.5 us write single 100KB object with 30B key
basic.writeBw100K 230.6 MB/s bandwidth writing 100KB object with 30B key
basic.write1M 3.8 ms write single 1MB object with 30B key
basic.writeBw1M 250.9 MB/s bandwidth writing 1MB object with 30B key
broadcast 432.2 us broadcast message to 9 slaves
netBandwidth 135.8 GB/s many clients reading from different servers
netBandwidth.max 1.5 GB/s fastest client
netBandwidth.min 677.1 MB/s slowest client
readNotFound 22.9 us read object that doesn't exist
# RAMCloud write performance for 100 B object with 30 B key
# during interleaved asynchronous writes of various sizes
# Generated by 'clusterperf.py writeAsyncSync'
#
# firstWriteIsSync firstObjectSize firstWriteLatency(us) syncWriteLatency(us)
#----------------------------------------------------------------------------
0 100 16.5 38.8
0 1000 19.2 40.4
0 10000 31.3 56.3
0 100000 168.1 252.4
0 1000000 1511.2 2196.8
1 100 40.0 39.4
1 1000 43.4 41.4
1 10000 64.9 47.5
1 100000 325.5 125.9
1 1000000 2894.6 825.0
# RAMCloud read performance for 100 B objects
# with keys of various lengths.
# Generated by 'clusterperf.py readVaryingKeyLength'
#
# Key Length Latency (us) Bandwidth (MB/s)
#----------------------------------------------------------------------------
1 9.5 0.1
5 9.5 0.5
10 9.4 1.0
15 9.4 1.5
20 9.4 2.0
25 9.5 2.5
30 9.6 3.0
35 9.6 3.5
40 9.6 4.0
45 9.7 4.4
50 9.7 4.9
55 9.8 5.4
60 9.8 5.8
65 9.8 6.3
70 9.9 6.8
75 9.9 7.2
80 9.9 7.7
85 10.0 8.1
90 10.0 8.6
95 10.1 8.9
100 10.1 9.5
200 11.3 16.9
300 12.0 23.9
400 12.8 29.9
500 13.4 35.6
600 13.9 41.1
700 14.5 45.9
800 15.0 50.9
900 15.6 55.1
1000 16.2 59.0
2000 21.3 89.7
3000 26.4 108.3
4000 31.4 121.4
5000 36.8 129.7
6000 41.7 137.3
7000 46.8 142.6
8000 51.9 147.0
9000 57.2 150.1
10000 62.3 153.1
20000 114.6 166.5
30000 167.0 171.3
40000 218.9 174.3
50000 271.1 175.9
60000 322.5 177.4
# RAMCloud write performance for 100 B objects
# with keys of various lengths.
# Generated by 'clusterperf.py writeVaryingKeyLength'
#
# Key Length Latency (us) Bandwidth (MB/s)
#----------------------------------------------------------------------------
1 34.0 0.0
5 33.4 0.1
10 34.1 0.3
15 33.7 0.4
20 34.2 0.6
25 34.0 0.7
30 34.2 0.8
35 34.8 1.0
40 34.2 1.1
45 34.2 1.3
50 34.4 1.4
55 34.7 1.5
60 36.5 1.6
65 36.5 1.7
70 36.5 1.8
75 36.8 1.9
80 36.6 2.1
85 37.0 2.2
90 37.0 2.3
95 37.4 2.4
100 37.6 2.5
200 38.5 5.0
300 38.4 7.5
400 40.1 9.5
500 41.1 11.6
600 43.7 13.1
700 43.7 15.3
800 46.3 16.5
900 46.8 18.3
1000 48.8 19.5
2000 59.0 32.3
3000 70.7 40.5
4000 84.6 45.1
5000 94.9 50.2
6000 106.2 53.9
7000 120.5 55.4
8000 132.2 57.7
9000 145.5 59.0
10000 156.0 61.1
20000 283.5 67.3
30000 406.6 70.4
40000 525.9 72.5
50000 652.2 73.1
60000 773.8 74.0
# RAMCloud read performance as a function of load (1 or more
# clients all reading a single 100-byte object with 30-byte key
# repeatedly).
# Generated by 'clusterperf.py readLoaded'
#
# numClients readLatency(us) throughput(total kreads/sec)
#----------------------------------------------------------
1 9.4 107
2 10.0 201
3 10.3 292
4 11.3 353
5 14.2 352
6 17.0 353
7 19.6 358
8 22.3 358
9 25.1 358
10 28.0 357
11 31.3 351
12 33.9 354
13 36.6 356
14 40.0 350
15 42.2 356
16 45.1 355
17 47.9 355
18 50.5 356
19 54.1 351
20 56.4 354
# RAMCloud read performance when 1 or more clients read
# 100-byte objects with 30-byte keys chosen at random from
# 10 servers.
# Generated by 'clusterperf.py readRandom'
#
# numClients throughput(total kreads/sec) slowest(ms) reads > 10us
#--------------------------------------------------------------------
1 86 2.72 40.7%
2 184 0.71 34.7%
3 287 0.08 35.9%
4 378 0.56 38.7%
5 462 1.44 41.0%
6 543 1.09 43.2%
7 642 0.75 45.0%
8 728 1.89 48.7%
9 791 1.96 49.9%
10 849 2.54 51.8%
11 963 1.91 56.5%
12 1040 1.94 60.4%
13 1125 0.94 64.1%
14 1191 1.02 67.0%
15 1252 0.82 71.1%
16 1341 0.76 72.8%
17 1422 0.85 74.0%
18 1461 1.36 76.4%
19 1501 1.48 77.1%
20 1554 1.41 79.0%
21 1630 1.79 80.1%
22 1636 2.94 81.6%
23 1586 3.96 83.0%
24 1660 3.97 83.8%
25 1676 6.30 84.8%
26 1712 8.82 86.4%
27 1830 3.02 87.2%
28 1888 9.88 87.6%
29 1978 3.71 88.4%
30 2074 1.93 88.7%
31 2052 4.35 89.5%
32 2217 2.02 89.8%
33 1966 9.60 90.1%
34 2217 3.75 90.7%
35 2190 3.72 91.0%
36 1461 33.33 91.0%
37 2355 2.33 91.8%
38 2372 4.15 92.1%
39 2254 8.06 92.3%
40 2475 2.26 92.8%
41 2404 3.43 92.6%
42 2484 2.35 93.1%
43 2425 9.04 93.3%
44 2516 3.62 93.4%
45 2495 3.80 93.5%
46 2571 3.61 94.1%
47 2563 3.86 94.3%
48 2365 4.35 94.1%
49 2390 4.26 94.0%
50 2695 3.65 94.9%
**** server.rc04.log:
1335396913.774326015 src/FailureDetector.cc:196 in FailureDetector::alertCoordinator default WARNING[32393:3]: Ping timeout to server id 5 (locator "infrc:host=192.168.1.110,port=12247")