ELB scale-out/scale-up test

The goal of this test was to determine when ELB scales out or scales up while load is applied to it. However, the results were completely different from the ELB behavior I had assumed in advance. Since there may have been a problem with the test method, I will run the test again. I am keeping a record of this result because it is interesting.

Env

Jan 12, 2015

  • ap-northeast-1
  • VPC
  • EC2(c3.8xlarge)
  • Internet-facing ELB(attach to 2 Availability-Zones)

RTT

EC2(ap-northeast-1c) <-> ELB(ap-northeast-1c)

[ec2-user@ip-10-14-1-132 ~]$ ping 10.14.1.47
PING 10.14.1.47 (10.14.1.47) 56(84) bytes of data.
64 bytes from 10.14.1.47: icmp_seq=1 ttl=64 time=0.336 ms
64 bytes from 10.14.1.47: icmp_seq=2 ttl=64 time=0.313 ms
64 bytes from 10.14.1.47: icmp_seq=3 ttl=64 time=0.240 ms
64 bytes from 10.14.1.47: icmp_seq=4 ttl=64 time=0.262 ms
64 bytes from 10.14.1.47: icmp_seq=5 ttl=64 time=0.323 ms

EC2(ap-northeast-1c) <-> ELB(ap-northeast-1a)

[ec2-user@ip-10-14-1-132 ~]$ ping 10.14.0.46
PING 10.14.0.46 (10.14.0.46) 56(84) bytes of data.
64 bytes from 10.14.0.46: icmp_seq=1 ttl=64 time=2.09 ms
64 bytes from 10.14.0.46: icmp_seq=2 ttl=64 time=2.14 ms
64 bytes from 10.14.0.46: icmp_seq=3 ttl=64 time=2.13 ms
64 bytes from 10.14.0.46: icmp_seq=4 ttl=64 time=2.08 ms
64 bytes from 10.14.0.46: icmp_seq=5 ttl=64 time=2.16 ms

Test script

I determined the ELB endpoint IP address in the same Availability Zone by using "dig" and by watching health-check access (a sketch of this follows the script below). I assumed that ELB would scale out or scale up based on its total load. (That assumption turned out to be my mistake...)

$ cat wrk2elb.sh
#!/bin/bash
taskset 1 /path/to/wrk -t1 -c100 -d60 http://single-ip-addr-of-elb/1
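
For reference, a minimal sketch of how the same-AZ endpoint can be picked, assuming ICMP to the ELB nodes is permitted: resolve every A record of the ELB and compare RTTs from the test instance. As the ping results above show, the intra-AZ node answers in well under 1 ms while the cross-AZ node takes about 2 ms. The script name and ELBDNSNAME are placeholders.

$ cat rtt2elb.sh
#!/bin/bash
# Resolve all ELB A records and measure the average RTT to each; the
# address with the lowest RTT is assumed to be in the same AZ.
for ip in $(dig ELBDNSNAME +short); do
    # $5 of the "rtt min/avg/max/mdev" line is the average RTT
    rtt=$(ping -c 5 -q "$ip" | awk -F/ '/^rtt/ {print $5}')
    echo "$ip avg_rtt=${rtt}ms"
done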

Loadtest1

ELB IP addr

  • 54.199.198.200 -> 10.14.1.47 (ap-northeast-1c)
  • 54.238.165.224 -> 10.14.0.46 (ap-northeast-1a)
$ dig ELBDNSNAME +short
54.199.198.200
54.238.165.224

test1-1

[ec2-user@ip-10-14-1-159 ~]$ ./wrk2elb.sh
Running 3m test @ http://54.199.198.200/1
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.43ms   48.00ms 542.06ms   99.15%
    Req/Sec     7.38k     1.33k   10.17k    78.68%
  1308827 requests in 3.00m, 280.84MB read
Requests/sec:   7271.25
Transfer/sec:      1.56MB

test1-2

[ec2-user@ip-10-14-1-159 ~]$ ./wrk2elb.sh
Running 3m test @ http://54.199.198.200/1
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.64ms   23.34ms 317.92ms   99.21%
    Req/Sec     7.49k     1.31k   11.15k    76.40%
  1327794 requests in 3.00m, 284.91MB read
Requests/sec:   7376.59
Transfer/sec:      1.58MB

ELB Scaleout

  • Internal ELB instances 2 -> 4
  • Internet-facing DNS records 2 -> 2 (unchanged)
ELB health-check access observed from:
  • 10.14.1.47
  • 10.14.0.46
  • 10.14.1.221
  • 10.14.0.145
New ELB public IP addr:
$ dig ELBDNSNAME +short
54.64.133.218
54.64.172.150
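
A minimal sketch of how the health-check source list above can be collected on a backend instance, assuming an nginx access log at this (hypothetical) path; Classic ELB health checks identify themselves with an "ELB-HealthChecker" User-Agent:

$ grep 'ELB-HealthChecker' /var/log/nginx/access.log | awk '{print $1}' | sort -u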

Loadtest2

ELB IP addr

  • 54.64.172.150 -> 10.14.1.221 (ap-northeast-1c)
  • 54.64.133.218 -> 10.14.0.145 (ap-northeast-1a)

test2-1

[ec2-user@ip-10-14-1-159 ~]$ ./wrk2elb.sh
Running 3m test @ http://54.64.172.150/1
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    20.98ms   13.32ms 227.71ms   92.25%
    Req/Sec     5.25k     1.27k    9.25k    74.12%
  935016 requests in 3.00m, 200.63MB read
Requests/sec:   5194.38
Transfer/sec:      1.11MB

test2-2

[ec2-user@ip-10-14-1-159 ~]$ ./wrk2elb.sh
Running 3m test @ http://54.64.172.150/1
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.80ms    9.59ms 116.32ms   91.50%
    Req/Sec     5.46k     1.23k    8.71k    76.67%
  971446 requests in 3.00m, 208.45MB read
Requests/sec:   5396.92
Transfer/sec:      1.16MB

ELB Scaleout

  • Internal ELB instances 4 -> 6
  • Internet-facing DNS records 2 -> 2 (unchanged)
ELB health-check access observed from:
  • 10.14.1.47
  • 10.14.0.46
  • 10.14.1.221
  • 10.14.0.145
  • 10.14.1.238
  • 10.14.0.57
New ELB public IP addr:
$ dig ELBDNSNAME +short
54.238.128.213
54.238.153.39

Loadtest3

ELB IP addr

  • 54.238.153.39 -> 10.14.1.238 (ap-northeast-1c)
  • 54.238.128.213 -> 10.14.0.57 (ap-northeast-1a)

test3-1

[ec2-user@ip-10-14-1-159 ~]$ ./wrk2elb.sh
Running 3m test @ http://54.238.153.39/1
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.11ms    3.96ms  71.16ms   79.88%
    Req/Sec     6.42k     0.94k    8.68k    71.66%
  1140660 requests in 3.00m, 244.76MB read
Requests/sec:   6337.00
Transfer/sec:      1.36MB

test3-2

[ec2-user@ip-10-14-1-159 ~]$ ./wrk2elb.sh
Running 3m test @ http://54.238.153.39/1
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.11ms    4.00ms  63.85ms   80.38%
    Req/Sec     6.43k     0.95k    9.30k    73.10%
  1142916 requests in 3.00m, 245.24MB read
Requests/sec:   6349.53
Transfer/sec:      1.36MB

ELB Scaleout

  • Internal ELB instances 6 -> 8
  • Internet-facing DNS records 2 -> 2 (unchanged)
ELB health-check access observed from:
  • 10.14.1.47
  • 10.14.0.46
  • 10.14.1.221
  • 10.14.0.145
  • 10.14.1.238
  • 10.14.0.57
  • 10.14.1.176
  • 10.14.0.168
$ dig ELBDNSNAME +short
54.65.186.86
54.65.5.225

Final ELB IP addr

  • 54.65.5.225 -> 10.14.1.176 (ap-northeast-1c)
  • 54.65.186.86 -> 10.14.0.168 (ap-northeast-1a)

Summary

ELB scale-out/scale-up is supposed to be driven by HTTP requests from clients. However, this test indicates that the Internet-facing capacity of the ELB did not increase with client access. Instead, each ELB instance was removed from the ELB's DNS record set and two new instances' A records were added in their place; it looks as if the ELB cluster was rotated.

This behavior is contrary to my intuition. I expected scale-out to add A records to the record set, or scale-up to rotate the A records onto larger instances. Instead, all ELB instances appeared to have the same capacity, and each old ELB instance was simply removed from the ELB. I think this was caused by the load being unevenly distributed across the Availability Zones.
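
A minimal sketch of how this rotation can be recorded over time (the script name and ELBDNSNAME are placeholders, and the interval is arbitrary): log the A-record set periodically so that dropped and added nodes show up as diffs between timestamped blocks.

$ cat digwatch.sh
#!/bin/bash
# Log the ELB A-record set once a minute; rotations appear as changes
# between consecutive timestamped blocks.
while true; do
    echo "--- $(date -u +%FT%TZ)"
    dig ELBDNSNAME +short | sort
    sleep 60
done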

Note:
The ELB DNS name was not included in the request Host header during this test. I will try that next time; a sketch of such a run follows.
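
A hedged sketch of that next run, reusing the wrk2elb.sh setup: wrk's -H flag adds an arbitrary request header, so the ELB DNS name can be sent as the Host header while still targeting a single endpoint IP. The script name, ELBDNSNAME, and the target address are placeholders.

$ cat wrk2elb_host.sh
#!/bin/bash
# Same load as wrk2elb.sh, but with the ELB DNS name in the Host header.
taskset 1 /path/to/wrk -t1 -c100 -d60 \
    -H "Host: ELBDNSNAME" http://single-ip-addr-of-elb/1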

Todo:

  • Run simultaneous, equal load tests against all Availability Zones (see the sketch after this list)
  • Send load with a Host header that contains the DNS name of the ELB
  • Try a load test that stresses network bandwidth
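
For the first item, a minimal sketch assuming the same wrk setup: resolve every current A record of the ELB and drive an identical wrk run against each in parallel, so that both Availability Zones receive equal load. The script name and ELBDNSNAME are placeholders.

$ cat wrk2all.sh
#!/bin/bash
# Put identical, simultaneous load on every ELB node currently in DNS.
for ip in $(dig ELBDNSNAME +short); do
    /path/to/wrk -t1 -c100 -d60 "http://$ip/1" &
done
wait    # block until all parallel wrk runs finish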

2matz
