
@yoppi
Last active May 12, 2017 04:17
DoS for TCP
package main

import (
	"bytes"
	"flag"
	"fmt"
	"net"
	"runtime"
	"strconv"
	"sync"
	"time"
)

var (
	workerNum int
	host      string
	port      string
	wg        sync.WaitGroup
)

const maxConn = 10000

func init() {
	flag.IntVar(&workerNum, "worker", 8, "Number of workers")
	flag.StringVar(&host, "host", "127.0.0.1", "Host for server")
	flag.StringVar(&port, "port", "6379", "Port for server")
	flag.Parse()
}

// getGoroutineID parses the current goroutine's ID out of the stack
// header ("goroutine N [running]: ...").
func getGoroutineID() uint64 {
	b := make([]byte, 64)
	b = b[:runtime.Stack(b, false)]
	b = bytes.TrimPrefix(b, []byte("goroutine "))
	b = b[:bytes.IndexByte(b, ' ')]
	n, _ := strconv.ParseUint(string(b), 10, 64)
	return n
}

// tcpDos dials host:port at a fixed rate, keeps every connection open,
// and reports how long this worker took to reach maxConn connections.
func tcpDos() {
	defer wg.Done()

	var connSize int
	var connErr int
	//var conns map[int]net.Conn // fd : net.Conn
	startAt := time.Now()
	connCh := make(chan net.Conn)
	errCh := make(chan error)
	ticker := time.NewTicker(100 * time.Microsecond)
	defer ticker.Stop()

	for connSize < maxConn {
		select {
		case <-connCh:
			connSize++
		case <-errCh:
			// errors are counted here to avoid a data race on connErr
			connErr++
		case <-ticker.C:
			go func() {
				conn, err := net.Dial("tcp", host+":"+port)
				if err != nil {
					errCh <- err
					return
				}
				connCh <- conn // do we close the connection? keep it open for now
			}()
		}
	}

	fmt.Printf("[%d] TIME:%f CONN:%d ERR:%d\n",
		getGoroutineID(), time.Since(startAt).Seconds(), connSize, connErr)
}

func main() {
	for i := 0; i < workerNum; i++ {
		wg.Add(1)
		go tcpDos()
	}
	wg.Wait()
}

yoppi commented Mar 16, 2017

At first I was testing connections against Redis, but I wanted to try memcached as well, so I made the tool generic.
It measures how long each worker takes to establish 10,000 connections.
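
Build and run it like this, for example (assuming the source is saved as main.go; the flags shown are just the defaults):

$ go build -o main main.go
$ ./main -worker 8 -host 127.0.0.1 -port 6379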


yoppi commented Mar 16, 2017

memcached

$ ./main -host 127.0.0.1 -port 11211
[21] TIME:9.619331 CONN:10000 ERR:0
[19] TIME:9.628394 CONN:10000 ERR:0
[23] TIME:9.638412 CONN:10000 ERR:0
[20] TIME:9.647182 CONN:10000 ERR:0
[22] TIME:9.651593 CONN:10000 ERR:0
[24] TIME:9.656730 CONN:10000 ERR:0
[25] TIME:9.673353 CONN:10000 ERR:0
[26] TIME:9.675355 CONN:10000 ERR:0

Redis

$ ./main -host 127.0.0.1 -port 6379
[9] TIME:68.424164 CONN:10000 ERR:0
[10] TIME:68.485397 CONN:10000 ERR:0
[7] TIME:68.497200 CONN:10000 ERR:0
[6] TIME:68.498398 CONN:10000 ERR:0
[5] TIME:68.735443 CONN:10000 ERR:0
[11] TIME:69.186928 CONN:10000 ERR:0
[12] TIME:69.477997 CONN:10000 ERR:0
[8] TIME:69.536194 CONN:10000 ERR:0


yoppi commented Mar 16, 2017

TODO: investigate whether this overwhelming slowness is specific to Redis and whether there is room for tuning.


yoppi commented Mar 17, 2017

Redis is single-threaded and memcached uses a multi-threaded (+libevent) model, but even so the gap looks far too large. Note that memcached is started with -t 1:

$ memcached -t 1 -c 10000 -d


yoppi commented Apr 1, 2017

I hadn't realized that the default value of net.core.somaxconn in a Docker container is 128. After raising it to 65535 with the --sysctl option and re-running the benchmark, Redis comes out ahead.
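
For reference, the container can be started with the larger backlog roughly like this (the Redis image and port mapping are only an illustration):

$ docker run --sysctl net.core.somaxconn=65535 -p 6379:6379 -d redis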

Redis

$ ./main
[5] TIME:32.012009 CONN:10000 ERR:0
[6] TIME:32.012558 CONN:10000 ERR:0
[11] TIME:32.012852 CONN:10000 ERR:0
[7] TIME:32.012989 CONN:10000 ERR:0
[8] TIME:32.013863 CONN:10000 ERR:0
[9] TIME:32.017174 CONN:10000 ERR:0
[12] TIME:32.017746 CONN:10000 ERR:0
[10] TIME:32.065727 CONN:10000 ERR:0

memcached

$ ./main -port 11211 # -t 1
[11] TIME:69.621826 CONN:10000 ERR:0
[12] TIME:69.762764 CONN:10000 ERR:0
[5] TIME:69.805656 CONN:10000 ERR:0
[10] TIME:69.852615 CONN:10000 ERR:0
[7] TIME:69.944725 CONN:10000 ERR:0
[6] TIME:69.957588 CONN:10000 ERR:0
[9] TIME:70.059830 CONN:10000 ERR:0
[8] TIME:70.076202 CONN:10000 ERR:0
$ ./main -port 11211 # -t 4
[11] TIME:64.573219 CONN:10000 ERR:0
[8] TIME:64.575333 CONN:10000 ERR:0
[5] TIME:64.584305 CONN:10000 ERR:0
[10] TIME:64.586678 CONN:10000 ERR:0
[9] TIME:64.591965 CONN:10000 ERR:0
[12] TIME:64.609077 CONN:10000 ERR:0
[6] TIME:64.611282 CONN:10000 ERR:0
[7] TIME:64.617073 CONN:10000 ERR:0


yoppi commented Apr 3, 2017

I had forgotten to pass the -c option to memcached.

$ ./main -port 11211 # memcached -t 1 -c 10000 -d
[10] TIME:6.396081 CONN:10000 ERR:0
[12] TIME:6.398925 CONN:10000 ERR:0
[5] TIME:6.425556 CONN:10000 ERR:0
[8] TIME:6.439693 CONN:10000 ERR:0
[6] TIME:6.448411 CONN:10000 ERR:0
[11] TIME:6.448479 CONN:10000 ERR:0
[9] TIME:6.453142 CONN:10000 ERR:0
[7] TIME:6.456842 CONN:10000 ERR:0

Hmm. memcached really is fast after all.


yoppi commented Apr 3, 2017

Raising MAX_ACCEPTS_PER_CALL in Redis's src/networking.c from 1000 to 10000 improves performance slightly.

$ ./main
[12] TIME:24.675478 CONN:10000 ERR:0
[6] TIME:24.675638 CONN:10000 ERR:0
[5] TIME:24.680416 CONN:10000 ERR:0
[10] TIME:24.680794 CONN:10000 ERR:0
[11] TIME:24.686491 CONN:10000 ERR:0
[7] TIME:24.686701 CONN:10000 ERR:0
[9] TIME:24.686735 CONN:10000 ERR:0
[8] TIME:24.702139 CONN:10000 ERR:0


yoppi commented Apr 26, 2017

I think I'm starting to see why Redis is slow. It frees the client object for each connection, but connections are tracked in a doubly linked list, and locating the client there costs O(n) every time, so as the number of concurrent connections grows this cost starts to dominate.
memcached does nothing special: when a connection's state changes it simply closes it, which keeps things simple.
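
To get a feel for the cost difference described above, here is a minimal Go sketch (not Redis or memcached code; the names and counts are made up): removing an entry from a doubly linked list by searching for it first is O(n) per removal, while keeping a direct handle to the list element makes each removal O(1).

package main

import (
	"container/list"
	"fmt"
	"time"
)

const clients = 10000

func main() {
	// Search-then-remove: for each "client", scan the list to find its
	// element before removing it. Each search is O(n), so tearing down
	// all clients costs O(n^2) overall.
	l := list.New()
	for i := 0; i < clients; i++ {
		l.PushBack(i)
	}
	start := time.Now()
	for i := clients - 1; i >= 0; i-- { // free the newest clients first
		for e := l.Front(); e != nil; e = e.Next() {
			if e.Value.(int) == i {
				l.Remove(e)
				break
			}
		}
	}
	fmt.Println("search-then-remove:", time.Since(start))

	// Remove-by-handle: keep a pointer to each element when it is added,
	// so removal is O(1) no matter how many connections are open.
	l = list.New()
	handles := make([]*list.Element, clients)
	for i := 0; i < clients; i++ {
		handles[i] = l.PushBack(i)
	}
	start = time.Now()
	for i := clients - 1; i >= 0; i-- {
		l.Remove(handles[i])
	}
	fmt.Println("remove-by-handle: ", time.Since(start))
}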


yoppi commented May 12, 2017

redis
memcached
