Load Test Phoenix Presence

Phoenix Nodes

First I created 3 droplets on DigitalOcean, each with 4 cores and 8GB of RAM. Log in as root on each one and run:

sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
sudo dpkg -i erlang-solutions_1.0_all.deb
yes | sudo apt-get update
yes | sudo apt-get install elixir esl-erlang build-essential git gnuplot libtemplate-perl htop
echo "root soft nofile 4000000" >> /etc/security/limits.conf
echo "root hard nofile 4000000" >> /etc/security/limits.conf

Then I copied and compiled the application with:

rsync -avz --exclude _build . root@107.170.200.193:~/brokaw
ssh root@107.170.200.193
cd ~/brokaw
MIX_ENV=prod mix deps.compile
MIX_ENV=prod mix compile
# give me all the ulimits
sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
# start the app
vim config/prod.exs # modify check_origin to include http://138.68.250.167
PORT=4000 MIX_ENV=prod iex --name brokaw@138.68.250.167 --cookie watwat -S mix phoenix.server

From whichever node booted last, I would join the cluster like this:

iex> Node.ping(:"brokaw@138.68.250.87")
:pong
iex> Node.ping(:"brokaw@138.68.250.101")
:pong
iex> Node.list()
[..]
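
The gist doesn't include the brokaw application code, but each Tsung session (see brokaw.xml below) joins a user:<id> topic over the websocket, so the channel behind that topic presumably tracks each socket with Phoenix.Presence. A minimal sketch of what such a channel might look like, with Brokaw.UserChannel and Brokaw.Presence as assumed module names:

defmodule Brokaw.UserChannel do
  use Phoenix.Channel
  alias Brokaw.Presence

  # Each load-test client joins "user:<unique id>".
  def join("user:" <> user_id, _payload, socket) do
    # Defer Presence.track until after the join reply has been sent.
    send(self(), :after_join)
    {:ok, assign(socket, :user_id, user_id)}
  end

  def handle_info(:after_join, socket) do
    # Track this process under the channel's topic; the presence state is
    # then replicated to the other nodes in the cluster.
    {:ok, _ref} = Presence.track(socket, socket.assigns.user_id, %{
      online_at: System.system_time(:second)
    })

    {:noreply, socket}
  end
end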

Tsung Nodes

First I created 3 droplets on DigitalOcean, each with 4 cores and 8GB of RAM.

Note: whichever node acted as the Tsung coordinator ended up maxing out its CPU, so it's probably advisable to use something bigger next time. Log in as root on each one and run:

curl -sSL https://agent.digitalocean.com/install.sh | sh # install digital ocean metric tracker
wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
sudo dpkg -i erlang-solutions_1.0_all.deb
yes | sudo apt-get update
yes | sudo apt-get install elixir esl-erlang build-essential git gnuplot libtemplate-perl htop
wget http://tsung.erlang-projects.org/dist/tsung-1.6.0.tar.gz
tar -xvf tsung-1.6.0.tar.gz
cd tsung-1.6.0/
./configure
make
sudo make install
cd ..
sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
echo "root soft nofile 4000000" >> /etc/security/limits.conf
echo "root hard nofile 4000000" >> /etc/security/limits.conf
vim /etc/hosts # add entries for tsung1, tsung2 and tsung3 with the IPs we were assigned so the nodes can find each other

Then I created the brokaw.xml file (included below) on one of the tsung nodes and started the benchmark run with:

tsung -k -f brokaw.xml start

Results

Basic Benchmark

1 server (4 cores, 8GB RAM) on DigitalOcean, 1 load-test machine also on DigitalOcean

num users   check for online user (µs)   check for offline user (µs)
10          6.52                         6.71
100         14.44                        9.74
1000        10.78                        13.14
3000        34.52                        33.36
10000       31                           28.82
20000       31.89                        28.81
40000       32.33                        34.51
55000       48.65                        34.88
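
The two timing columns are Presence lookups for a topic that does have a tracked user versus one that doesn't. The exact measurement code isn't in the gist; a rough way to reproduce such a check from an iex session on a brokaw node, assuming the Brokaw.Presence module sketched above, would be:

# Hypothetical helper: time a Presence lookup, in microseconds.
time_check = fn topic ->
  {micros, presences} = :timer.tc(fn -> Brokaw.Presence.list(topic) end)
  {micros, map_size(presences)}
end

time_check.("user:42")          # online: a topic some load-test client joined
time_check.("user:no-such-id")  # offline: nothing tracked on this topic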

Rate at which new users could connect

I was attempting an arrival rate of 1k new connections per second.

[chart: arrivals_55k]

Total Connected Websockets

[chart: total_users_55k]

Multi-Node Benchmark

Setup details for this benchmark can be found above. During the test the brokaw nodes never used more than 50% of their CPU, but the load-testing client boxes were maxed out.

num users   check for online user (µs)   check for offline user (µs)   memory used (per node)
30          10.39                        9.87                          200MB
1000        12.07                        12.81                         215MB
10000       42.43                        42.23                         400MB
50000       76.94                        15.37                         1GB
100000      15.18                        15.64                         1.8GB
150000      17.03                        15.35                         2.2GB
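
The memory column was presumably read off the droplets themselves (e.g. via htop, which was installed above); a comparable per-node figure can also be pulled from inside the VM:

iex> div(:erlang.memory(:total), 1024 * 1024)  # total BEAM memory, in MB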

[chart: total_users_130k]

brokaw.xml

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/local/share/tsung/tsung-1.0.dtd">
<tsung loglevel="debug" version="1.0">
  <clients>
    <client host="tsung1" use_controller_vm="true" maxusers="55000" />
    <client host="tsung2" use_controller_vm="true" maxusers="55000" />
    <client host="tsung3" use_controller_vm="true" maxusers="55000" />
  </clients>
  <servers>
    <server host="138.68.250.87" port="4000" type="tcp" />
    <server host="138.68.250.101" port="4000" type="tcp" />
    <server host="138.68.250.201" port="4000" type="tcp" />
  </servers>
  <load>
    <arrivalphase phase="1" duration="120" unit="second">
      <users maxnumber="100000" arrivalrate="1500" unit="second" />
    </arrivalphase>
  </load>
  <options>
    <option name="ports_range" min="1025" max="65535"/>
  </options>
  <sessions>
    <session name="websocket" probability="100" type="ts_websocket">
      <request>
        <websocket type="connect" path="/socket/websocket"></websocket>
      </request>
      <request subst="true">
        <websocket type="message">{"topic":"user:%%ts_user_server:get_unique_id%%", "event":"phx_join", "payload": {}, "ref":"1"}</websocket>
      </request>
      <for var="i" from="1" to="10" incr="1">
        <thinktime value="10"/>
        <request>
          <websocket ack="no_ack" type="message">{"topic":"phoenix","event":"heartbeat","payload":{},"ref":"3"}</websocket>
        </request>
      </for>
    </session>
  </sessions>
</tsung>