@tomquas
Last active September 10, 2020 16:41
performance testing prosody, tigase, ejabberd
motivation
* test xmpp bot behavior with different servers; measure throughput and reliability.
* figure out numbers for prosody: nothing seemed to be published at the time of writing.
* the goal was _not_ to push the servers to their limits, so the test environment was not highly optimized.
test setup
* macbook pro hosting ubuntu raring on vmware fusion.
* communication flow: xmpp client (osx) > xmpp server (ubuntu) > xmpp bot (osx).
* client opens 20 connections to the server and pumps 150 iq stanzas (similar to disco#items) to the bot. this is repeated 40x.
* servers:
prosody 0.8.2 with use_libevent=true, c2s, tls=false, anon
tigase 5.1 with c2s, tls=false, anon
ejabberd 2.1.10-5, c2s, tls=false, anon
no database usage, empty rosters – well, anon users...
* all test runs happened in the same environment.
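for illustration, a minimal sketch of the traffic pattern described above: 20 connections, 150 disco#items-style iqs each, repeated 40x. this only builds the stanzas and the arithmetic; it makes no real xmpp connection, and the jid and id are placeholders (the actual pumping would use an xmpp library such as slixmpp):

```python
# sketch of the load pattern -- stanza construction only, no network i/o.
# jid and iq id below are placeholders, not values from the actual test.
import xml.etree.ElementTree as ET

CONNECTIONS = 20            # concurrent client connections
STANZAS_PER_CONNECTION = 150  # iq stanzas pumped per connection per run
RUNS = 40                   # whole cycle repeated 40x

def make_disco_iq(iq_id: str, to_jid: str) -> str:
    """Build a disco#items-style IQ like the ones pumped at the bot."""
    iq = ET.Element("iq", attrib={"type": "get", "id": iq_id, "to": to_jid})
    ET.SubElement(iq, "query",
                  attrib={"xmlns": "http://jabber.org/protocol/disco#items"})
    return ET.tostring(iq, encoding="unicode")

# 20 * 150 = 3000 stanzas per run, which matches the error-rate denominator
stanzas_per_run = CONNECTIONS * STANZAS_PER_CONNECTION
total_stanzas = stanzas_per_run * RUNS  # 120000 over all 40 runs

example = make_disco_iq("disco1", "bot@example.com")
print(example)
print(stanzas_per_run, total_stanzas)
```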
results (stripped top(1) output)
PR NI RES SHR S %CPU %MEM TIME+ nTH COMMAND
20 0 451m 12m S 115.7 19.9 11:36.61 290 java/tigase # peak
error rate: 0
PR NI RES SHR S %CPU %MEM TIME+ nTH COMMAND
20 0 3236 688 S 0.0 0.2 0:00.00 1 lua5.1/prosody # start
20 0 11m 960 S 18.3 0.5 0:05.67 1 lua5.1 # peak
20 0 10m 960 S 11.3 0.5 0:05.67 1 lua5.1 # average
error rate: 0
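for reference, the prosody instance measured above was run with use_libevent=true, anonymous auth, and no tls; a rough sketch of such a 0.8.x config (host name is a placeholder, and exact option names should be checked against the prosody docs):

```lua
-- rough sketch of a prosody 0.8.x config matching the setup above;
-- the virtual host name is a placeholder
use_libevent = true              -- epoll/kqueue via libevent instead of select
c2s_ports = { 5222 }
c2s_require_encryption = false   -- tls=false: plaintext c2s

VirtualHost "example.com"
    authentication = "anonymous" -- anon users, hence no rosters/database
```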
ejabberd tuned: POLL=true, SMP=enable, ERL_MAX_PORTS=100000, PROCESSES=250000:
PR NI RES SHR S %CPU %MEM TIME+ nTH COMMAND
20 0 35m 3252 S 0.0 1.2 0:00.84 11 beam/ejabberd # start
20 0 52m 3252 S 19.6 1.2 0:00.84 17 beam/ejabberd # peak
20 0 35m 3252 S 9.8 1.2 0:00.84 17 beam/ejabberd # average
error rate: ~750 out of 3000
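the tuning listed above goes into ejabberdctl.cfg (variable names as given in this gist; treat this fragment as a sketch, not a recommended production config):

```shell
# sketch of the ejabberdctl.cfg tuning used for the ejabberd run above
POLL=true             # kernel polling in the erlang vm
SMP=enable            # symmetric multiprocessing
ERL_MAX_PORTS=100000  # max simultaneously open erlang ports/sockets
PROCESSES=250000      # max erlang processes
```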
conclusion
given that i tested only 20 concurrent connections but focused on throughput, this test does not necessarily reflect your average case. most of the time, you want lots of concurrent connections with fewer messages going over them. such a test requires a different hardware setup.
anyway, without putting too much meaning into these numbers,
* i was surprised by how nicely prosody handled the test scenario; system resource usage stayed low during the test, and it delivered reliable performance.
* i was pretty much disappointed to see ejabberd fail to keep up with the pace; even though system resources were used in a moderate way, even a tuned instance could not deliver the reliability of prosody.
* i was pretty much disappointed by how tigase wastes system resources; its ridiculous memory footprint and high CPU usage are a no-go.
further impressions:
* prosody setup time: 15mins. ubuntu package, dead simple configuration, start, done.
* ejabberd setup time: 30mins. ubuntu package, rather simple configuration, but you first need to understand the parameters related to the erlang vm, start, done.
* tigase setup time: 2h+. tigase tarball, openjdk (ubuntu), requires tweaks to resolve jdk tls issues and to decouple from databases; a configuration nightmare with endless google searches for documentation until it starts w/o a stacktrace.
@AnsisMalins

Thank you for taking the time to do this and publishing the results for everyone to see.

@emkman

emkman commented Mar 14, 2014

Which part of the results shows ejabberd failing to keep up with the pace? The error rate? What constitutes an error? Thanks for this info.

@xjtufjj

xjtufjj commented May 11, 2015

thanks for your test of these three servers, it's helpful for me.
