A tentative analysis of why Shoutcast is spinning a full CPU core

A Shoutcast process (called sc_serv25, with PID 10213) is using way more CPU than expected for the workload - specifically, 100% of a CPU core. Let's investigate.

$ sudo strace -p 10213 -o ~/sc_serv25_strace.txt -f -t -tt -T
(control-c after a few seconds)

Looking at sc_serv25_strace.txt for weirdness, we notice thread 10221 is very hot. Here's an excerpt:

$ grep 10221 ~/sc_serv25_strace.txt
...
10221 1499065013.244010 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000009>
10221 1499065013.244043 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
10221 1499065013.244081 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000009>
10221 1499065013.244116 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
10221 1499065013.244143 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000009>
10221 1499065013.244176 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
10221 1499065013.244202 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000009>
10221 1499065013.244235 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000010>
10221 1499065013.244262 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000009>
10221 1499065013.244306 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
10221 1499065013.244332 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000008>
10221 1499065013.244364 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000008>
10221 1499065013.244389 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000009>
10221 1499065013.244420 recvfrom(75, 0x7f4a92fecc98, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
10221 1499065013.244445 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 1 ([{fd=75, revents=POLLIN}]) <0.000008>
10221 1499065013.244490 read(14, 0x7f4a92ffce00, 35) = -1 EAGAIN (Resource temporarily unavailable) <0.000014>
10221 1499065013.244532 recvfrom(75, "r\355\264\222H((\377\3630D\7\3\\\17\215,\30b\1\7x+\22XH\204\"\2\343\35\f"..., 16383, 0, NULL, NULL) = 78 <0.000029>
10221 1499065013.244610 recvfrom(75, 0x7f4a92fecc97, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000013>
10221 1499065013.244648 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000013>
10221 1499065013.244695 recvfrom(75, 0x7f4a92fecc97, 16383, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000013>
10221 1499065013.244732 poll([{fd=14, events=POLLIN}, {fd=75, events=POLLIN}], 2, 0) = 0 (Timeout) <0.000012>
...

(I have carefully selected the portion of output to show, for reasons that will soon be revealed...)

The signature for poll() can be retrieved from the man page, and is:

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

We see that poll() is being called to watch FDs 14 and 75 with a timeout of 0 - it returns very quickly (the first poll above takes <0.000009> seconds), and the return value of 0 indicates the timeout was reached, i.e. neither FD is ready. Almost immediately afterwards (.000033 seconds later), recvfrom() is called on FD 75 anyway, and returns quickly with EAGAIN, meaning no data was read.

Using lsof, we can see that FD 75 is a network socket:

$ sudo lsof -i -n -l -P -c sc_serv -a -p 10213 +fg
COMMAND     PID     USER   FD   TYPE FILE-FLAG    DEVICE SIZE/OFF NODE NAME
sc_serv25 10213     1003   69u  IPv4     RW,ND 958988977      0t0  TCP 50.117.1.62:8010 (LISTEN)
sc_serv25 10213     1003   70u  IPv4     RW,ND 958988978      0t0  TCP 50.117.1.62:8011 (LISTEN)
                           **
sc_serv25 10213     1003   75u  IPv4     RW,ND 959009075      0t0  TCP 50.117.1.62:8011->50.117.1.58:36854 (ESTABLISHED)
                           **
sc_serv25 10213     1003   76u  IPv4     RW,ND 959004833      0t0  TCP 69.46.67.10:44516->37.59.27.98:443 (ESTABLISHED)

The flags of RW and ND are defined as:

$ man lsof | egrep " RW | ND "
                       ND        no delay
                       RW        read and write access

At this point we pretty much have our proof, but let's double-check that this FD is non-blocking:

# cat /proc/10221/fdinfo/75
pos:	0
flags:	04002
mnt_id:	8

04000 (octal) is O_NONBLOCK; on Linux the flag values are defined in /usr/include/asm-generic/fcntl.h. So the 04002 above decodes as O_RDWR (02) | O_NONBLOCK (04000): the socket is indeed non-blocking.

Calling poll() with a timeout of zero in a tight loop like this is an anti-pattern: poll() returns immediately whether or not any FD is ready, and the program proceeds to call recvfrom() regardless. The vast majority of those recvfrom() calls result in EAGAIN because there's no data. Every once in a while there is data, such as the call at timestamp 1499065013.244532. Some quick grepping can reveal the ratio (note that strace only ran for a few seconds):

# number of recvfrom() calls from this thread that resulted in EAGAIN
$ egrep "10221.*recvfrom.*EAGAIN" ~/sc_serv25_strace.txt | wc -l
57720

# number of recvfrom() calls from this thread that did not result in EAGAIN
$ egrep "10221.*recvfrom" ~/sc_serv25_strace.txt| grep -v EAGAIN | wc -l
141

So that's 57720:141 - only about 1 in every 409 recvfrom() calls returns any data, and the poll()/recvfrom() pairs are being issued as fast as the CPU can execute them. That's the spinning core.

(time passes)

UPDATE: Turns out this server's shoutcast config included the following non-default setting: AutoDumpTime=0. The documentation for this config variable reads:

Specify how long before an idle connection is dumped from the server (in seconds) [Default = 30]

A value of zero means there is no timeout of an idle connection. If you set this too low then it is likely that valid sources will fail to connect during the initial stages of a source connection and client connections may timeout unexpectedly.

Shoutcast uses the AutoDumpTime value as the timeout for poll(), which is a bug: to poll(), a timeout of zero does not mean infinite - it means "return immediately" (-1 means infinite). The poll man page includes the following guidance about 'special' timeout values:

Note that the timeout interval will be rounded up to the system clock granularity, and kernel scheduling delays mean that the blocking interval may overrun by a small amount. Specifying a negative value in timeout means an infinite timeout. Specifying a timeout of zero causes poll() to return immediately, even if no file descriptors are ready.

Setting AutoDumpTime to 30 (the default value) alleviates the CPU spin: poll() now blocks for up to 30 seconds instead of returning immediately when no data is available.
