@mgagliardo91
Last active June 27, 2018 14:35
ANALYTICS-1932: Frequent EOFExceptions

ANALYTICS-1932

There seem to be two different EOFExceptions being thrown: java.io.EOFException and org.eclipse.jetty.io.EofException. The main file in play here (used by both the client and server side) is CommitSpool.java

org.eclipse.jetty.io.EofException

This exception is thrown to indicate an early, unexpected EOF. It is most commonly thrown at the start of an ATH test against staging, but it can also be found in prod with clients other than CloudBees. Following the stack trace, it appears that while reading the input stream, a read is requested with a valid size (determined from what remains to be read), but the read() call throws because the underlying stream returned -1 here.

I think it would be safe to handle the exception gracefully on the server side, while still indicating to the client that the upload wasn't successful.
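As a rough illustration of that idea, something along these lines could wrap the call to CommitSpool#receiveUpload(). This is only a sketch: the endpoint class, logger, and response wiring below are hypothetical, since the real server-side caller isn't shown in this gist.

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.io.EofException;

// Hypothetical wrapper around CommitSpool.receiveUpload(); names are illustrative only.
class CommitUploadEndpointSketch {

    private static final Logger LOGGER = Logger.getLogger(CommitUploadEndpointSketch.class.getName());

    void handleUpload(InputStream body, HttpServletResponse rsp) throws IOException {
        try {
            CommitSpool.receiveUpload(body, commit -> {
                // process the parsed RevCommit as the real endpoint does today
            });
            rsp.setStatus(HttpServletResponse.SC_OK);
        } catch (EofException | EOFException e) {
            // Truncated or empty request body: log quietly instead of letting the
            // exception bubble up as a noisy 500, but still fail the request so the
            // client knows the upload did not succeed.
            LOGGER.log(Level.FINE, "incomplete commit upload", e);
            rsp.sendError(HttpServletResponse.SC_BAD_REQUEST, "incomplete commit upload");
        }
    }
}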

java.io.EOFException

This one is more interesting, since the error is thrown during the construction of the GZIPInputStream, before an external read is even attempted. When CommitSpoolClient starts the upload inside its send() method, the first thing it does is wrap the output stream in a GZIPOutputStream which, during construction, writes the GZIP header. On the server side, the exception is thrown in CommitSpool#receiveUpload() when the input stream is wrapped in a GZIPInputStream which, during construction, attempts to read that same GZIP header. That read returns -1, indicating an empty stream, and the constructor throws java.io.EOFException.
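For reference, this header behavior can be reproduced with a small standalone demo (plain JDK code, not project code):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipHeaderDemo {
    public static void main(String[] args) throws IOException {
        // Constructing a GZIPOutputStream writes the 10-byte GZIP header immediately,
        // before any payload is written.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new GZIPOutputStream(out);
        System.out.println("bytes written by the constructor: " + out.size());

        // Constructing a GZIPInputStream over an empty stream fails while reading that
        // header: the underlying read() returns -1 and the constructor throws
        // java.io.EOFException, matching what receiveUpload() sees.
        try {
            new GZIPInputStream(new ByteArrayInputStream(new byte[0]));
        } catch (EOFException e) {
            System.out.println(e); // java.io.EOFException
        }
    }
}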

I've been looking at the client to try to determine where an empty output stream could possibly occur, but it seems like the output stream always writes that GZIP header.

public static void receiveUpload(InputStream in, ConsumerWithException<RevCommit> receiver) throws IOException {
    try (GZIPInputStream gz = new GZIPInputStream(in)) { // << The java.io.EOFException is thrown here
        int size;
        while (0 <= (size = readRecordSize(gz))) {
            // RevCommit.parse will take ownership of the buffer, so we need to allocate a new one each time
            byte[] buf = new byte[size];
            if (!readRecordData(gz, size, buf)) {
                throw new EOFException("Premature end of stream");
            }
            receiver.accept(RevCommit.parse(buf));
        }
    }
}

/*package*/ void send(OutputStream o) throws IOException {
    prepareSend();
    try (OutputStream os = new GZIPOutputStream(o)) { // << The headers are written here
        if (uploadFile().exists()) {
            byte[] buf = null;
            try (InputStream is = Files.newInputStream(uploadFile().toPath(), StandardOpenOption.READ)) {
                // only send complete records
                int size;
                while (0 <= (size = readRecordSize(is))) {
                    buf = resizeIfNecessary(buf, size);
                    if (!readRecordData(is, size, buf)) {
                        break;
                    }
                    writeRecordSize(os, size);
                    writeRecordData(os, size, buf);
                }
            }
        }
        // end of record marker
        os.write(new byte[]{-1, -1, -1, -1});
        os.flush();
    }
}

private static boolean readRecordData(InputStream is, int size, byte[] buf) throws IOException {
    int read = 0;
    while (read < size) {
        int count = is.read(buf, read, size - read); // << The jetty.io.EofException is occurring on this line during the read()
        if (count == -1) {
            // premature end of stream
            return false;
        }
        read += count;
    }
    return read == size;
}