Spark and Hadoop Troubleshooting

java.io.IOException: All datanodes are bad

Make sure ulimit -n (the maximum number of open file descriptors) is set high enough; currently experimenting with 1000000.

To do so, check/edit /etc/security/limits.conf.
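
For example, a minimal sketch of the corresponding limits.conf entries, assuming the Hadoop/Spark daemons run as user hadoop (adjust the user name and the value to your setup):

    # /etc/security/limits.conf
    # <domain>  <type>  <item>   <value>
    hadoop      soft    nofile   1000000
    hadoop      hard    nofile   1000000

Verify with ulimit -n in a fresh login session of that user; already-running daemons must be restarted to pick up the new limit.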

java.lang.IllegalArgumentException: Self-suppression not permitted

You can ignore this kind of exception.
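
For background, this is general JDK behavior rather than anything Hadoop-specific: try-with-resources produces it when close() rethrows the very same exception instance that already aborted the block, and the JDK refuses to let an exception suppress itself. The real failure is attached as the cause. A minimal sketch reproducing the message:

    import java.io.IOException;

    public class SelfSuppressionDemo {
        public static void main(String[] args) {
            IOException primary = new IOException("disk write failed");
            try {
                // try-with-resources effectively does this when close()
                // rethrows the exception that already aborted the block:
                primary.addSuppressed(primary);
            } catch (IllegalArgumentException e) {
                // prints: java.lang.IllegalArgumentException: Self-suppression not permitted
                System.out.println(e);
                // the underlying failure is preserved as the cause
                System.out.println("cause: " + e.getCause());
            }
        }
    }

So when it does show up in your logs, look at its cause for the underlying (usually I/O) error.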

java.io.IOException: Unable to close file because the last block does not have enough number of replicas.

File could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.

If the workload succeeds with less data, you are most probably running out of disk space on the datanodes.
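
To confirm, check the remaining DFS capacity with the stock HDFS tools:

    # per-datanode report, including "DFS Remaining"
    hdfs dfsadmin -report

    # overall filesystem usage
    hdfs dfs -df -h /

Also check the local disks backing dfs.datanode.data.dir and the dfs.datanode.du.reserved setting, which keeps part of each volume off-limits to HDFS.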
