Logging set at level: DEBUG
Logging to file: ratite_align.log
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/pseHum.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/taeGut.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/ficAlb.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/corBra.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/melUnd.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/falPer.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/picPub.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/lepDis.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/halLeu.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/aptFor.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/pygAde.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/fulGla.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/nipNip.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/balReg.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/chaVoc.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/calAnn.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/chaPel.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/cucCan.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/colLiv.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/mesUni.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/galGal.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/melGal.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/anaPla.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/aptHaa.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/aptOwe.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/aptRow.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/casCas.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/droNov.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/cryCin.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/tinGut.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/eudEle.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/notPer.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/rheAme.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/rhePen.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/strCam.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/allMis.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/allSin.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/croPor.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/gavGan.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/chrPic.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/cheMyd.fa
Running the command: cactus_analyseAssembly /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes/anoCar.fa
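
Each genome under refGenomes/ gets its own cactus_analyseAssembly call before the alignment starts. A minimal sketch of how that per-genome loop could be reproduced from Python (the directory path and the command invocation are taken from the log lines above; the loop itself is illustrative, not the wrapper script actually used):

```python
import glob
import subprocess

# Path copied from the log; the glob/loop is only an illustration of how one
# might rerun the per-genome assembly checks by hand.
REF_DIR = "/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/refGenomes"

for fasta in sorted(glob.glob(REF_DIR + "/*.fa")):
    # cactus_analyseAssembly takes a single FASTA file, exactly as shown in
    # the "Running the command:" lines above.
    subprocess.check_call(["cactus_analyseAssembly", fasta])
```
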
Running the command: rm -rf ./ratiteDir/jobTree
Running the command: . /n/home12/tsackton/cuff/progressiveCactus/bin/../src/../environment && cactus_progressive.py --jobTree ./ratiteDir/jobTree --stats --batchSystem "slurm" --maxThreads "60" --maxCpus "1000" --bigBatchSystem "singleMachine" --bigMemoryThreshold "50000000000" --jobTime "20000" --maxLogFileSize "10000000" --slurm-partition "serial_requeue" --slurm-scriptpath "slurm_scripts" --slurm-time "480" ./ratiteDir/progressiveAlignment/progressiveAlignment_project.xml >> ./ratiteDir/cactus.log 2>&1
Running the command: jobTreeStatus --failIfNotComplete --jobTree ./ratiteDir/jobTree > /dev/null 2>&1
Error: Command: jobTreeStatus --failIfNotComplete --jobTree ./ratiteDir/jobTree > /dev/null 2>&1 exited with non-zero status 1
Temporary data was left in: ./ratiteDir
More information can be found in ./ratiteDir/cactus.log
Continuing existing alignment. Use --overwrite or erase the working directory to force restart from scratch.
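
The non-zero exit from jobTreeStatus --failIfNotComplete is how the wrapper detects that the previous run left unfinished jobs, which is why the jobTree directory is kept and the alignment is resumed rather than restarted. A hedged sketch of that check, using only the command and flags shown in the log (the wrapper's real implementation may differ):

```python
import subprocess

def jobtree_complete(jobtree_dir):
    """Return True if every job in the jobTree finished.

    Mirrors the check logged above: jobTreeStatus exits non-zero while
    incomplete or failed jobs remain, which triggers the resume path.
    """
    status = subprocess.call(
        ["jobTreeStatus", "--failIfNotComplete", "--jobTree", jobtree_dir],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return status == 0

if not jobtree_complete("./ratiteDir/jobTree"):
    print("Previous run incomplete; continuing existing alignment")
```
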
2015-04-09 10:50:25.032496: Beginning Progressive Cactus Alignment
Got message from job at time: 1428592320.15 : Running blast using the trimming strategy
Got message from job at time: 1428592320.15 : Ingroup sequences: ['/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/renamedInputs/taeGut.fa_1', '/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/renamedInputs/pseHum.fa_0', '/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/renamedInputs/ficAlb.fa_2']
Got message from job at time: 1428592320.15 : Outgroup sequences: ['/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/renamedInputs/halLeu.fa_8', '/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/renamedInputs/nipNip.fa_12', '/n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/renamedInputs/aptFor.fa_9']
Got message from job at time: 1428592320.15 : Blasting ingroups vs outgroups to file /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/gTD31/tmp_i0Ou5IeFxm/tmp_ev2JUOX0vsunconvertedAlignments
The job seems to have left a log file, indicating failure: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t3/t0/job
Reporting file: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t3/t0/log.txt
log.txt: ---JOBTREE SLAVE OUTPUT LOG---
log.txt:
log.txt: bunzip2: I/O or other error, bailing out. Possible reason follows.
log.txt: bunzip2: No space left on device
log.txt: Input file = /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/41.bz2, output file = (stdout)
log.txt: Traceback (most recent call last):
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 271, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 385, in run
log.txt: self.seqFile1 = decompressFastaFile(self.seqFile1 + ".bz2", os.path.join(self.getLocalTempDir(), "1.fa"))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 370, in decompressFastaFile
log.txt: system("bunzip2 --stdout %s > %s" % (fileName, tempFileName))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/sonLib/bioio.py", line 184, in system
log.txt: raise RuntimeError("Command: %s exited with non-zero status %i" % (command, sts))
log.txt: RuntimeError: Command: bunzip2 --stdout /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/41.bz2 > /tmp/tmpQNAueu/localTempDir/1.fa exited with non-zero status 1
log.txt: Exiting the slave because of a failed job on host holy2b07208.rc.fas.harvard.edu
log.txt: Due to failure we are reducing the remaining retry count of job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t3/t0/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t3/t0/job is completely failed
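
All of the failures below share the same root cause visible in this traceback: decompressFastaFile in cactus_blast.py shells out to bunzip2 and writes the unpacked FASTA chunk into the slave's local temp directory (here under /tmp on the compute node), and that filesystem ran out of space. A minimal sketch of a guarded version, assuming the only change is a free-space check before invoking bunzip2 (per the traceback, the real cactus_blast.py performs no such check, and the 4x expansion factor is an assumption for illustration):

```python
import os
import shutil
import subprocess

def decompress_fasta_guarded(bz2_path, out_path, slack=1.10):
    """Decompress a .bz2 FASTA chunk, failing early if the target
    filesystem clearly lacks room for the output.

    Assumption for this sketch: bzip2 compresses FASTA several-fold, so we
    require roughly 4x the compressed size (plus slack) to be free.
    """
    needed = int(os.path.getsize(bz2_path) * 4 * slack)
    free = shutil.disk_usage(os.path.dirname(out_path) or ".").free
    if free < needed:
        raise RuntimeError(
            "Refusing to decompress %s: ~%d bytes needed, only %d free"
            % (bz2_path, needed, free)
        )
    # Same underlying command the traceback shows: bunzip2 --stdout <in> > <out>
    with open(out_path, "wb") as out:
        subprocess.check_call(["bunzip2", "--stdout", bz2_path], stdout=out)
    return out_path
```
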
The job seems to have left a log file, indicating failure: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t0/t3/t1/job
Reporting file: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t0/t3/t1/log.txt
log.txt: ---JOBTREE SLAVE OUTPUT LOG---
log.txt:
log.txt: bunzip2: I/O or other error, bailing out. Possible reason follows.
log.txt: bunzip2: No space left on device
log.txt: Input file = /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/46.bz2, output file = (stdout)
log.txt: Traceback (most recent call last):
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 271, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 386, in run
log.txt: self.seqFile2 = decompressFastaFile(self.seqFile2 + ".bz2", os.path.join(self.getLocalTempDir(), "2.fa"))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 370, in decompressFastaFile
log.txt: system("bunzip2 --stdout %s > %s" % (fileName, tempFileName))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/sonLib/bioio.py", line 184, in system
log.txt: raise RuntimeError("Command: %s exited with non-zero status %i" % (command, sts))
log.txt: RuntimeError: Command: bunzip2 --stdout /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/46.bz2 > /tmp/tmpkzYJT3/localTempDir/2.fa exited with non-zero status 1
log.txt: Exiting the slave because of a failed job on host holy2b07208.rc.fas.harvard.edu
log.txt: Due to failure we are reducing the remaining retry count of job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t0/t3/t1/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t0/t3/t1/job is completely failed
Batch system is reporting that the job (1, 1705) /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t1/job failed with exit value 1
No log file is present, despite job failing: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t1/job
Due to failure we are reducing the remaining retry count of job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t1/job to 0
We have set the default memory of the failed job to 2147483648 bytes
Job: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t1/job is completely failed
The job seems to have left a log file, indicating failure: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t2/t1/job
Reporting file: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t2/t1/log.txt
log.txt: ---JOBTREE SLAVE OUTPUT LOG---
log.txt:
log.txt: bunzip2: I/O or other error, bailing out. Possible reason follows.
log.txt: bunzip2: No space left on device
log.txt: Input file = /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/61.bz2, output file = (stdout)
log.txt: Traceback (most recent call last):
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 271, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 386, in run
log.txt: self.seqFile2 = decompressFastaFile(self.seqFile2 + ".bz2", os.path.join(self.getLocalTempDir(), "2.fa"))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 370, in decompressFastaFile
log.txt: system("bunzip2 --stdout %s > %s" % (fileName, tempFileName))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/sonLib/bioio.py", line 184, in system
log.txt: raise RuntimeError("Command: %s exited with non-zero status %i" % (command, sts))
log.txt: RuntimeError: Command: bunzip2 --stdout /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/61.bz2 > /tmp/tmpQzwW8l/localTempDir/2.fa exited with non-zero status 1
log.txt: Exiting the slave because of a failed job on host holy2b07208.rc.fas.harvard.edu
log.txt: Due to failure we are reducing the remaining retry count of job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t2/t1/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t1/t2/t2/t2/t1/job is completely failed
The job seems to have left a log file, indicating failure: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t0/job
Reporting file: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t0/log.txt
log.txt: ---JOBTREE SLAVE OUTPUT LOG---
log.txt:
log.txt: bunzip2: I/O or other error, bailing out. Possible reason follows.
log.txt: bunzip2: No space left on device
log.txt: Input file = /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/44.bz2, output file = (stdout)
log.txt: Traceback (most recent call last):
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 271, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 385, in run
log.txt: self.seqFile1 = decompressFastaFile(self.seqFile1 + ".bz2", os.path.join(self.getLocalTempDir(), "1.fa"))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 370, in decompressFastaFile
log.txt: system("bunzip2 --stdout %s > %s" % (fileName, tempFileName))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/sonLib/bioio.py", line 184, in system
log.txt: raise RuntimeError("Command: %s exited with non-zero status %i" % (command, sts))
log.txt: RuntimeError: Command: bunzip2 --stdout /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/44.bz2 > /tmp/tmpsH3Bgr/localTempDir/1.fa exited with non-zero status 1
log.txt: Exiting the slave because of a failed job on host holy2b07208.rc.fas.harvard.edu
log.txt: Due to failure we are reducing the remaining retry count of job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t0/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t0/job is completely failed
The job seems to have left a log file, indicating failure: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t1/job
Reporting file: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t1/log.txt
log.txt: ---JOBTREE SLAVE OUTPUT LOG---
log.txt:
log.txt: bunzip2: I/O or other error, bailing out. Possible reason follows.
log.txt: bunzip2: No space left on device
log.txt: Input file = /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/48.bz2, output file = (stdout)
log.txt: Traceback (most recent call last):
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 271, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 386, in run
log.txt: self.seqFile2 = decompressFastaFile(self.seqFile2 + ".bz2", os.path.join(self.getLocalTempDir(), "2.fa"))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/cactus/blast/cactus_blast.py", line 370, in decompressFastaFile
log.txt: system("bunzip2 --stdout %s > %s" % (fileName, tempFileName))
log.txt: File "/n/home12/tsackton/cuff/progressiveCactus/submodules/sonLib/bioio.py", line 184, in system
log.txt: raise RuntimeError("Command: %s exited with non-zero status %i" % (command, sts))
log.txt: RuntimeError: Command: bunzip2 --stdout /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/gTD0/tmp_DC33apZjig/chunks/48.bz2 > /tmp/tmp0y0tJQ/localTempDir/2.fa exited with non-zero status 1
log.txt: Exiting the slave because of a failed job on host holy2b07208.rc.fas.harvard.edu
log.txt: Due to failure we are reducing the remaining retry count of job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t1/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t3/t2/t0/t1/t0/t1/job is completely failed
Batch system is reporting that the job (1, 2479) /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t1/t1/t1/t2/t2/t1/job failed with exit value 1
Despite the batch system claiming failure the job /n/regal/edwards_lab/ratites/wga/ratite_align_sr2/ratiteDir/jobTree/jobs/t0/t1/t1/t1/t2/t2/t1/job seems to have finished and been removed
2015-04-10 09:06:25.613194: Finished Progressive Cactus Alignment