Last active: February 28, 2018 04:31
Gist: Georgehe4/2897f2ead685a9fcf014cbf01cee4375
Failed ADAM command using `--packages` with google-cloud-nio.
18/02/28 04:26:52 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1519790048472_0003
18/02/28 04:27:13 INFO org.bdgenomics.adam.rdd.ADAMContext: Loading gs://mango-initialization-bucket/HG01384.mapped.ILLUMINA.bwa.CLM.low_coverage.20120522.bam as BAM/CRAM/SAM and converting to AlignmentRecords.
18/02/28 04:27:14 INFO org.bdgenomics.adam.rdd.ADAMContext: Loaded header from gs://mango-initialization-bucket/HG01384.mapped.ILLUMINA.bwa.CLM.low_coverage.20120522.bam
18/02/28 04:27:17 INFO org.bdgenomics.adam.serialization.ADAMKryoRegistrator: Did not find Spark internal class. This is expected for earlier Spark versions.
18/02/28 04:27:18 INFO org.bdgenomics.adam.rdd.read.RDDBoundAlignmentRecordRDD: Saving data in ADAM format
18/02/28 04:27:19 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input files to process : 1
18/02/28 04:27:33 INFO org.bdgenomics.adam.serialization.ADAMKryoRegistrator: Did not find Spark internal class. This is expected for earlier Spark versions.
[Stage 0:> (0 + 4) / 78]18/02/28 04:27:40 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, adam-test-3-w-2.c.mango-bdgenomics.internal, executor 1): java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
	at com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider.newFileSystem(CloudStorageFileSystemProvider.java:218)
	at com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider.newFileSystem(CloudStorageFileSystemProvider.java:85)
	at java.nio.file.FileSystems.newFileSystem(FileSystems.java:336)
	at org.seqdoop.hadoop_bam.util.NIOFileUtil.asPath(NIOFileUtil.java:40)
	at org.seqdoop.hadoop_bam.BAMRecordReader.initialize(BAMRecordReader.java:144)
	at org.seqdoop.hadoop_bam.BAMInputFormat.createRecordReader(BAMInputFormat.java:211)
	at org.seqdoop.hadoop_bam.AnySAMInputFormat.createRecordReader(AnySAMInputFormat.java:190)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.liftedTree1$1(NewHadoopRDD.scala:180)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:179)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:134)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
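The `NoSuchMethodError` on `Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V` is the typical signature of a Guava version conflict: that overload only exists in newer Guava releases (20+), while Hadoop/YARN clusters usually put an older Guava first on the executor classpath, shadowing the version that `--packages com.google.cloud:google-cloud-nio` resolves. A common workaround is to ship the self-contained `shaded` classifier jar of google-cloud-nio (which relocates its own Guava) via `--jars` instead of pulling the unshaded artifact with `--packages`. The sketch below is one possible invocation, not the command from this gist; the version number, output path, and the `transformAlignments` subcommand are illustrative assumptions:

```shell
# Sketch of a possible workaround (versions and paths are assumptions):
# fetch the shaded google-cloud-nio jar, which bundles and relocates its own
# Guava so the cluster's older Guava no longer shadows the needed overload.
wget https://repo1.maven.org/maven2/com/google/cloud/google-cloud-nio/0.22.0/google-cloud-nio-0.22.0-shaded.jar

# Ship the shaded jar to the executors with --jars instead of resolving the
# unshaded artifact (and its Guava dependency) through --packages.
adam-submit \
  --jars google-cloud-nio-0.22.0-shaded.jar \
  -- \
  transformAlignments \
  gs://mango-initialization-bucket/HG01384.mapped.ILLUMINA.bwa.CLM.low_coverage.20120522.bam \
  out.adam
```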