Last active: November 24, 2016
"Core-dumped" in MnistMLPExample.java
**When I run the example in "dl4j-spark-examples/dl4j-spark/src/main/java/org.deeplearning4j/mlp/MnistMLPExample.java", I get the error below.
I have updated dl4j-examples to version 0.7.1.
Training the network works, but the error occurs during the evaluation phase.**
```
16/11/24 16:12:55 ERROR YarnScheduler: Lost executor 1 on longzhou-hdp4.lz.dscc: Container marked as failed: container_1479880322401_0153_02_000002 on host: longzhou-hdp4.lz.dscc. Exit status: 134. Diagnostics: Exception from container-launch.
Container id: container_1479880322401_0153_02_000002
Exit code: 134
Exception message: /bin/bash: line 1: 30037 Aborted (core dumped) /home/work/java//bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms8192m -Xmx8192m '-Dorg.deeplearning4j.spark.time.TimeSource=org.deeplearning4j.spark.time.SystemClockTimeSource' -Djava.io.tmpdir=/home/disk1/yarn/usercache/zhangliang/appcache/application_1479880322401_0153/container_1479880322401_0153_02_000002/tmp '-Dspark.driver.port=50840' -Dspark.yarn.app.container.log.dir=/home/hadoop/log/yarn/userlogs/application_1479880322401_0153/container_1479880322401_0153_02_000002 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.19.109:50840 --executor-id 1 --hostname longzhou-hdp4.lz.dscc --cores 8 --app-id application_1479880322401_0153 --user-class-path file:/home/disk1/yarn/usercache/zhangliang/appcache/application_1479880322401_0153/container_1479880322401_0153_02_000002/__app__.jar > /home/hadoop/log/yarn/userlogs/application_1479880322401_0153/container_1479880322401_0153_02_000002/stdout 2> /home/hadoop/log/yarn/userlogs/application_1479880322401_0153/container_1479880322401_0153_02_000002/stderr
Stack trace: ExitCodeException exitCode=134: /bin/bash: line 1: 30037 Aborted (core dumped) /home/work/java//bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms8192m -Xmx8192m '-Dorg.deeplearning4j.spark.time.TimeSource=org.deeplearning4j.spark.time.SystemClockTimeSource' -Djava.io.tmpdir=/home/disk1/yarn/usercache/zhangliang/appcache/application_1479880322401_0153/container_1479880322401_0153_02_000002/tmp '-Dspark.driver.port=50840' -Dspark.yarn.app.container.log.dir=/home/hadoop/log/yarn/userlogs/application_1479880322401_0153/container_1479880322401_0153_02_000002 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.19.109:50840 --executor-id 1 --hostname longzhou-hdp4.lz.dscc --cores 8 --app-id application_1479880322401_0153 --user-class-path file:/home/disk1/yarn/usercache/zhangliang/appcache/application_1479880322401_0153/container_1479880322401_0153_02_000002/__app__.jar > /home/hadoop/log/yarn/userlogs/application_1479880322401_0153/container_1479880322401_0153_02_000002/stdout 2> /home/hadoop/log/yarn/userlogs/application_1479880322401_0153/container_1479880322401_0153_02_000002/stderr
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 134
16/11/24 16:12:55 WARN TaskSetManager: Lost task 21.3 in stage 706.0 (TID 22695, longzhou-hdp4.lz.dscc): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container marked as failed: container_1479880322401_0153_02_000002 on host: longzhou-hdp4.lz.dscc. Exit status: 134. Diagnostics: Exception from container-launch.
[... identical container-launch diagnostics and stack trace as above ...]
16/11/24 16:12:55 ERROR TaskSetManager: Task 21 in stage 706.0 failed 4 times; aborting job
16/11/24 16:12:55 WARN TaskSetManager: Lost task 1.3 in stage 706.0 (TID 22689, longzhou-hdp4.lz.dscc): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container marked as failed: container_1479880322401_0153_02_000002 on host: longzhou-hdp4.lz.dscc. Exit status: 134. Diagnostics: Exception from container-launch.
[... identical container-launch diagnostics and stack trace as above ...]
16/11/24 16:12:55 WARN TaskSetManager: Lost task 9.3 in stage 706.0 (TID 22692, longzhou-hdp4.lz.dscc): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container marked as failed: container_1479880322401_0153_02_000002 on host: longzhou-hdp4.lz.dscc. Exit status: 134. Diagnostics: Exception from container-launch.
[... identical container-launch diagnostics and stack trace as above ...]
```
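Exit status 134 means the JVM process was terminated by SIGABRT (134 = 128 + 6), which matches the "Aborted (core dumped)" message in the launch diagnostics above. A small sketch of how to decode a shell-style exit status like this one:

```python
import signal

def decode_exit_status(status: int) -> str:
    """Map a shell-style exit status to a signal name if it encodes one.

    By shell convention, a status above 128 means "killed by signal
    (status - 128)"; anything at or below 128 is a normal exit code.
    """
    if status > 128:
        return signal.Signals(status - 128).name
    return f"exited normally with code {status}"

# The container's exit status from the log above:
print(decode_exit_status(134))  # SIGABRT: the process called abort()
```

So the executor JVM did not merely throw an exception; the native side of the process aborted, which is why YARN reports a container failure rather than a Java stack trace.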
**In the log of one Spark container, I find the following error:**
16/11/24 14:59:08 WARN reflections.Reflections: could not create Vfs.Dir from url. ignoring the exception and continuing
org.reflections.ReflectionsException: could not create Vfs.Dir from url, no matching UrlType was found [file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/lib/*]
either use fromURL(final URL url, final List<UrlType> urlTypes) or use the static setDefaultURLTypes(final List<UrlType> urlTypes) or addDefaultURLTypes(UrlType urlType) with your specialized UrlType.
at org.reflections.vfs.Vfs.fromURL(Vfs.java:109)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:237)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/24 14:59:13 INFO reflections.Reflections: Reflections took 14300 ms to scan 272 urls, producing 4349 keys and 33876 values
16/11/24 14:59:13 INFO reflections.Reflections: Reflections took 249 ms to scan 1 urls, producing 366 keys and 1405 values
16/11/24 14:59:13 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 74
16/11/24 14:59:13 INFO storage.MemoryStore: Block broadcast_74_piece0 stored as bytes in memory (estimated size 350.2 KB, free 365.7 KB)
16/11/24 14:59:13 INFO broadcast.TorrentBroadcast: Reading broadcast variable 74 took 44 ms
[... log truncated ...]
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /home/disk1/yarn/usercache/zhangliang/appcache/application_1479880322401_0152/container_1479880322401_0152_01_000011/__app__.jar (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.reflections.vfs.JarInputDir$1$1.<init>(JarInputDir.java:36)
... 24 more
16/11/24 14:59:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1596
16/11/24 14:59:08 INFO executor.Executor: Running task 24.2 in stage 48.0 (TID 1596)
16/11/24 14:59:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1597
16/11/24 14:59:08 INFO executor.Executor: Running task 8.2 in stage 48.0 (TID 1597)
16/11/24 14:59:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1598
16/11/24 14:59:08 INFO executor.Executor: Running task 2.2 in stage 48.0 (TID 1598)
16/11/24 14:59:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1599
16/11/24 14:59:08 INFO executor.Executor: Running task 22.2 in stage 48.0 (TID 1599)
16/11/24 14:59:08 WARN reflections.Reflections: could not create Dir using directory from url file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/lib/*. skipping.
java.lang.NullPointerException
at org.reflections.vfs.Vfs$DefaultUrlTypes$3.matches(Vfs.java:239)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:98)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:237)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
[... log truncated ...]
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/24 14:59:04 WARN reflections.Reflections: could not create Dir using jarFile from url file:/home/disk1/yarn/usercache/zhangliang/appcache/application_1479880322401_0152/container_1479880322401_0152_01_000011/__app__.jar. skipping.
java.lang.NullPointerException
at java.util.zip.ZipFile.<init>(ZipFile.java:207)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:130)
at org.reflections.vfs.Vfs$DefaultUrlTypes$1.createDir(Vfs.java:212)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:99)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:237)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/24 14:59:04 WARN reflections.Reflections: could not create Vfs.Dir from url. ignoring the exception and continuing
org.reflections.ReflectionsException: Could not open url connection
at org.reflections.vfs.JarInputDir$1$1.<init>(JarInputDir.java:37)
at org.reflections.vfs.JarInputDir$1.iterator(JarInputDir.java:33)
at org.reflections.Reflections.scan(Reflections.java:240)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
[... log truncated ...]
java.lang.NullPointerException
at org.reflections.vfs.Vfs$DefaultUrlTypes$3.matches(Vfs.java:239)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:98)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:237)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/24 14:59:04 WARN reflections.Reflections: could not create Vfs.Dir from url. ignoring the exception and continuing
org.reflections.ReflectionsException: could not create Vfs.Dir from url, no matching UrlType was found [file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/share/hadoop/httpfs/*]
either use fromURL(final URL url, final List<UrlType> urlTypes) or use the static setDefaultURLTypes(final List<UrlType> urlTypes) or addDefaultURLTypes(UrlType urlType) with your specialized UrlType.
at org.reflections.vfs.Vfs.fromURL(Vfs.java:109)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:237)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
[... log truncated ...]
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108)
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71)
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/24 14:58:59 WARN reflections.Reflections: could not create Vfs.Dir from url. ignoring the exception and continuing
org.reflections.ReflectionsException: could not create Vfs.Dir from url, no matching UrlType was found [file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/share/hadoop/tools/*]
either use fromURL(final URL url, final List<UrlType> urlTypes) or use the static setDefaultURLTypes(final List<UrlType> urlTypes) or addDefaultURLTypes(UrlType urlType) with your specialized UrlType.
at org.reflections.vfs.Vfs.fromURL(Vfs.java:109)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:237)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344)
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108) | |
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119) | |
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71) | |
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41) | |
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159) | |
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159) | |
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) | |
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) | |
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) | |
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) | |
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) | |
at org.apache.spark.scheduler.Task.run(Task.scala:89) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) | |
16/11/24 14:59:04 WARN reflections.Reflections: could not create Dir using directory from url file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/share/hadoop/httpfs/*. skipping. | |
java.lang.NullPointerException | |
at org.reflections.vfs.Vfs$DefaultUrlTypes$3.matches(Vfs.java:239) | |
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) | |
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) | |
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) | |
at org.apache.spark.scheduler.Task.run(Task.scala:89) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) | |
16/11/24 14:58:59 WARN reflections.Reflections: could not create Vfs.Dir from url. ignoring the exception and continuing | |
org.reflections.ReflectionsException: could not create Vfs.Dir from url, no matching UrlType was found [file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/share/hadoop/httpfs/lib/*] | |
either use fromURL(final URL url, final List<UrlType> urlTypes) or use the static setDefaultURLTypes(final List<UrlType> urlTypes) or addDefaultURLTypes(UrlType urlType) with your specialized UrlType. | |
at org.reflections.vfs.Vfs.fromURL(Vfs.java:109) | |
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91) | |
at org.reflections.Reflections.scan(Reflections.java:237) | |
at org.reflections.Reflections.scan(Reflections.java:204) | |
at org.reflections.Reflections.<init>(Reflections.java:129) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108) | |
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119) | |
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71) | |
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41) | |
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159) | |
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159) | |
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) | |
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) | |
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) | |
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) | |
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) | |
at org.apache.spark.scheduler.Task.run(Task.scala:89) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) | |
16/11/24 14:58:59 WARN reflections.Reflections: could not create Dir using directory from url file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/share/hadoop/tools/*. skipping. | |
java.lang.NullPointerException | |
at org.reflections.vfs.Vfs$DefaultUrlTypes$3.matches(Vfs.java:239) | |
at org.reflections.vfs.Vfs.fromURL(Vfs.java:98) | |
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91) | |
at org.reflections.Reflections.scan(Reflections.java:237) | |
at org.reflections.Reflections.scan(Reflections.java:204) | |
at org.reflections.Reflections.<init>(Reflections.java:129) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108) | |
16/11/24 14:58:58 WARN reflections.Reflections: could not create Vfs.Dir from url. ignoring the exception and continuing | |
org.reflections.ReflectionsException: could not create Vfs.Dir from url, no matching UrlType was found [file:/home/hadoop/hadoop-2.6.0-cdh5.5.2.fixed/*] | |
either use fromURL(final URL url, final List<UrlType> urlTypes) or use the static setDefaultURLTypes(final List<UrlType> urlTypes) or addDefaultURLTypes(UrlType urlType) with your specialized UrlType. | |
at org.reflections.vfs.Vfs.fromURL(Vfs.java:109) | |
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91) | |
at org.reflections.Reflections.scan(Reflections.java:237) | |
at org.reflections.Reflections.scan(Reflections.java:204) | |
at org.reflections.Reflections.<init>(Reflections.java:129) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.registerSubtypes(NeuralNetConfiguration.java:405) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.configureMapper(NeuralNetConfiguration.java:354) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.initMapper(NeuralNetConfiguration.java:344) | |
at org.deeplearning4j.nn.conf.NeuralNetConfiguration.<clinit>(NeuralNetConfiguration.java:108) | |
at org.deeplearning4j.nn.conf.MultiLayerConfiguration.fromJson(MultiLayerConfiguration.java:119) | |
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:71) | |
at org.deeplearning4j.spark.impl.multilayer.evaluation.EvaluateFlatMapFunction.call(EvaluateFlatMapFunction.java:41) | |
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159) | |
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:159) | |
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) | |
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) | |
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) | |
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) | |
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) | |
at org.apache.spark.scheduler.Task.run(Task.scala:89) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:745) |
```