Justins-MacBook-Air:dl4j-ethnicity-test justin$ gradle run
:compileJava UP-TO-DATE
:compileScala
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.6.3/jackson-core-2.6.3.pom
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.6.3/jackson-databind-2.6.3.pom
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.6.3/jackson-annotations-2.6.3.pom
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/module/jackson-module-scala_2.10/2.6.3/jackson-module-scala_2.10-2.6.3.pom
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/module/jackson-module-paranamer/2.6.3/jackson-module-paranamer-2.6.3.pom
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.6.3/jackson-core-2.6.3.jar
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.6.3/jackson-databind-2.6.3.jar
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.6.3/jackson-annotations-2.6.3.jar
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/module/jackson-module-scala_2.10/2.6.3/jackson-module-scala_2.10-2.6.3.jar
Download https://repo1.maven.org/maven2/com/fasterxml/jackson/module/jackson-module-paranamer/2.6.3/jackson-module-paranamer-2.6.3.jar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
:processResources UP-TO-DATE
:classes
:run
o.d.n.c.NeuralNetConfiguration - Layer cnn1 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer cnn1 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer pool1 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer pool1 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer cnn2 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer cnn2 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer pool2 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer pool2 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer cnn3 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer cnn3 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer pool3 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer pool3 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer cnn4 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer cnn4 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer ffn1 momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer ffn1 regularization is set to true but l1 or l2 has not been added to configuration.
o.d.n.c.NeuralNetConfiguration - Layer not named momentum has been set but will not be applied unless the updater is set to NESTEROVS.
o.d.n.c.NeuralNetConfiguration - Layer not named regularization is set to true but l1 or l2 has not been added to configuration.
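
The warnings above mean the layer configuration sets momentum and regularization(true) without the pieces that make them effective, so both settings are silently ignored: momentum is only applied under the NESTEROVS updater, and regularization(true) does nothing unless an l1 or l2 coefficient is given. A minimal sketch of a builder that satisfies both checks, assuming the dl4j 0.4-rc3.x configuration API (the values are placeholders, not taken from TrainNet.scala):

    // Hypothetical dl4j 0.4-rc3.x sketch: pair momentum with the NESTEROVS
    // updater and give regularization(true) an l2 term so neither warning fires.
    import org.deeplearning4j.nn.conf.{NeuralNetConfiguration, Updater}

    val conf = new NeuralNetConfiguration.Builder()
      .updater(Updater.NESTEROVS) // momentum is only applied by this updater
      .momentum(0.9)              // placeholder value
      .regularization(true)
      .l2(1e-4)                   // placeholder coefficient
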
INFO [2016-04-27 22:37:25,304] org.eclipse.jetty.util.log: Logging initialized @87589ms
INFO [2016-04-27 22:37:25,372] io.dropwizard.assets.AssetsBundle: Registering AssetBundle with name: assets for path /assets/*
port: 0
port: 0
WARN [2016-04-27 22:37:25,473] org.glassfish.jersey.internal.Errors: The following warnings have been detected: WARNING: Cannot create new registration for component type class org.deeplearning4j.ui.exception.GenericExceptionMapper: Existing previous registration found for the type.
INFO [2016-04-27 22:37:25,492] io.dropwizard.server.ServerFactory: Starting UiServer
WARN [2016-04-27 22:37:25,527] org.glassfish.jersey.internal.Errors: The following warnings have been detected: WARNING: Cannot create new registration for component type class io.dropwizard.jersey.jackson.JsonProcessingExceptionMapper: Existing previous registration found for the type.
INFO [2016-04-27 22:37:25,613] org.eclipse.jetty.setuid.SetUIDListener: Opened application@48c56b42{HTTP/1.1}{0.0.0.0:58059}
INFO [2016-04-27 22:37:25,614] org.eclipse.jetty.setuid.SetUIDListener: Opened admin@5c6fae3c{HTTP/1.1}{0.0.0.0:58060}
INFO [2016-04-27 22:37:25,622] org.eclipse.jetty.server.Server: jetty-9.2.9.v20150224
INFO [2016-04-27 22:37:26,463] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET /tsne (org.deeplearning4j.ui.tsne.TsneDropwiz)
GET /tsne/{path} (org.deeplearning4j.ui.tsne.TsneDropwiz)
POST /tsne/update (org.deeplearning4j.ui.tsne.TsneDropwiz)
POST /tsne/upload (org.deeplearning4j.ui.tsne.TsneDropwiz)
POST /tsne/vocab (org.deeplearning4j.ui.tsne.TsneDropwiz)
GET /weights (org.deeplearning4j.ui.weights.WeightDropwiz)
GET /weights/data (org.deeplearning4j.ui.weights.WeightDropwiz)
GET /weights/updated (org.deeplearning4j.ui.weights.WeightDropwiz)
POST /weights/update (org.deeplearning4j.ui.weights.WeightDropwiz)
GET / (org.deeplearning4j.ui.defaults.DefaultDropwiz)
GET /events (org.deeplearning4j.ui.defaults.DefaultDropwiz)
GET /sessions (org.deeplearning4j.ui.defaults.DefaultDropwiz)
GET /whatsup (org.deeplearning4j.ui.defaults.DefaultDropwiz)
GET /word2vec (org.deeplearning4j.ui.nearestneighbors.word2vec.NearestNeighborsDropwiz)
GET /word2vec/{path} (org.deeplearning4j.ui.nearestneighbors.word2vec.NearestNeighborsDropwiz)
POST /word2vec/upload (org.deeplearning4j.ui.nearestneighbors.word2vec.NearestNeighborsDropwiz)
POST /word2vec/vocab (org.deeplearning4j.ui.nearestneighbors.word2vec.NearestNeighborsDropwiz)
POST /word2vec/words (org.deeplearning4j.ui.nearestneighbors.word2vec.NearestNeighborsDropwiz)
GET /api/coords (org.deeplearning4j.ui.api.ApiResource)
GET /api/{path} (org.deeplearning4j.ui.api.ApiResource)
POST /api/coords (org.deeplearning4j.ui.api.ApiResource)
POST /api/update (org.deeplearning4j.ui.api.ApiResource)
POST /api/upload (org.deeplearning4j.ui.api.ApiResource)
GET /filters (org.deeplearning4j.ui.renders.RendersDropwiz)
GET /filters/img (org.deeplearning4j.ui.renders.RendersDropwiz)
POST /filters/update (org.deeplearning4j.ui.renders.RendersDropwiz)
GET /flow (org.deeplearning4j.ui.flow.FlowDropwiz)
GET /flow/action/{id} (org.deeplearning4j.ui.flow.FlowDropwiz)
GET /flow/state (org.deeplearning4j.ui.flow.FlowDropwiz)
POST /flow/action/{id} (org.deeplearning4j.ui.flow.FlowDropwiz)
POST /flow/state (org.deeplearning4j.ui.flow.FlowDropwiz)
GET /activations (org.deeplearning4j.ui.activation.ActivationsDropwiz)
GET /activations/img (org.deeplearning4j.ui.activation.ActivationsDropwiz)
POST /activations/update (org.deeplearning4j.ui.activation.ActivationsDropwiz)
INFO [2016-04-27 22:37:26,465] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@3bd466bd{/,null,AVAILABLE}
INFO [2016-04-27 22:37:26,471] io.dropwizard.setup.AdminEnvironment: tasks =
POST /tasks/log-level (io.dropwizard.servlets.tasks.LogConfigurationTask)
POST /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)
WARN [2016-04-27 22:37:26,471] io.dropwizard.setup.AdminEnvironment:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! THIS APPLICATION HAS NO HEALTHCHECKS. THIS MEANS YOU WILL NEVER KNOW !
! IF IT DIES IN PRODUCTION, WHICH MEANS YOU WILL NEVER KNOW IF YOU'RE !
! LETTING YOUR USERS DOWN. YOU SHOULD ADD A HEALTHCHECK FOR EACH OF YOUR !
! APPLICATION'S DEPENDENCIES WHICH FULLY (BUT LIGHTLY) TESTS IT. !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
INFO [2016-04-27 22:37:26,475] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@21f8693e{/,null,AVAILABLE}
INFO [2016-04-27 22:37:26,488] org.eclipse.jetty.server.ServerConnector: Started application@48c56b42{HTTP/1.1}{0.0.0.0:58059}
INFO [2016-04-27 22:37:26,488] org.eclipse.jetty.server.ServerConnector: Started admin@5c6fae3c{HTTP/1.1}{0.0.0.0:58060}
INFO [2016-04-27 22:37:26,488] org.eclipse.jetty.server.Server: Started @88777ms
UI Histogram URL: http://localhost:58059/weights?sid=4ade9fb4-ba5b-4da9-81c4-e2f0283eb972
INFO [2016-04-27 22:37:27,792] org.apache.spark.storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 672.0 B, free 672.0 B)
INFO [2016-04-27 22:37:27,813] org.apache.spark.storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 347.0 B, free 1019.0 B)
INFO [2016-04-27 22:37:27,816] org.apache.spark.storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:58019 (size: 347.0 B, free: 1140.4 MB)
INFO [2016-04-27 22:37:27,819] org.apache.spark.SparkContext: Created broadcast 0 from broadcast at SparkDl4jMultiLayer.java:122
INFO [2016-04-27 22:37:27,831] org.deeplearning4j.spark.earlystopping.BaseSparkEarlyStoppingTrainer: Starting early stopping training
INFO [2016-04-27 22:37:27,910] org.deeplearning4j.spark.earlystopping.BaseSparkEarlyStoppingTrainer: Initiating distributed training of subset 1 of 3
INFO [2016-04-27 22:37:27,924] org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer: Running distributed training: (averaging each iteration = true), (iterations = 5), (num partions = 2)
INFO [2016-04-27 22:37:27,924] org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer: Broadcasting initial parameters of length 1329828
INFO [2016-04-27 22:37:28,003] org.apache.spark.storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 450.7 KB, free 451.7 KB)
INFO [2016-04-27 22:37:28,152] org.apache.spark.storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.0 MB, free 4.4 MB)
INFO [2016-04-27 22:37:28,153] org.apache.spark.storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:58019 (size: 4.0 MB, free: 1136.4 MB)
INFO [2016-04-27 22:37:28,153] org.apache.spark.storage.MemoryStore: Block broadcast_1_piece1 stored as bytes in memory (estimated size 1126.4 KB, free 5.5 MB)
INFO [2016-04-27 22:37:28,154] org.apache.spark.storage.BlockManagerInfo: Added broadcast_1_piece1 in memory on localhost:58019 (size: 1126.4 KB, free: 1135.3 MB)
INFO [2016-04-27 22:37:28,155] org.apache.spark.SparkContext: Created broadcast 1 from broadcast at SparkDl4jMultiLayer.java:373
INFO [2016-04-27 22:37:28,156] org.apache.spark.storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 672.0 B, free 5.5 MB)
INFO [2016-04-27 22:37:28,159] org.apache.spark.storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 347.0 B, free 5.5 MB)
INFO [2016-04-27 22:37:28,160] org.apache.spark.storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:58019 (size: 347.0 B, free: 1135.3 MB)
INFO [2016-04-27 22:37:28,161] org.apache.spark.SparkContext: Created broadcast 2 from broadcast at SparkDl4jMultiLayer.java:380
INFO [2016-04-27 22:37:28,515] org.apache.spark.storage.BlockManagerInfo: Removed broadcast_0_piece0 on localhost:58019 in memory (size: 347.0 B, free: 1135.3 MB)
INFO [2016-04-27 22:37:28,543] org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer: Running iterative reduce and averaging parameters
INFO [2016-04-27 22:37:28,698] org.apache.spark.SparkContext: Starting job: foreach at SparkDl4jMultiLayer.java:430
INFO [2016-04-27 22:37:28,728] org.apache.spark.scheduler.DAGScheduler: Got job 0 (foreach at SparkDl4jMultiLayer.java:430) with 2 output partitions
INFO [2016-04-27 22:37:28,728] org.apache.spark.scheduler.DAGScheduler: Final stage: ResultStage 0 (foreach at SparkDl4jMultiLayer.java:430)
INFO [2016-04-27 22:37:28,729] org.apache.spark.scheduler.DAGScheduler: Parents of final stage: List()
INFO [2016-04-27 22:37:28,734] org.apache.spark.scheduler.DAGScheduler: Missing parents: List()
INFO [2016-04-27 22:37:28,742] org.apache.spark.scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[5] at map at SparkDl4jMultiLayer.java:426), which has no missing parents
INFO [2016-04-27 22:37:28,812] org.apache.spark.storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 5.1 MB, free 10.7 MB)
INFO [2016-04-27 22:37:28,818] org.apache.spark.storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 252.6 KB, free 10.9 MB)
INFO [2016-04-27 22:37:28,818] org.apache.spark.storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:58019 (size: 252.6 KB, free: 1135.0 MB)
INFO [2016-04-27 22:37:28,819] org.apache.spark.SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1006
INFO [2016-04-27 22:37:28,824] org.apache.spark.scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[5] at map at SparkDl4jMultiLayer.java:426)
INFO [2016-04-27 22:37:28,825] org.apache.spark.scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
[Stage 0:> (0 + 0) / 2] WARN [2016-04-27 22:37:30,841] org.apache.spark.scheduler.TaskSetManager: Stage 0 contains a task of very large size (184378 KB). The maximum recommended task size is 100 KB.
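
The "very large size" warning just above means each task carries roughly 184 MB of serialized data, which typically happens when the training examples are embedded in the task closure or in locally parallelized partitions rather than read on the cluster. A generic Spark sketch of the usual remedy (sc, parseToDataSet, and the HDFS path are placeholders, not from this project):

    // Generic Spark sketch: have executors read partition data from
    // distributed storage instead of shipping it inside each task.
    val lines = sc.textFile("hdfs:///path/to/training-data.csv") // placeholder path
    val dataSets = lines.map(line => parseToDataSet(line))       // placeholder parser
    dataSets.persist()                                           // reuse across iterations
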
INFO [2016-04-27 22:37:30,843] org.apache.spark.scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 188803238 bytes)
[Stage 0:> (0 + 1) / 2] INFO [2016-04-27 22:37:33,881] org.apache.spark.scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 188924206 bytes)
INFO [2016-04-27 22:37:33,889] org.apache.spark.executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
INFO [2016-04-27 22:37:33,889] org.apache.spark.executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
[Stage 0:> (0 + 2) / 2] INFO [2016-04-27 22:37:37,743] org.apache.spark.CacheManager: Partition rdd_4_1 not found, computing it
INFO [2016-04-27 22:37:37,744] org.apache.spark.CacheManager: Partition rdd_0_1 not found, computing it
INFO [2016-04-27 22:37:37,859] org.apache.spark.CacheManager: Partition rdd_4_0 not found, computing it
INFO [2016-04-27 22:37:37,860] org.apache.spark.CacheManager: Partition rdd_0_0 not found, computing it
INFO [2016-04-27 22:37:40,298] org.apache.spark.storage.MemoryStore: Block rdd_0_1 stored as values in memory (estimated size 1842.0 KB, free 12.7 MB)
INFO [2016-04-27 22:37:40,298] org.apache.spark.storage.BlockManagerInfo: Added rdd_0_1 in memory on localhost:58019 (size: 1842.0 KB, free: 1133.2 MB)
ERROR [2016-04-27 22:37:40,319] org.apache.spark.executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
! java.lang.IllegalStateException: Unable to get number of of columns for a non 2d matrix
! at org.nd4j.linalg.api.ndarray.BaseNDArray.columns(BaseNDArray.java:3443) ~[nd4j-api-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.nd4j.linalg.dataset.DataSet.merge(DataSet.java:117) ~[nd4j-api-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:85) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:49) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:268) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.Task.run(Task.scala:89) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_65]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
INFO [2016-04-27 22:37:40,354] org.apache.spark.storage.MemoryStore: Block rdd_0_0 stored as values in memory (estimated size 1842.0 KB, free 14.5 MB)
INFO [2016-04-27 22:37:40,354] org.apache.spark.storage.BlockManagerInfo: Added rdd_0_0 in memory on localhost:58019 (size: 1842.0 KB, free: 1131.4 MB)
ERROR [2016-04-27 22:37:40,360] org.apache.spark.executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
! java.lang.IllegalStateException: Unable to get number of of columns for a non 2d matrix
! at org.nd4j.linalg.api.ndarray.BaseNDArray.columns(BaseNDArray.java:3443) ~[nd4j-api-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.nd4j.linalg.dataset.DataSet.merge(DataSet.java:117) ~[nd4j-api-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:85) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:49) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:268) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.Task.run(Task.scala:89) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_65]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
WARN [2016-04-27 22:37:40,380] org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost): java.lang.IllegalStateException: Unable to get number of of columns for a non 2d matrix
at org.nd4j.linalg.api.ndarray.BaseNDArray.columns(BaseNDArray.java:3443)
at org.nd4j.linalg.dataset.DataSet.merge(DataSet.java:117)
at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:85)
at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:49)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR [2016-04-27 22:37:40,383] org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 0.0 failed 1 times; aborting job
INFO [2016-04-27 22:37:40,394] org.apache.spark.scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
INFO [2016-04-27 22:37:40,398] org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor localhost: java.lang.IllegalStateException (Unable to get number of of columns for a non 2d matrix) [duplicate 1]
INFO [2016-04-27 22:37:40,398] org.apache.spark.scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
INFO [2016-04-27 22:37:40,406] org.apache.spark.scheduler.TaskSchedulerImpl: Cancelling stage 0
INFO [2016-04-27 22:37:40,410] org.apache.spark.scheduler.DAGScheduler: ResultStage 0 (foreach at SparkDl4jMultiLayer.java:430) failed in 11.572 s
INFO [2016-04-27 22:37:40,412] org.apache.spark.scheduler.DAGScheduler: Job 0 failed: foreach at SparkDl4jMultiLayer.java:430, took 11.713141 s
WARN [2016-04-27 22:37:40,421] org.deeplearning4j.spark.earlystopping.BaseSparkEarlyStoppingTrainer: Early stopping training terminated due to exception at epoch 0, iteration 0
! java.lang.IllegalStateException: Unable to get number of of columns for a non 2d matrix
! at org.nd4j.linalg.api.ndarray.BaseNDArray.columns(BaseNDArray.java:3443) ~[nd4j-api-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.nd4j.linalg.dataset.DataSet.merge(DataSet.java:117) ~[nd4j-api-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:85) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:49) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:268) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.Task.run(Task.scala:89) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_65]
! at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
! Causing: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent failure: Lost task 1.0 in stage 0.0 (TID 1, localhost): java.lang.IllegalStateException: Unable to get number of of columns for a non 2d matrix
! at org.nd4j.linalg.api.ndarray.BaseNDArray.columns(BaseNDArray.java:3443)
! at org.nd4j.linalg.dataset.DataSet.merge(DataSet.java:117)
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:85)
! at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:49)
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170)
! at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:170)
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
! at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
! at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
! at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
! at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
! at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
! at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
! at org.apache.spark.scheduler.Task.run(Task.scala:89)
! at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
! at java.lang.Thread.run(Thread.java:745)
!
! Driver stacktrace:
! at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) ~[scala-library-2.11.7.jar:na]
! at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) ~[scala-library-2.11.7.jar:na]
! at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at scala.Option.foreach(Option.scala:257) ~[scala-library-2.11.7.jar:na]
! at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:912) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:910) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.rdd.RDD.foreach(RDD.scala:910) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:332) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46) ~[spark-core_2.11-1.6.1.jar:1.6.1]
! at org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.runIteration(SparkDl4jMultiLayer.java:430) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.fitDataSet(SparkDl4jMultiLayer.java:350) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.fitDataSet(SparkDl4jMultiLayer.java:316) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.earlystopping.SparkEarlyStoppingTrainer.fit(SparkEarlyStoppingTrainer.java:67) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at org.deeplearning4j.spark.earlystopping.BaseSparkEarlyStoppingTrainer.fit(BaseSparkEarlyStoppingTrainer.java:155) ~[dl4j-spark-0.4-rc3.9-SNAPSHOT.jar:na]
! at ai.bernie.researchtests.TrainNet$.main(TrainNet.scala:98) [main/:na]
! at ai.bernie.researchtests.TrainNet.main(TrainNet.scala) [main/:na]
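
Both tasks fail the same way: DataSet.merge walks each partition's examples to build one minibatch, and in this nd4j snapshot it calls columns(), which is only defined for rank-2 matrices. Since the network feeds convolutional layers, the per-example features are presumably rank-4 tensors ([minibatch, channels, height, width]), which is enough to trigger the exception. A minimal reproduction sketch, assuming the nd4j 0.4-rc3.x API and placeholder shapes:

    // Reproduction sketch (shapes are assumptions, not taken from this run):
    // DataSet.merge calls INDArray.columns(), which throws on rank-4 tensors.
    import org.nd4j.linalg.dataset.DataSet
    import org.nd4j.linalg.factory.Nd4j
    import scala.collection.JavaConverters._

    val features = Nd4j.create(Array(1, 3, 50, 50)) // CNN-style 4d features
    val labels   = Nd4j.create(Array(1, 5))         // placeholder label shape
    val ds       = new DataSet(features, labels)
    // Throws IllegalStateException: "Unable to get number of of columns for
    // a non 2d matrix" (the same message, typo included, as in the log above)
    DataSet.merge(List(ds, ds.copy()).asJava)

One workaround from this era is to keep each DataSet's features as flattened 2d row vectors and let a CNN input preprocessor reshape them inside the network, so merge only ever sees 2d matrices; later nd4j releases also appear to handle rank-4 arrays in merge directly.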