@Habitats
Created March 2, 2016 10:40
11:30:15.453 WARN - Stage 1 contains a task of very large size (430 KB). The maximum recommended task size is 100 KB.
11:30:50.981 WARN - Stage 3 contains a task of very large size (430 KB). The maximum recommended task size is 100 KB.
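
These "very large size" warnings usually mean a sizable object (a lookup table, the corpus, initial parameters) is being serialized into every task closure instead of being shipped once per executor. Whether that is avoidable here depends on what dl4j-spark captures internally, but in user code the usual remedy is an explicit broadcast. A minimal sketch against the Spark 1.5.2 Java API seen in the traces below; the class, method, and lookup map are illustrative, not from this job:

    import java.util.Map;

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.broadcast.Broadcast;

    public class BroadcastSketch {
        // Ship `lookup` to each executor once via a broadcast variable rather
        // than serializing it into every task closure, which is what inflates
        // task sizes past the 100 KB recommendation.
        static JavaRDD<String> tag(JavaSparkContext sc,
                                   JavaRDD<String> lines,
                                   Map<String, String> lookup) {
            Broadcast<Map<String, String>> bc = sc.broadcast(lookup);
            return lines.map(line -> bc.value().getOrDefault(line, "unknown"));
        }
    }
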
11:34:44.207 CORPUS - Starting training ...
11:34:44.332 WARN - Removing executor driver with no recent heartbeats: 200691 ms exceeds timeout 120000 ms
11:34:44.863 ERROR - Lost executor driver on localhost: Executor heartbeat timed out after 200691 ms
11:34:44.869 WARN - Killing executors is only supported in coarse-grained mode
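
The local executor was marked dead after roughly 200 s without heartbeats, which lines up with the gap between the Stage 3 warning and "Starting training": the 120000 ms figure is Spark 1.5's default spark.network.timeout of 120s. A common mitigation is raising that timeout; a sketch, assuming a plain local-mode SparkConf (master and app name are illustrative):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class TimeoutSketch {
        public static JavaSparkContext create() {
            // Raise the liveness timeout so long single-threaded phases on the
            // driver (here, corpus preparation) don't get the local executor
            // reaped mid-job; 120s is the default that fired above.
            SparkConf conf = new SparkConf()
                    .setMaster("local[8]")                 // 8 partitions in the log
                    .setAppName("dl4j-spark-training")     // illustrative
                    .set("spark.network.timeout", "600s");
            return new JavaSparkContext(conf);
        }
    }
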
11:34:44.934 WARN - Stage 4 contains a task of very large size (12814 KB). The maximum recommended task size is 100 KB.
11:34:45.422 INFO - Running distributed training: (averaging each iteration = true), (iterations =1), (num partions = 8)
11:34:45.422 INFO - Broadcasting initial parameters of length 965018
11:34:54.785 INFO - Ran iterative reduce... averaging parameters now.
11:35:06.025 WARN - Stage 5 contains a task of very large size (12814 KB). The maximum recommended task size is 100 KB.
[Stage 5:> (0 + 8) / 8]
11:35:06.648 DEBUG - Training on 50 examples with data {0=3150.0, 1=12.0, 2=2.0, 4=1.0, 5=29.0, 10=1.0, 11=3.0, 12=1.0, 14=1.0}
11:35:06.683 DEBUG - Training on 50 examples with data {0=2700.0, 16=6.0, 17=1.0, 1=13.0, 2=2.0, 3=3.0, 4=2.0, 5=15.0, 11=4.0, 12=3.0, 14=1.0}
11:35:06.735 ERROR - Exception in task 2.0 in stage 5.0 (TID 174)
java.lang.IllegalArgumentException: Illegal index 4900000
at org.nd4j.linalg.api.buffer.BaseDataBuffer.put(BaseDataBuffer.java:702) ~[nd4j-api-0.4-rc3.8.jar:na]
at org.nd4j.linalg.api.buffer.BaseDataBuffer.put(BaseDataBuffer.java:696) ~[nd4j-api-0.4-rc3.8.jar:na]
at org.nd4j.linalg.api.buffer.BaseDataBuffer.copyAtStride(BaseDataBuffer.java:282) ~[nd4j-api-0.4-rc3.8.jar:na]
at org.nd4j.linalg.api.ndarray.BaseNDArray.assign(BaseNDArray.java:1027) ~[nd4j-api-0.4-rc3.8.jar:na]
at org.nd4j.linalg.dataset.DataSet.mergeTimeSeries(DataSet.java:154) ~[nd4j-api-0.4-rc3.8.jar:na]
at org.nd4j.linalg.dataset.DataSet.merge(DataSet.java:114) ~[nd4j-api-0.4-rc3.8.jar:na]
at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:85) ~[dl4j-spark-0.4-rc3.8.jar:na]
at org.deeplearning4j.spark.impl.multilayer.IterativeReduceFlatMap.call(IterativeReduceFlatMap.java:48) ~[dl4j-spark-0.4-rc3.8.jar:na]
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:167) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$5$1.apply(JavaRDDLike.scala:167) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.RDD.iterator(RDD.scala:262) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.scheduler.Task.run(Task.scala:88) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) ~[spark-core_2.10-1.5.2.jar:1.5.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
11:35:06.736 DEBUG - Training on 100 examples with data {0=4084.0, 1=15.0, 2=4.0, 3=11.0, 4=13.0, 5=17.0, 6=4.0, 7=3.0, 8=1.0, 9=4.0, 10=3.0, 11=3.0, 12=8.0, 13=4.0, 14=5.0, 15=3.0, 16=15.0, 17=3.0}
11:35:06.736 ERROR - Exception in task 6.0 in stage 5.0 (TID 178)
java.lang.IllegalArgumentException: Illegal index 4600000
(stack trace identical to task 2.0 / TID 174 above)
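
Both failed tasks die in the same place: IterativeReduceFlatMap merges each partition's examples via DataSet.merge, and mergeTimeSeries in nd4j-0.4-rc3.8 throws once it indexes past the buffer it allocated (Illegal index 4900000 / 4600000). Given the varying per-example feature counts in the DEBUG lines, a plausible trigger is merging time-series DataSets with unequal sequence lengths, which that version does not pad to a common length. A minimal sketch of that failure shape, under that assumption and with illustrative dimensions:

    import java.util.Arrays;

    import org.nd4j.linalg.dataset.DataSet;
    import org.nd4j.linalg.factory.Nd4j;

    public class MergeSketch {
        public static void main(String[] args) {
            // Two single-example time-series DataSets, features shaped
            // [miniBatch, featureSize, timeSteps], with different timeSteps.
            DataSet a = new DataSet(Nd4j.zeros(new int[]{1, 10, 50}),
                                    Nd4j.zeros(new int[]{1, 2, 50}));
            DataSet b = new DataSet(Nd4j.zeros(new int[]{1, 10, 80}),
                                    Nd4j.zeros(new int[]{1, 2, 80}));

            // Same call path as the trace: DataSet.merge -> mergeTimeSeries,
            // which in 0.4-rc3.8 can index past its allocation when lengths differ.
            DataSet merged = DataSet.merge(Arrays.asList(a, b));
            System.out.println(merged.numExamples());
        }
    }

If unequal lengths are indeed the cause, padding or truncating sequences to a fixed length before handing the RDD to Spark (or moving to a later nd4j, where merge pads time series) sidesteps the bad index.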