Stack trace of the exception thrown while reading from Elasticsearch with the elasticsearch-hadoop Spark connector. The root cause (the final `Caused by`) is a date-mapped field whose stored value, `Fri Oct 09 02:06:12 +0000 2015`, is not in a format the connector's default Joda parser accepts; a workaround sketch follows the full trace below.
Connected to the target VM, address: '127.0.0.1:64009', transport: 'socket'
log4j: reset attribute= "false".
log4j: Threshold ="null".
log4j: Retreiving an instance of org.apache.log4j.Logger.
log4j: Setting [com.trgr.platform.riptide] additivity to [true].
log4j: Level value for com.trgr.platform.riptide is [WARN].
log4j: com.trgr.platform.riptide level set to WARN
log4j: Retreiving an instance of org.apache.log4j.Logger.
log4j: Setting [log4j.logger.org.eclipse.jetty] additivity to [true].
log4j: Level value for log4j.logger.org.eclipse.jetty is [ERROR].
log4j: log4j.logger.org.eclipse.jetty level set to ERROR
log4j: Retreiving an instance of org.apache.log4j.Logger.
log4j: Setting [log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle] additivity to [true].
log4j: Level value for log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle is [ERROR].
log4j: log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle level set to ERROR
log4j: Retreiving an instance of org.apache.log4j.Logger.
log4j: Setting [log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper] additivity to [true].
log4j: Level value for log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper is [ERROR].
log4j: log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper level set to ERROR
log4j: Retreiving an instance of org.apache.log4j.Logger.
log4j: Setting [log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter] additivity to [true].
log4j: Level value for log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter is [ERROR].
log4j: log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter level set to ERROR
log4j: Level value for root is [INFO].
log4j: root level set to INFO
log4j: Class name: [org.apache.log4j.ConsoleAppender]
log4j: Parsing layout of class: "org.apache.log4j.PatternLayout"
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss.SSS zzz '(UTC'Z')'} %-5p %-30.30c{1} [%-25M:%3L] -- -- %m%n].
log4j: Adding appender named [STDOUT] to category [root].
2015-10-21 09:46:32.745 CDT (UTC-0500) INFO Remoting [apply$mcV$sp : 74] -- -- Starting remoting
2015-10-21 09:46:33.005 CDT (UTC-0500) INFO Remoting [apply$mcV$sp : 74] -- -- Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.208.8.28:64013]
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot invoke method public org.joda.time.DateTime org.joda.time.format.DateTimeFormatter.parseDateTime(java.lang.String)
at org.elasticsearch.hadoop.util.ReflectionUtils.invoke(ReflectionUtils.java:93)
at org.elasticsearch.hadoop.util.DateUtils$JodaTime.parseDate(DateUtils.java:105)
at org.elasticsearch.hadoop.util.DateUtils.parseDate(DateUtils.java:122)
at org.elasticsearch.spark.serialization.ScalaValueReader.createDate(ScalaValueReader.scala:134)
at org.elasticsearch.spark.serialization.ScalaValueReader.parseDate(ScalaValueReader.scala:125)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$date$1.apply(ScalaValueReader.scala:118)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$date$1.apply(ScalaValueReader.scala:118)
at org.elasticsearch.spark.serialization.ScalaValueReader.checkNull(ScalaValueReader.scala:70)
at org.elasticsearch.spark.serialization.ScalaValueReader.date(ScalaValueReader.scala:118)
at org.elasticsearch.spark.serialization.ScalaValueReader.readValue(ScalaValueReader.scala:58)
at org.elasticsearch.hadoop.serialization.ScrollReader.parseValue(ScrollReader.java:580)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:568)
at org.elasticsearch.hadoop.serialization.ScrollReader.map(ScrollReader.java:636)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:559)
at org.elasticsearch.hadoop.serialization.ScrollReader.map(ScrollReader.java:636)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:559)
at org.elasticsearch.hadoop.serialization.ScrollReader.readHitAsMap(ScrollReader.java:358)
at org.elasticsearch.hadoop.serialization.ScrollReader.readHit(ScrollReader.java:293)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:188)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:167)
at org.elasticsearch.hadoop.rest.RestRepository.scroll(RestRepository.java:406)
at org.elasticsearch.hadoop.rest.ScrollQuery.hasNext(ScrollQuery.java:76)
at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.hasNext(AbstractEsRDDIterator.scala:43)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1298)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1298)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.elasticsearch.hadoop.util.ReflectionUtils.invoke(ReflectionUtils.java:91)
... 44 more
Caused by: java.lang.IllegalArgumentException: Invalid format: "Fri Oct 09 02:06:12 +0000 2015"
at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:899)
... 49 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1298)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.take(RDD.scala:1272)
at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1312)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.first(RDD.scala:1311)
at com.trgr.rd.sdp.spark.util.SearchEsAndRetrieve$.main(SearchEsAndRetrieve.scala:83)
at com.trgr.rd.sdp.spark.util.SearchEsAndRetrieve.main(SearchEsAndRetrieve.scala)
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot invoke method public org.joda.time.DateTime org.joda.time.format.DateTimeFormatter.parseDateTime(java.lang.String)
at org.elasticsearch.hadoop.util.ReflectionUtils.invoke(ReflectionUtils.java:93)
at org.elasticsearch.hadoop.util.DateUtils$JodaTime.parseDate(DateUtils.java:105)
at org.elasticsearch.hadoop.util.DateUtils.parseDate(DateUtils.java:122)
at org.elasticsearch.spark.serialization.ScalaValueReader.createDate(ScalaValueReader.scala:134)
at org.elasticsearch.spark.serialization.ScalaValueReader.parseDate(ScalaValueReader.scala:125)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$date$1.apply(ScalaValueReader.scala:118)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$date$1.apply(ScalaValueReader.scala:118)
at org.elasticsearch.spark.serialization.ScalaValueReader.checkNull(ScalaValueReader.scala:70)
at org.elasticsearch.spark.serialization.ScalaValueReader.date(ScalaValueReader.scala:118)
at org.elasticsearch.spark.serialization.ScalaValueReader.readValue(ScalaValueReader.scala:58)
at org.elasticsearch.hadoop.serialization.ScrollReader.parseValue(ScrollReader.java:580)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:568)
at org.elasticsearch.hadoop.serialization.ScrollReader.map(ScrollReader.java:636)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:559)
at org.elasticsearch.hadoop.serialization.ScrollReader.map(ScrollReader.java:636)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:559)
at org.elasticsearch.hadoop.serialization.ScrollReader.readHitAsMap(ScrollReader.java:358)
at org.elasticsearch.hadoop.serialization.ScrollReader.readHit(ScrollReader.java:293)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:188)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:167)
at org.elasticsearch.hadoop.rest.RestRepository.scroll(RestRepository.java:406)
at org.elasticsearch.hadoop.rest.ScrollQuery.hasNext(ScrollQuery.java:76)
at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.hasNext(AbstractEsRDDIterator.scala:43)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1298)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1298)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.elasticsearch.hadoop.util.ReflectionUtils.invoke(ReflectionUtils.java:91)
... 44 more
Caused by: java.lang.IllegalArgumentException: Invalid format: "Fri Oct 09 02:06:12 +0000 2015"
at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:899)
... 49 more
Disconnected from the target VM, address: '127.0.0.1:64009', transport: 'socket'
Process finished with exit code 1
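For context: the trace shows `DateUtils$JodaTime.parseDate` invoking `DateTimeFormatter.parseDateTime` reflectively (hence the `ReflectionUtils.invoke` frames) and Joda rejecting `Fri Oct 09 02:06:12 +0000 2015`, which is a Twitter-style `created_at` timestamp, not ISO 8601, while the Elasticsearch mapping presumably declares the field as `date`. Below is a minimal sketch, assuming a local Spark/ES setup and a hypothetical `tweets/status` index, of both the diagnosis (an explicit Joda pattern parses the value fine) and a workaround: `es.mapping.date.rich=false` is a documented es-hadoop setting that returns date fields as raw strings instead of parsed date objects, sidestepping the parse entirely.

import java.util.Locale

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._
import org.joda.time.format.DateTimeFormat

object SearchEsAndRetrieveWorkaround {
  def main(args: Array[String]): Unit = {
    // Diagnosis: the value is a Twitter-style timestamp, so the connector's
    // default (ISO-style) Joda parser rejects it, but an explicit pattern
    // parses it without trouble.
    val twitterFormat = DateTimeFormat
      .forPattern("EEE MMM dd HH:mm:ss Z yyyy")
      .withLocale(Locale.ENGLISH)
    println(twitterFormat.parseDateTime("Fri Oct 09 02:06:12 +0000 2015"))

    val conf = new SparkConf()
      .setAppName("es-read-workaround")
      .setMaster("local[*]")              // assumption: local test run
      .set("es.nodes", "127.0.0.1:9200")  // assumption: local ES node
      // Hand date fields back as raw strings instead of rich date objects,
      // avoiding the parseDateTime call that fails in the trace above.
      .set("es.mapping.date.rich", "false")

    val sc = new SparkContext(conf)
    val rdd = sc.esRDD("tweets/status")   // hypothetical index/type
    println(rdd.first())                  // the action that failed in the trace
    sc.stop()
  }
}

If you control the index mapping, the cleaner long-term fix is to give the field a matching date `format` (e.g. `EEE MMM dd HH:mm:ss Z yyyy`) so both Elasticsearch and es-hadoop can parse the stored values.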