
@amiorin
Created November 9, 2015 15:22
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
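The transcript below reproduces the failure from a plain `sbt console` session. For reference, here are the connector settings used in that session, collected in one place. This is only a sketch: the `es.nodes.wan.only` entry is a hypothetical addition, not part of the original session; it is a commonly suggested elasticsearch-hadoop setting that keeps the connector talking to the nodes listed in `es.nodes` instead of the addresses it discovers.

```scala
// The settings from the session, as a plain Map so they can be applied
// to a SparkConf in one pass. "es.nodes.wan.only" is an assumption:
// a commonly suggested workaround when the cluster publishes addresses
// (here Consul names such as els1.node.bohr.consul) that the driver
// cannot or should not contact directly.
val esSettings = Map(
  "es.index.auto.create" -> "true",
  "es.nodes"             -> "elasticsearch.service.bohr.consul",
  "es.nodes.wan.only"    -> "true" // assumption: not in the original session
)

// Applied to the SparkConf from the transcript, it would look like:
//   val conf = esSettings.foldLeft(
//     new SparkConf().setAppName("foo").setMaster("local[8]")
//   ) { case (c, (k, v)) => c.set(k, v) }
```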
╰─$ sbt/sbt console
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
[info] Loading global plugins from /Users/alberto.miorin/.sbt/0.13/plugins
[info] Loading project definition from /Users/alberto.miorin/Code/n4-spark-metrics/project
[info] Set current project to n4-spark-metrics (in build file:/Users/alberto.miorin/Code/n4-spark-metrics/)
[info] Starting scala interpreter...
[info]
Welcome to Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_51).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.apache.spark.SparkConf
import org.apache.spark.SparkConf
scala> import org.apache.spark.SparkContext
import org.apache.spark.SparkContext
scala> import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext._
scala> import org.elasticsearch.spark._
import org.elasticsearch.spark._
scala>
scala> val conf = new SparkConf().setAppName("foo").setMaster("local[8]")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@3ff6dc97
scala> conf.set("es.index.auto.create", "true")
res0: org.apache.spark.SparkConf = org.apache.spark.SparkConf@3ff6dc97
scala> conf.set("es.nodes", "elasticsearch.service.bohr.consul")
res1: org.apache.spark.SparkConf = org.apache.spark.SparkConf@3ff6dc97
scala> val sc = new SparkContext(conf)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/09 16:19:01 INFO SparkContext: Running Spark version 1.5.1
15/11/09 16:19:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/09 16:19:02 INFO SecurityManager: Changing view acls to: alberto.miorin
15/11/09 16:19:02 INFO SecurityManager: Changing modify acls to: alberto.miorin
15/11/09 16:19:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(alberto.miorin); users with modify permissions: Set(alberto.miorin)
15/11/09 16:19:02 INFO Slf4jLogger: Slf4jLogger started
15/11/09 16:19:02 INFO Remoting: Starting remoting
15/11/09 16:19:02 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.88.48:50426]
15/11/09 16:19:02 INFO Utils: Successfully started service 'sparkDriver' on port 50426.
15/11/09 16:19:02 INFO SparkEnv: Registering MapOutputTracker
15/11/09 16:19:02 INFO SparkEnv: Registering BlockManagerMaster
15/11/09 16:19:02 INFO DiskBlockManager: Created local directory at /private/var/folders/9v/ghrd0dls1ks60kly9z0nvbjr0000gp/T/blockmgr-bd0f774d-44a5-4da9-99b4-711e8119f119
15/11/09 16:19:02 INFO MemoryStore: MemoryStore started with capacity 737.4 MB
15/11/09 16:19:02 INFO HttpFileServer: HTTP File server directory is /private/var/folders/9v/ghrd0dls1ks60kly9z0nvbjr0000gp/T/spark-9a775b1e-d456-4be1-981e-8d4aa435d02a/httpd-6bdf78d2-da2c-4dbf-bfac-0e5b165b1015
15/11/09 16:19:02 INFO HttpServer: Starting HTTP Server
15/11/09 16:19:02 INFO Utils: Successfully started service 'HTTP file server' on port 50427.
15/11/09 16:19:02 INFO SparkEnv: Registering OutputCommitCoordinator
15/11/09 16:19:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/11/09 16:19:02 INFO SparkUI: Started SparkUI at http://192.168.88.48:4040
15/11/09 16:19:03 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/11/09 16:19:03 INFO Executor: Starting executor ID driver on host localhost
15/11/09 16:19:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50428.
15/11/09 16:19:03 INFO NettyBlockTransferService: Server created on 50428
15/11/09 16:19:03 INFO BlockManagerMaster: Trying to register BlockManager
15/11/09 16:19:03 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50428 with 737.4 MB RAM, BlockManagerId(driver, localhost, 50428)
15/11/09 16:19:03 INFO BlockManagerMaster: Registered BlockManager
sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@73f6dcac
scala>
scala> val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
numbers: scala.collection.immutable.Map[String,Int] = Map(one -> 1, two -> 2, three -> 3)
scala> val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
airports: scala.collection.immutable.Map[String,String] = Map(arrival -> Otopeni, SFO -> San Fran)
scala>
scala> sc.makeRDD(Seq(numbers, airports)).saveToEs("foo/bar")
15/11/09 16:19:04 INFO SparkContext: Starting job: take at EsSpark.scala:59
15/11/09 16:19:04 INFO DAGScheduler: Got job 0 (take at EsSpark.scala:59) with 1 output partitions
15/11/09 16:19:04 INFO DAGScheduler: Final stage: ResultStage 0(take at EsSpark.scala:59)
15/11/09 16:19:04 INFO DAGScheduler: Parents of final stage: List()
15/11/09 16:19:04 INFO DAGScheduler: Missing parents: List()
15/11/09 16:19:04 INFO DAGScheduler: Submitting ResultStage 0 (ParallelCollectionRDD[0] at makeRDD at <console>:20), which has no missing parents
15/11/09 16:19:04 INFO MemoryStore: ensureFreeSpace(1264) called with curMem=0, maxMem=773188485
15/11/09 16:19:04 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1264.0 B, free 737.4 MB)
15/11/09 16:19:04 INFO MemoryStore: ensureFreeSpace(810) called with curMem=1264, maxMem=773188485
15/11/09 16:19:04 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 810.0 B, free 737.4 MB)
15/11/09 16:19:04 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50428 (size: 810.0 B, free: 737.4 MB)
15/11/09 16:19:04 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
15/11/09 16:19:04 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (ParallelCollectionRDD[0] at makeRDD at <console>:20)
15/11/09 16:19:04 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/11/09 16:19:04 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/11/09 16:19:04 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 929 bytes result sent to driver
15/11/09 16:19:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 56 ms on localhost (1/1)
15/11/09 16:19:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/11/09 16:19:04 INFO DAGScheduler: ResultStage 0 (take at EsSpark.scala:59) finished in 0.069 s
15/11/09 16:19:04 INFO DAGScheduler: Job 0 finished: take at EsSpark.scala:59, took 0.205630 s
15/11/09 16:19:04 INFO SparkContext: Starting job: take at EsSpark.scala:59
15/11/09 16:19:04 INFO DAGScheduler: Got job 1 (take at EsSpark.scala:59) with 4 output partitions
15/11/09 16:19:04 INFO DAGScheduler: Final stage: ResultStage 1(take at EsSpark.scala:59)
15/11/09 16:19:04 INFO DAGScheduler: Parents of final stage: List()
15/11/09 16:19:04 INFO DAGScheduler: Missing parents: List()
15/11/09 16:19:04 INFO DAGScheduler: Submitting ResultStage 1 (ParallelCollectionRDD[0] at makeRDD at <console>:20), which has no missing parents
15/11/09 16:19:04 INFO MemoryStore: ensureFreeSpace(1264) called with curMem=2074, maxMem=773188485
15/11/09 16:19:04 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 1264.0 B, free 737.4 MB)
15/11/09 16:19:04 INFO MemoryStore: ensureFreeSpace(810) called with curMem=3338, maxMem=773188485
15/11/09 16:19:04 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 810.0 B, free 737.4 MB)
15/11/09 16:19:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:50428 (size: 810.0 B, free: 737.4 MB)
15/11/09 16:19:04 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
15/11/09 16:19:04 INFO DAGScheduler: Submitting 4 missing tasks from ResultStage 1 (ParallelCollectionRDD[0] at makeRDD at <console>:20)
15/11/09 16:19:04 INFO TaskSchedulerImpl: Adding task set 1.0 with 4 tasks
15/11/09 16:19:04 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 2, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 3, localhost, PROCESS_LOCAL, 2321 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 4, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
15/11/09 16:19:04 INFO Executor: Running task 1.0 in stage 1.0 (TID 2)
15/11/09 16:19:04 INFO Executor: Running task 2.0 in stage 1.0 (TID 3)
15/11/09 16:19:04 INFO Executor: Running task 3.0 in stage 1.0 (TID 4)
15/11/09 16:19:04 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 929 bytes result sent to driver
15/11/09 16:19:04 INFO Executor: Finished task 1.0 in stage 1.0 (TID 2). 929 bytes result sent to driver
15/11/09 16:19:04 INFO Executor: Finished task 3.0 in stage 1.0 (TID 4). 929 bytes result sent to driver
15/11/09 16:19:04 INFO Executor: Finished task 2.0 in stage 1.0 (TID 3). 1195 bytes result sent to driver
15/11/09 16:19:04 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 8 ms on localhost (1/4)
15/11/09 16:19:04 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 2) in 9 ms on localhost (2/4)
15/11/09 16:19:04 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 4) in 7 ms on localhost (3/4)
15/11/09 16:19:04 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 3) in 8 ms on localhost (4/4)
15/11/09 16:19:04 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/11/09 16:19:04 INFO DAGScheduler: ResultStage 1 (take at EsSpark.scala:59) finished in 0.011 s
15/11/09 16:19:04 INFO DAGScheduler: Job 1 finished: take at EsSpark.scala:59, took 0.015790 s
15/11/09 16:19:04 INFO SparkContext: Starting job: runJob at EsSpark.scala:67
15/11/09 16:19:04 INFO DAGScheduler: Got job 2 (runJob at EsSpark.scala:67) with 8 output partitions
15/11/09 16:19:04 INFO DAGScheduler: Final stage: ResultStage 2(runJob at EsSpark.scala:67)
15/11/09 16:19:04 INFO DAGScheduler: Parents of final stage: List()
15/11/09 16:19:04 INFO DAGScheduler: Missing parents: List()
15/11/09 16:19:04 INFO DAGScheduler: Submitting ResultStage 2 (ParallelCollectionRDD[0] at makeRDD at <console>:20), which has no missing parents
15/11/09 16:19:04 INFO MemoryStore: ensureFreeSpace(1888) called with curMem=4148, maxMem=773188485
15/11/09 16:19:04 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 1888.0 B, free 737.4 MB)
15/11/09 16:19:04 INFO MemoryStore: ensureFreeSpace(1328) called with curMem=6036, maxMem=773188485
15/11/09 16:19:04 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1328.0 B, free 737.4 MB)
15/11/09 16:19:04 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:50428 (size: 1328.0 B, free: 737.4 MB)
15/11/09 16:19:04 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861
15/11/09 16:19:04 INFO DAGScheduler: Submitting 8 missing tasks from ResultStage 2 (ParallelCollectionRDD[0] at makeRDD at <console>:20)
15/11/09 16:19:04 INFO TaskSchedulerImpl: Adding task set 2.0 with 8 tasks
15/11/09 16:19:04 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 5, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 6, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 2.0 in stage 2.0 (TID 7, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 3.0 in stage 2.0 (TID 8, localhost, PROCESS_LOCAL, 2321 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 4.0 in stage 2.0 (TID 9, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 5.0 in stage 2.0 (TID 10, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 6.0 in stage 2.0 (TID 11, localhost, PROCESS_LOCAL, 2101 bytes)
15/11/09 16:19:04 INFO TaskSetManager: Starting task 7.0 in stage 2.0 (TID 12, localhost, PROCESS_LOCAL, 2242 bytes)
15/11/09 16:19:04 INFO Executor: Running task 0.0 in stage 2.0 (TID 5)
15/11/09 16:19:04 INFO Executor: Running task 3.0 in stage 2.0 (TID 8)
15/11/09 16:19:04 INFO Executor: Running task 1.0 in stage 2.0 (TID 6)
15/11/09 16:19:04 INFO Executor: Running task 2.0 in stage 2.0 (TID 7)
15/11/09 16:19:04 INFO Executor: Running task 4.0 in stage 2.0 (TID 9)
15/11/09 16:19:04 INFO Executor: Running task 5.0 in stage 2.0 (TID 10)
15/11/09 16:19:04 INFO Executor: Running task 6.0 in stage 2.0 (TID 11)
15/11/09 16:19:04 INFO Executor: Running task 7.0 in stage 2.0 (TID 12)
15/11/09 16:19:04 INFO Version: Elasticsearch Hadoop v2.2.0-beta1 [66539b4e3f]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:04 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{
"name" : "Insane Poincare",
"cluster_name" : "es-bohr",
"version" : {
"number" : "2.0.0",
"build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
"build_timestamp" : "2015-10-22T08:09:48Z",
"build_snapshot" : false,
"lucene_version" : "5.2.1"
},
"tagline" : "You Know, for Search"
}
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/transport] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","transport":{"bound_address":["172.16.0.104:9300"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9300","profiles":{}}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[els1.node.bohr.consul/172.16.0.104:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[els1.node.bohr.consul/172.16.0.104:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[els1.node.bohr.consul/172.16.0.104:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[elasticsearch.service.bohr.consul:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[els1.node.bohr.consul/172.16.0.104:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [GET]@[els1.node.bohr.consul/172.16.0.104:9200][_nodes/http] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","http":{"bound_address":["172.16.0.104:9200"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9200","max_content_length_in_bytes":104857600}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","http":{"bound_address":["172.16.0.104:9200"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9200","max_content_length_in_bytes":104857600}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [200-OK] [{"cluster_name":"es-bohr","nodes":{"dLOoXzGPQlaS91C4ffgmtA":{"name":"Insane Poincare","transport_address":"els1.node.bohr.consul/172.16.0.104:9300","host":"172.16.0.104","ip":"172.16.0.104","version":"2.0.0","build":"de54438","http_address":"els1.node.bohr.consul/172.16.0.104:9200","http":{"bound_address":["172.16.0.104:9200"],"publish_address":"els1.node.bohr.consul/172.16.0.104:9200","max_content_length_in_bytes":104857600}}}}]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to elasticsearch.service.bohr.consul:9200
15/11/09 16:19:05 INFO EsRDDWriter: Writing to [foo/bar]
15/11/09 16:19:05 INFO EsRDDWriter: Writing to [foo/bar]
15/11/09 16:19:05 INFO EsRDDWriter: Writing to [foo/bar]
15/11/09 16:19:05 DEBUG NetworkClient: Opening (pinned) network client to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 DEBUG NetworkClient: Opening (pinned) network client to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 DEBUG NetworkClient: Opening (pinned) network client to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Opening HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [HEAD]@[els1.node.bohr.consul/172.16.0.104:9200][foo] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [HEAD]@[els1.node.bohr.consul/172.16.0.104:9200][foo] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Tx [HEAD]@[els1.node.bohr.consul/172.16.0.104:9200][foo] w/ payload [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [Internal Server Error
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [null]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [Internal Server Error
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [Internal Server Error
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [Internal Server Error
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Rx @[192.168.88.80] [500-Internal Server Error] [Internal Server Error
]
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 5)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 ERROR Executor: Exception in task 6.0 in stage 2.0 (TID 11)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [HEAD] on [foo] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:398)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:467)
at org.elasticsearch.hadoop.rest.RestClient.touch(RestClient.java:473)
at org.elasticsearch.hadoop.rest.RestRepository.touch(RestRepository.java:473)
at org.elasticsearch.hadoop.rest.RestService.initSingleIndex(RestService.java:411)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:399)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 ERROR Executor: Exception in task 4.0 in stage 2.0 (TID 9)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [HEAD] on [foo] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:398)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:467)
at org.elasticsearch.hadoop.rest.RestClient.touch(RestClient.java:473)
at org.elasticsearch.hadoop.rest.RestRepository.touch(RestRepository.java:473)
at org.elasticsearch.hadoop.rest.RestService.initSingleIndex(RestService.java:411)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:399)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 ERROR Executor: Exception in task 7.0 in stage 2.0 (TID 12)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 TRACE CommonsHttpTransport: Closing HTTP transport to els1.node.bohr.consul/172.16.0.104:9200
15/11/09 16:19:05 ERROR Executor: Exception in task 1.0 in stage 2.0 (TID 6)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 ERROR Executor: Exception in task 5.0 in stage 2.0 (TID 10)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [HEAD] on [foo] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:398)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:467)
at org.elasticsearch.hadoop.rest.RestClient.touch(RestClient.java:473)
at org.elasticsearch.hadoop.rest.RestRepository.touch(RestRepository.java:473)
at org.elasticsearch.hadoop.rest.RestService.initSingleIndex(RestService.java:411)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:399)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 ERROR Executor: Exception in task 2.0 in stage 2.0 (TID 7)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:05 ERROR Executor: Exception in task 3.0 in stage 2.0 (TID 8)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:06 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 5, localhost): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:06 ERROR TaskSetManager: Task 0 in stage 2.0 failed 1 times; aborting job
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO TaskSetManager: Lost task 1.0 in stage 2.0 (TID 6) on executor localhost: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest ([GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]) [duplicate 1]
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 WARN TaskSetManager: Lost task 6.0 in stage 2.0 (TID 11, localhost): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [HEAD] on [foo] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:398)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:467)
at org.elasticsearch.hadoop.rest.RestClient.touch(RestClient.java:473)
at org.elasticsearch.hadoop.rest.RestRepository.touch(RestRepository.java:473)
at org.elasticsearch.hadoop.rest.RestService.initSingleIndex(RestService.java:411)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:399)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO TaskSetManager: Lost task 3.0 in stage 2.0 (TID 8) on executor localhost: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest ([GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]) [duplicate 2]
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO TaskSetManager: Lost task 2.0 in stage 2.0 (TID 7) on executor localhost: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest ([GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]) [duplicate 3]
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO TaskSetManager: Lost task 7.0 in stage 2.0 (TID 12) on executor localhost: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest ([GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]) [duplicate 4]
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO TaskSetManager: Lost task 4.0 in stage 2.0 (TID 9) on executor localhost: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest ([HEAD] on [foo] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:]) [duplicate 1]
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO TaskSchedulerImpl: Cancelling stage 2
15/11/09 16:19:06 INFO TaskSetManager: Lost task 5.0 in stage 2.0 (TID 10) on executor localhost: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest ([HEAD] on [foo] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:]) [duplicate 2]
15/11/09 16:19:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/11/09 16:19:06 INFO DAGScheduler: ResultStage 2 (runJob at EsSpark.scala:67) failed in 1.252 s
15/11/09 16:19:06 INFO DAGScheduler: Job 2 failed: runJob at EsSpark.scala:67, took 1.266267 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 5, localhost): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1912)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEs(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEs(EsSpark.scala:52)
at org.elasticsearch.spark.package$SparkRDDFunctions.saveToEs(package.scala:35)
at .<init>(<console>:20)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983)
at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568)
at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:717)
at scala.tools.nsc.interpreter.ILoop.processLine$1(ILoop.scala:581)
at scala.tools.nsc.interpreter.ILoop.innerLoop$1(ILoop.scala:588)
at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:591)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:882)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:837)
at scala.tools.nsc.interpreter.ILoop.main(ILoop.scala:904)
at xsbt.ConsoleInterface.run(ConsoleInterface.scala:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:101)
at sbt.compiler.AnalyzingCompiler.console(AnalyzingCompiler.scala:76)
at sbt.Console.sbt$Console$$console0$1(Console.scala:22)
at sbt.Console$$anonfun$apply$2$$anonfun$apply$1.apply$mcV$sp(Console.scala:23)
at sbt.Console$$anonfun$apply$2$$anonfun$apply$1.apply(Console.scala:23)
at sbt.Console$$anonfun$apply$2$$anonfun$apply$1.apply(Console.scala:23)
at sbt.Logger$$anon$4.apply(Logger.scala:85)
at sbt.TrapExit$App.run(TrapExit.scala:248)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/http] failed; server[els1.node.bohr.consul/172.16.0.104:9200] returned [500|Internal Server Error:Internal Server Error
]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:427)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:385)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:363)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:367)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:121)
at org.elasticsearch.hadoop.rest.RestClient.getHttpDataNodes(RestClient.java:336)
at org.elasticsearch.hadoop.rest.InitializationUtils.filterNonDataNodesIfNeeded(InitializationUtils.java:121)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:381)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
scala>
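For context, the `[GET] on [_nodes/http]` request above is the node-discovery call that elasticsearch-hadoop makes while creating a writer (`InitializationUtils.filterNonDataNodesIfNeeded`). Since the plain `HEAD /foo` also returns 500, the address behind `els1.node.bohr.consul:9200` (possibly a proxy) is failing outright, so the server side needs fixing first. That said, if only discovery were misbehaving, a hedged sketch of the connector settings that skip the `_nodes/http` call would look like this (`es.nodes.wan.only` and `es.nodes.discovery` are real elasticsearch-hadoop options; whether they help here depends on the server):

```scala
import org.apache.spark.SparkConf

// Sketch only: pin the connector to the declared address and skip node
// discovery, so no GET /_nodes/http is issued during writer creation.
val conf = new SparkConf().setAppName("foo").setMaster("local[8]")
conf.set("es.index.auto.create", "true")
conf.set("es.nodes", "elasticsearch.service.bohr.consul")
conf.set("es.nodes.wan.only", "true")    // talk only to es.nodes, no discovery
conf.set("es.nodes.discovery", "false")  // implied by wan.only; shown explicitly
```

Note that this only removes the discovery request; the `HEAD` index-existence check still hits the same endpoint, so a 500 from the server itself would persist.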