@bsikander
Last active March 5, 2018 12:14
[2018-03-02 15:49:18,256] ERROR .jobserver.JobManagerActor [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Exception from job f373c504-cb25-49f6-a78d-226340db7cf8:
org.apache.spark.SparkException: org.apache.spark.SparkException: Error getting partition metadata for 'energyEvent'. Does the topic exist?
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:385)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:385)
    at scala.util.Either.fold(Either.scala:98)
    at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:384)
    at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
    at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
    at com.test.KafkaReceiver$.getKafkaStream(KafkaReceiver.scala:57)
    at com.test.KafkaReceiver$.runJob(KafkaReceiver.scala:44)
    at com.test.KafkaReceiver$.runJob(KafkaReceiver.scala:21)
    at spark.jobserver.SparkJobBase$class.runJob(SparkJob.scala:31)
    at com.test.KafkaReceiver$.runJob(KafkaReceiver.scala:21)
    at com.test.KafkaReceiver$.runJob(KafkaReceiver.scala:21)
    at spark.jobserver.JobManagerActor$$anonfun$getJobFuture$4.apply(JobManagerActor.scala:385)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
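The "Does the topic exist?" failure above almost always means the topic has not been created on the broker before `createDirectStream` fetched partition metadata. Assuming a local 0.8.x/0.9.x broker with ZooKeeper on its default port (both assumptions; neither is shown in the log), the topic can be checked and, if missing, created with the Kafka CLI:

```shell
# List existing topics; 'energyEvent' should appear in the output
bin/kafka-topics.sh --zookeeper localhost:2181 --list

# Create the missing topic (partition and replication counts are illustrative)
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic energyEvent --partitions 1 --replication-factor 1
```

Setting `auto.create.topics.enable=true` on the broker can also mask this error, but the direct stream fetches metadata before any message is produced, so creating the topic explicitly is the more reliable fix.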
[2018-03-02 15:49:18,258] INFO k.jobserver.JobStatusActor [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651/$a] - Job f373c504-cb25-49f6-a78d-226340db7cf8 finished with an error
[2018-03-02 15:56:13,916] INFO xt.StreamingContextFactory [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Loading class com.test.KafkaReceiver for app kafkaR
[2018-03-02 15:56:13,916] INFO .jobserver.JobManagerActor [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Starting Spark job d1365c0f-7393-44a8-b2cb-7e65d752cb8d [com.test.KafkaReceiver]...
[2018-03-02 15:56:13,917] INFO .jobserver.JobManagerActor [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Starting job future thread
[2018-03-02 15:56:13,917] INFO k.jobserver.JobStatusActor [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651/$a] - Job d1365c0f-7393-44a8-b2cb-7e65d752cb8d started
[2018-03-02 15:56:13,917] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Verifying properties
[2018-03-02 15:56:13,918] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Property fetch.wait.max.ms is overridden to 500
[2018-03-02 15:56:13,918] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Property group.id is overridden to
[2018-03-02 15:56:13,918] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Property zookeeper.connect is overridden to
[2018-03-02 15:56:14,146] INFO ts.producer.ProducerConfig [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - ProducerConfig values:
    compression.type = none
    metric.reporters = []
    metadata.max.age.ms = 300000
    metadata.fetch.timeout.ms = 60000
    acks = 1
    batch.size = 16384
    reconnect.backoff.ms = 10
    bootstrap.servers = [127.0.0.1:9092]
    receive.buffer.bytes = 32768
    retry.backoff.ms = 100
    buffer.memory = 33554432
    timeout.ms = 30000
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    retries = 0
    max.request.size = 1048576
    block.on.buffer.full = true
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
    metrics.sample.window.ms = 30000
    send.buffer.bytes = 131072
    max.in.flight.requests.per.connection = 5
    metrics.num.samples = 2
    linger.ms = 0
    client.id =
[2018-03-02 15:56:14,228] INFO ts.producer.ProducerConfig [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - ProducerConfig values:
    compression.type = none
    metric.reporters = []
    metadata.max.age.ms = 300000
    metadata.fetch.timeout.ms = 60000
    acks = 1
    batch.size = 16384
    reconnect.backoff.ms = 10
    bootstrap.servers = [127.0.0.1:9092]
    receive.buffer.bytes = 32768
    retry.backoff.ms = 100
    buffer.memory = 33554432
    timeout.ms = 30000
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    retries = 0
    max.request.size = 1048576
    block.on.buffer.full = true
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
    metrics.sample.window.ms = 30000
    send.buffer.bytes = 131072
    max.in.flight.requests.per.connection = 5
    metrics.num.samples = 2
    linger.ms = 0
    client.id =
[2018-03-02 15:56:14,256] INFO ts.producer.ProducerConfig [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - ProducerConfig values:
    compression.type = none
    metric.reporters = []
    metadata.max.age.ms = 300000
    metadata.fetch.timeout.ms = 60000
    acks = 1
    batch.size = 16384
    reconnect.backoff.ms = 10
    bootstrap.servers = [127.0.0.1:9092]
    receive.buffer.bytes = 32768
    retry.backoff.ms = 100
    buffer.memory = 33554432
    timeout.ms = 30000
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    retries = 0
    max.request.size = 1048576
    block.on.buffer.full = true
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
    metrics.sample.window.ms = 30000
    send.buffer.bytes = 131072
    max.in.flight.requests.per.connection = 5
    metrics.num.samples = 2
    linger.ms = 0
    client.id =
[2018-03-02 15:56:14,388] INFO ver.handler.ContextHandler [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Started o.s.j.s.ServletContextHandler@1b8f315b{/streaming,null,AVAILABLE,@Spark}
[2018-03-02 15:56:14,389] INFO ver.handler.ContextHandler [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Started o.s.j.s.ServletContextHandler@4df878f8{/streaming/json,null,AVAILABLE,@Spark}
[2018-03-02 15:56:14,389] INFO ver.handler.ContextHandler [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Started o.s.j.s.ServletContextHandler@918a84{/streaming/batch,null,AVAILABLE,@Spark}
[2018-03-02 15:56:14,389] INFO ver.handler.ContextHandler [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Started o.s.j.s.ServletContextHandler@15bed1a0{/streaming/batch/json,null,AVAILABLE,@Spark}
[2018-03-02 15:56:14,390] INFO ver.handler.ContextHandler [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Started o.s.j.s.ServletContextHandler@1fe85953{/static/streaming,null,AVAILABLE,@Spark}
[2018-03-02 15:56:14,390] INFO mingContextFactory$$anon$1 [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - StreamingContext started
[2018-03-02 15:56:15,031] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Verifying properties
[2018-03-02 15:56:15,031] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Property fetch.wait.max.ms is overridden to 500
[2018-03-02 15:56:15,031] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Property group.id is overridden to
[2018-03-02 15:56:15,031] INFO utils.VerifiableProperties [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Property zookeeper.connect is overridden to
[2018-03-02 15:56:58,540] INFO k.jobserver.JobStatusActor [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651/$a] - Job d1365c0f-7393-44a8-b2cb-7e65d752cb8d killed
[2018-03-02 15:59:59,135] WARN spark.storage.BlockManager [] [] - Block rdd_13011_0 already exists on this machine; not re-adding it
[2018-03-02 16:01:47,133] WARN spark.storage.BlockManager [] [] - Block rdd_19275_0 already exists on this machine; not re-adding it
[2018-03-02 16:06:49,129] WARN spark.storage.BlockManager [] [] - Block rdd_36791_0 already exists on this machine; not re-adding it
[2018-03-02 16:07:35,134] WARN spark.storage.BlockManager [] [] - Block rdd_39459_0 already exists on this machine; not re-adding it
[2018-03-02 16:08:32,247] WARN spark.storage.BlockManager [] [] - Block rdd_42765_1 already exists on this machine; not re-adding it
[2018-03-02 16:09:04,126] WARN spark.storage.BlockManager [] [] - Block rdd_44621_0 already exists on this machine; not re-adding it
[2018-03-02 16:10:12,127] WARN spark.storage.BlockManager [] [] - Block rdd_48565_0 already exists on this machine; not re-adding it
[2018-03-02 16:10:41,137] WARN spark.storage.BlockManager [] [] - Block rdd_50247_0 already exists on this machine; not re-adding it
[2018-03-02 16:11:14,223] WARN ka.common.network.Selector [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Error in I/O with localhost/127.0.0.1
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
    at java.lang.Thread.run(Thread.java:745)
[2018-03-02 16:11:14,444] WARN ka.common.network.Selector [] [akka://JobServer/user/jobManager-1f-9f09-200f964be651] - Error in I/O with localhost/127.0.0.1
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:248)