@rmoff
Created September 8, 2016 06:55
CLASSPATH: /opt/kafka-connect-elasticsearch/*:/usr/share/java/confluent-common/*:/usr/share/java/kafka-serde-tools/*:/usr/share/java/monitoring-interceptors/*:/usr/share/java/kafka-connect-hdfs/*:/usr/share/java/kafka-connect-jdbc/*
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/kafka-connect-elasticsearch/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
[main] INFO org.apache.kafka.connect.runtime.standalone.StandaloneConfig - StandaloneConfig values:
cluster = connect
rest.advertised.host.name = null
task.shutdown.graceful.timeout.ms = 5000
rest.host.name = null
rest.advertised.port = null
bootstrap.servers = [localhost:9092]
offset.flush.timeout.ms = 5000
offset.flush.interval.ms = 10000
rest.port = 8083
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
access.control.allow.methods =
access.control.allow.origin =
offset.storage.file.filename = /tmp/connect.offsets
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
value.converter = class org.apache.kafka.connect.json.JsonConverter
key.converter = class org.apache.kafka.connect.json.JsonConverter
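The StandaloneConfig values above come from the worker properties file passed as the first argument to connect-standalone, topped up with defaults. As a rough sketch (assuming connect-runtime and connect-json on the classpath; the class name is illustrative, not part of this run), the same settings can be assembled and validated with the StandaloneConfig class that produced this log output:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.connect.runtime.standalone.StandaloneConfig;

public class WorkerConfigSketch {
    public static void main(String[] args) {
        // Mirrors the non-default StandaloneConfig values logged above; normally
        // these live in the worker properties file given on the command line.
        Map<String, String> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.converter", "org.apache.kafka.connect.json.JsonConverter");
        props.put("value.converter", "org.apache.kafka.connect.json.JsonConverter");
        props.put("internal.key.converter", "org.apache.kafka.connect.json.JsonConverter");
        props.put("internal.value.converter", "org.apache.kafka.connect.json.JsonConverter");
        props.put("offset.storage.file.filename", "/tmp/connect.offsets");
        props.put("rest.port", "8083");

        // Constructing StandaloneConfig validates the map and fills in the
        // remaining defaults shown in the log.
        StandaloneConfig config = new StandaloneConfig(props);
        System.out.println(config.getString("offset.storage.file.filename"));
    }
}
```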
[main] INFO org.eclipse.jetty.util.log - Logging initialized @812ms
[main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect starting
[main] INFO org.apache.kafka.connect.runtime.standalone.StandaloneHerder - Herder starting
[main] INFO org.apache.kafka.connect.runtime.Worker - Worker starting
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 9223372036854775807
interceptor.classes = null
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 2147483647
acks = all
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 2147483647
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 9223372036854775807
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-1
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 2147483647
acks = all
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 2147483647
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
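The two ProducerConfig dumps above describe the producer that the Connect worker creates internally at startup, with durability-oriented overrides: acks=all, effectively unlimited retries, a single in-flight request per connection, and byte-array serializers. A hedged sketch of a standalone client configured with the same key values (the class name is illustrative, and it assumes a broker reachable on localhost:9092):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class WorkerProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Durability overrides matching the logged values: wait for all replicas,
        // retry (effectively) forever, and preserve ordering with one in-flight
        // request per connection.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, Long.MAX_VALUE);
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        // Creating the producer logs a ProducerConfig dump like the ones above.
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.flush();
        }
    }
}
```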
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1-cp1
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 4d8c5c6c2b05bdbc
[main] INFO org.apache.kafka.connect.storage.FileOffsetBackingStore - Starting FileOffsetBackingStore with file /tmp/connect.offsets
[main] INFO org.apache.kafka.connect.runtime.Worker - Worker started
[main] INFO org.apache.kafka.connect.runtime.standalone.StandaloneHerder - Herder started
[main] INFO org.apache.kafka.connect.runtime.rest.RestServer - Starting REST server
[main] INFO org.eclipse.jetty.server.Server - jetty-9.2.15.v20160210
Sep 08, 2016 7:28:57 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@47db5fa5{/,null,AVAILABLE}
[main] INFO org.eclipse.jetty.server.ServerConnector - Started ServerConnector@60214d06{HTTP/1.1}{0.0.0.0:8083}
[main] INFO org.eclipse.jetty.server.Server - Started @5503ms
[main] INFO org.apache.kafka.connect.runtime.rest.RestServer - REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/
[main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect started
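With the REST server listening on port 8083 (logged just above), the worker can be queried over HTTP; the listConnectors method mentioned in the Jersey warnings backs GET /connectors, which returns a JSON array of connector names. A minimal sketch (illustrative class name; it assumes the worker is still running, which is not the case by the end of this log):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListConnectors {
    public static void main(String[] args) throws Exception {
        // GET /connectors against the REST server started above; prints a JSON
        // array such as ["elasticsearch-sink"].
        URL url = new URL("http://localhost:8083/connectors");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```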
[main] INFO org.apache.kafka.connect.runtime.ConnectorConfig - ConnectorConfig values:
connector.class = io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max = 1
name = elasticsearch-sink
[main] INFO org.apache.kafka.connect.runtime.Worker - Creating connector elasticsearch-sink of type io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
[main] INFO org.apache.kafka.connect.runtime.Worker - Instantiated connector elasticsearch-sink with version 3.1.0-SNAPSHOT of type io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
[main] INFO io.confluent.connect.elasticsearch.ElasticsearchSinkConnectorConfig - ElasticsearchSinkConnectorConfig values:
type.name = kafka-connect
batch.size = 2000
max.retries = 5
key.ignore = false
max.in.flight.requests = 5
retry.backoff.ms = 100
max.buffered.records = 20000
schema.ignore = false
flush.timeout.ms = 10000
topic.index.map = [ORCL.SOE.LOGON:soe.logon]
topic.key.ignore = [ORCL.SOE.LOGON]
connection.url = http://localhost:9200
topic.schema.ignore = []
linger.ms = 1
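The ConnectorConfig and ElasticsearchSinkConnectorConfig values above correspond to the connector properties file named further down the log (/opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties). A hedged sketch of the same configuration as a key/value map; the topics entry is an assumption inferred from topic.index.map and topic.key.ignore and does not appear in the logged output:

```java
import java.util.HashMap;
import java.util.Map;

public class ElasticsearchSinkConfigSketch {
    public static void main(String[] args) {
        // Key/value pairs matching the connector settings logged above; in
        // standalone mode these normally live in the connector properties file
        // given as the second command-line argument.
        Map<String, String> config = new HashMap<>();
        config.put("name", "elasticsearch-sink");
        config.put("connector.class", "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector");
        config.put("tasks.max", "1");
        // Assumption: source topic inferred from topic.index.map / topic.key.ignore above.
        config.put("topics", "ORCL.SOE.LOGON");
        config.put("connection.url", "http://localhost:9200");
        config.put("type.name", "kafka-connect");
        config.put("topic.index.map", "ORCL.SOE.LOGON:soe.logon");
        config.put("key.ignore", "false");
        config.put("topic.key.ignore", "ORCL.SOE.LOGON");

        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```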
[main] INFO org.apache.kafka.connect.runtime.Worker - Finished creating connector elasticsearch-sink
[main] ERROR org.apache.kafka.connect.cli.ConnectStandalone - Failed to create job for /opt/elasticsearch-2.4.0/config/elasticsearch-kafka-connect.properties
[main] ERROR org.apache.kafka.connect.cli.ConnectStandalone - Stopping after connector error
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Connector elasticsearch-sink not found in this worker.
    at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:80)
    at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:67)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:97)
Caused by: org.apache.kafka.connect.errors.ConnectException: Connector elasticsearch-sink not found in this worker.
    at org.apache.kafka.connect.runtime.Worker.isRunning(Worker.java:300)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:297)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:165)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:94)
[main] INFO org.apache.kafka.connect.runtime.Connect - Kafka Connect stopping
[main] INFO org.apache.kafka.connect.runtime.rest.RestServer - Stopping REST server
[main] INFO org.eclipse.jetty.server.ServerConnector - Stopped ServerConnector@60214d06{HTTP/1.1}{0.0.0.0:8083}
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@47db5fa5{/,null,UNAVAILABLE}
[main] INFO org.apache.kafka.connect.runtime.rest.RestServer - REST server stopped
[main] INFO org.apache.kafka.connect.runtime.standalone.StandaloneHerder - Herder stopping
[main] INFO org.apache.kafka.connect.runtime.Worker - Stopping connector elasticsearch-sink
[main] INFO org.apache.kafka.connect.runtime.Worker - Stopped connector elasticsearch-sink
[main] INFO org.apache.kafka.connect.runtime.Worker - Worker stopping
[main] WARN org.apache.kafka.connect.runtime.Worker - Shutting down tasks [] uncleanly; herder should have shut down tasks before the Worker is stopped.
[main] INFO org.apache.kafka.connect.storage.FileOffsetBackingStore - Stopped FileOffsetBackingStore
[main] INFO org.apache.kafka.connect.runtime.Worker - Worker stopped