===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=5
CONFLUENT_MINOR_VERSION=2
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=4
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.2.4
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=336ddf93f63b
KAFKA_ADVERTISED_LISTENERS=LC://kafka:29092,LX://192.168.253.4:9092
KAFKA_INTER_BROKER_LISTENER_NAME=LC
KAFKA_LISTENERS=LC://kafka:29092,LX://kafka:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LC:PLAINTEXT,LX:PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_VERSION=2.2.2cp3
KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.30.0.1
_=/usr/bin/env
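The environment above defines two listeners: LC://kafka:29092 is the address advertised inside the Docker network (and carries inter-broker traffic, per KAFKA_INTER_BROKER_LISTENER_NAME=LC), while LX is advertised as 192.168.253.4:9092 for clients outside the network. A minimal Java producer sketch (not part of this gist; the class and the topic name "demo" are hypothetical) shows which bootstrap address pairs with which listener:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ListenerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // From another container on the same Docker network, the LC listener applies:
        //   props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:29092");
        // From outside the network, use the advertised LX address instead:
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.253.4:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo", "key", "value")); // "demo" is hypothetical
        }
    }
}
```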
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
===> Launching ...
===> Launching kafka ...
[2020-05-06 04:03:50,786] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-05-06 04:03:58,444] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = LC://kafka:29092,LX://192.168.253.4:9092
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = LC
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = LC:PLAINTEXT,LX:PLAINTEXT
listeners = LC://kafka:29092,LX://kafka:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper:2181
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
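With offsets.topic.replication.factor = 1 and default.replication.factor = 1, the config above describes what looks like a one-node development cluster. To verify what the broker actually registered and advertises once it is up, a short AdminClient probe (a sketch, assuming it runs on the same Docker network so kafka:29092 resolves) can describe the cluster:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ClusterProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:29092"); // internal LC listener
        try (AdminClient admin = AdminClient.create(props)) {
            // Prints each broker id and the host:port it advertises to this client.
            for (Node n : admin.describeCluster().nodes().get()) {
                System.out.printf("broker %d at %s:%d%n", n.id(), n.host(), n.port());
            }
        }
    }
}
```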
[2020-05-06 04:04:03,325] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2020-05-06 04:04:03,710] WARN Please note that the support metrics collection feature ("Metrics") of Proactive Support is enabled. With Metrics enabled, this broker is configured to collect and report certain broker and cluster metadata ("Metadata") about your use of the Confluent Platform (including without limitation, your remote internet protocol address) to Confluent, Inc. ("Confluent") or its parent, subsidiaries, affiliates or service providers every 24 hours. This Metadata may be transferred to any country in which Confluent maintains facilities. For a more in-depth discussion of how Confluent processes such information, please read our Privacy Policy located at http://www.confluent.io/privacy. By proceeding with `confluent.support.metrics.enable=true`, you agree to all such collection, transfer, storage and use of Metadata by Confluent. You can turn the Metrics feature off by setting `confluent.support.metrics.enable=false` in the broker configuration and restarting the broker. See the Confluent Platform documentation for further information. (io.confluent.support.metrics.SupportedServerStartable)
[2020-05-06 04:04:03,738] INFO starting (kafka.server.KafkaServer)
[2020-05-06 04:04:03,743] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
[2020-05-06 04:04:04,028] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-05-06 04:04:04,103] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,103] INFO Client environment:host.name=336ddf93f63b (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,103] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,103] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,103] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,103] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/jetty-server-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/support-metrics-common-5.2.4.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.2.2-cp3-test-sources.jar:/usr/bin/../share/java/kafka/connect-transforms-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.10.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.1.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/jersey-common-2.27.jar:/usr/bin/../share/java/kafka/rocksdbjni-5.15.10.jar:/usr/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka/lz4-java-1.5.0.jar:/usr/bin/../share/java/kafka/zkclient-0.10.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/plexus-utils-3.1.0.jar:/usr/bin/../share/java/kafka/hk2-utils-2.5.0-b42.jar:/usr/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.2.2-cp3-test.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/kafka/javax.inject-2.5.0-b42.jar:/usr/bin/../share/java/kafka/guava-20.0.jar:/usr/bin/../share/java/kafka/zkclient-0.11.jar:/usr/bin/../share/java/kafka/jline-0.9.94.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.27.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.2.2-cp3-javadoc.jar:/usr/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka/connect-json-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.7.2.jar:/usr/bin/../share/java/kafka/connect-api-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.1.jar:/usr/bin/../share/java/kafka/scala-library-2.11.12.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/kafka/commons-compress-1.8.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/httpmime-4.5.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-tools-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/xz-1.5.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.25.jar:/usr/bin/../share/java/kafka/zstd-jni-1.3.8-1.jar:/usr/bin/../share/java/kafka/jersey-client-2.27.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.2.2-cp3-scaladoc.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.27.jar:/usr/bin/../share/java/kafka/kafka-clients-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/jackson-databind-2.10.0.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/common-utils-5.2.4.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.10.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.10.0.jar:/usr/bin/../share/java/kafka/support-metrics-client-5.2.4.jar:/usr/bin/../share/java/kafka/netty-3.10.6.Final.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/connect-file-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/avro-1.8.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.27.jar:/usr/bin/../share/java/kafka/httpclient-4.5.2.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.5.0-b42.jar:/usr/bin/../share/java/kafka/jersey-server-2.27.jar:/usr/bin/../share/java/kafka/hk2-api-2.5.0-b42.jar:/usr/bin/../share/java/kafka/scala-reflect-2.11.12.jar:/usr/bin/../share/java/kafka/httpcore-4.4.4.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/paranamer-2.7.jar:/usr/bin/../share/java/kafka/jackson-core-2.10.0.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.11-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.13.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/maven-artifact-3.6.0.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/reflections-0.9.11.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/hk2-locator-2.5.0-b42.jar:/usr/bin/../share/java/kafka/kafka-streams-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/connect-runtime-2.2.2-cp3.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/javax.inject-1.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.2.2-cp3-sources.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.2.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.14.v20181114.jar:/usr/bin/../share/java/kafka/scala-logging_2.11-3.9.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.10.0.jar:/usr/bin/../share/java/kafka/commons-validator-1.5.1.jar:/usr/bin/../share/java/kafka/commons-codec-1.9.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.10.0.jar:/usr/bin/../share/java/kafka/commons-digester-1.8.1.jar:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,107] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,109] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,109] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,110] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,110] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,110] INFO Client environment:os.version=4.19.76-linuxkit (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,110] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,111] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,111] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,119] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@3eb738bb (org.apache.zookeeper.ZooKeeper)
[2020-05-06 04:04:04,262] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-05-06 04:04:04,331] INFO Opening socket connection to server zookeeper/172.18.0.6:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:04:04,387] INFO Socket connection established to zookeeper/172.18.0.6:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:04:04,430] INFO Session establishment complete on server zookeeper/172.18.0.6:2181, sessionid = 0x1000001970c0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:04:04,472] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-05-06 04:04:09,521] INFO Cluster ID = Ys10wEB1QfmeXOAGc8dGFQ (kafka.server.KafkaServer)
[2020-05-06 04:04:09,673] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-05-06 04:04:10,785] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = LC://kafka:29092,LX://192.168.253.4:9092
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = LC
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = LC:PLAINTEXT,LX:PLAINTEXT
listeners = LC://kafka:29092,LX://kafka:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper:2181
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2020-05-06 04:04:11,126] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = LC://kafka:29092,LX://192.168.253.4:9092
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = LC
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = LC:PLAINTEXT,LX:PLAINTEXT
listeners = LC://kafka:29092,LX://kafka:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper:2181
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2020-05-06 04:04:11,663] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-06 04:04:11,663] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-06 04:04:11,681] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-06 04:04:11,910] INFO Loading logs. (kafka.log.LogManager)
[2020-05-06 04:04:12,023] INFO Logs loading complete in 112 ms. (kafka.log.LogManager)
[2020-05-06 04:04:12,665] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-05-06 04:04:12,720] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-05-06 04:04:12,723] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2020-05-06 04:04:14,199] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2020-05-06 04:04:22,944] INFO Awaiting socket connections on kafka:29092. (kafka.network.Acceptor)
[2020-05-06 04:04:24,786] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(kafka,29092,ListenerName(LC),PLAINTEXT) (kafka.network.SocketServer)
[2020-05-06 04:04:24,787] INFO Awaiting socket connections on kafka:9092. (kafka.network.Acceptor)
[2020-05-06 04:04:25,326] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(kafka,9092,ListenerName(LX),PLAINTEXT) (kafka.network.SocketServer)
[2020-05-06 04:04:25,411] INFO [SocketServer brokerId=1001] Started 2 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-05-06 04:04:25,737] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:25,775] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:25,801] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:25,802] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:26,143] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-05-06 04:04:27,292] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-05-06 04:04:28,258] INFO Stat of the created znode at /brokers/ids/1001 is: 27,27,1588737867945,1588737867945,1,0,0,72057600867041281,211,0,27 (kafka.zk.KafkaZkClient)
[2020-05-06 04:04:28,304] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,29092,ListenerName(LC),PLAINTEXT), EndPoint(192.168.253.4,9092,ListenerName(LX),PLAINTEXT)), czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
[2020-05-06 04:04:28,426] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-05-06 04:04:30,923] INFO [ControllerEventThread controllerId=1001] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-05-06 04:04:31,238] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:31,351] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2020-05-06 04:04:31,495] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:31,516] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-06 04:04:31,529] INFO [Controller id=1001] 1001 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
[2020-05-06 04:04:31,546] INFO [Controller id=1001] Registering handlers (kafka.controller.KafkaController)
[2020-05-06 04:04:31,647] INFO [Controller id=1001] Deleting log dir event notifications (kafka.controller.KafkaController)
[2020-05-06 04:04:32,055] INFO [Controller id=1001] Deleting isr change notifications (kafka.controller.KafkaController)
[2020-05-06 04:04:32,123] INFO [Controller id=1001] Initializing controller context (kafka.controller.KafkaController)
[2020-05-06 04:04:32,320] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-05-06 04:04:32,346] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-05-06 04:04:32,796] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 445 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-05-06 04:04:33,789] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2020-05-06 04:04:35,226] INFO [Controller id=1001] Initialized broker epochs cache: Map(1001 -> 27) (kafka.controller.KafkaController)
[2020-05-06 04:04:35,471] DEBUG [Controller id=1001] Register BrokerModifications handler for Set(1001) (kafka.controller.KafkaController)
[2020-05-06 04:04:35,510] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-05-06 04:04:35,558] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-05-06 04:04:35,577] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-05-06 04:04:35,720] DEBUG [Channel manager on controller 1001]: Controller 1001 trying to connect to broker 1001 (kafka.controller.ControllerChannelManager)
[2020-05-06 04:04:36,185] INFO [RequestSendThread controllerId=1001] Starting (kafka.controller.RequestSendThread)
[2020-05-06 04:04:36,211] INFO [Controller id=1001] Partitions being reassigned: Map() (kafka.controller.KafkaController)
[2020-05-06 04:04:36,260] INFO [Controller id=1001] Currently active brokers in the cluster: Set(1001) (kafka.controller.KafkaController)
[2020-05-06 04:04:36,282] INFO [Controller id=1001] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
[2020-05-06 04:04:36,446] INFO [Controller id=1001] Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
[2020-05-06 04:04:36,478] INFO [Controller id=1001] Fetching topic deletions in progress (kafka.controller.KafkaController)
[2020-05-06 04:04:36,625] INFO [Controller id=1001] List of topics to be deleted: (kafka.controller.KafkaController)
[2020-05-06 04:04:36,626] INFO [Controller id=1001] List of topics ineligible for deletion: (kafka.controller.KafkaController)
[2020-05-06 04:04:36,636] INFO [Controller id=1001] Initializing topic deletion manager (kafka.controller.KafkaController)
[2020-05-06 04:04:36,640] INFO [Controller id=1001] Sending update metadata request (kafka.controller.KafkaController)
[2020-05-06 04:04:37,104] INFO [ReplicaStateMachine controllerId=1001] Initializing replica state (kafka.controller.ReplicaStateMachine)
[2020-05-06 04:04:37,157] INFO [ReplicaStateMachine controllerId=1001] Triggering online replica state changes (kafka.controller.ReplicaStateMachine)
[2020-05-06 04:04:37,352] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-05-06 04:04:37,807] INFO [ReplicaStateMachine controllerId=1001] Started replica state machine with initial state -> Map() (kafka.controller.ReplicaStateMachine)
[2020-05-06 04:04:37,840] INFO [PartitionStateMachine controllerId=1001] Initializing partition state (kafka.controller.PartitionStateMachine)
[2020-05-06 04:04:37,866] INFO [PartitionStateMachine controllerId=1001] Triggering online partition state changes (kafka.controller.PartitionStateMachine)
[2020-05-06 04:04:38,142] INFO [PartitionStateMachine controllerId=1001] Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine)
[2020-05-06 04:04:38,177] INFO [Controller id=1001] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2020-05-06 04:04:38,362] INFO [Controller id=1001] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController)
[2020-05-06 04:04:38,372] INFO [SocketServer brokerId=1001] Started data-plane processors for 2 acceptors (kafka.network.SocketServer)
[2020-05-06 04:04:38,440] INFO [Controller id=1001] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController)
[2020-05-06 04:04:38,485] INFO [RequestSendThread controllerId=1001] Controller 1001 connected to kafka:29092 (id: 1001 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
[2020-05-06 04:04:39,006] INFO [Controller id=1001] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
[2020-05-06 04:04:39,014] INFO [Controller id=1001] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
[2020-05-06 04:04:39,027] INFO [Controller id=1001] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
[2020-05-06 04:04:39,035] INFO [Controller id=1001] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
[2020-05-06 04:04:39,048] INFO [Controller id=1001] Starting preferred replica leader election for partitions (kafka.controller.KafkaController)
[2020-05-06 04:04:39,088] INFO Kafka version: 2.2.2-cp3 (org.apache.kafka.common.utils.AppInfoParser)
[2020-05-06 04:04:39,093] INFO Kafka commitId: 602b2e2e105b4d34 (org.apache.kafka.common.utils.AppInfoParser)
[2020-05-06 04:04:39,182] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
[2020-05-06 04:04:39,204] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2020-05-06 04:04:39,254] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2020-05-06 04:04:39,255] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2020-05-06 04:04:39,760] INFO [Controller id=1001] Starting the controller scheduler (kafka.controller.KafkaController)
[2020-05-06 04:04:41,099] TRACE [Controller id=1001 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 0 sent to broker kafka:29092 (id: 1001 rack: null) (state.change.logger)
[2020-05-06 04:04:44,785] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-05-06 04:04:44,847] DEBUG [Controller id=1001] Preferred replicas by broker Map() (kafka.controller.KafkaController)
[2020-05-06 04:04:47,407] WARN The replication factor of topic __confluent.support.metrics will be set to 1, which is less than the desired replication factor of 3 (reason: this cluster contains only 1 brokers). If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2020-05-06 04:04:47,426] INFO Attempting to create topic __confluent.support.metrics with 1 replicas, assuming 1 total brokers (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2020-05-06 04:04:48,038] INFO Creating topic __confluent.support.metrics with configuration {retention.ms=31536000000} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
[2020-05-06 04:04:48,652] INFO [Controller id=1001] New topics: [Set(__confluent.support.metrics)], deleted topics: [Set()], new partition replica assignment [Map(__confluent.support.metrics-0 -> Vector(1001))] (kafka.controller.KafkaController)
[2020-05-06 04:04:48,969] INFO [Controller id=1001] New partition creation callback for __confluent.support.metrics-0 (kafka.controller.KafkaController)
[2020-05-06 04:04:49,078] TRACE [Controller id=1001 epoch=1] Changed partition __confluent.support.metrics-0 state from NonExistentPartition to NewPartition with assigned replicas 1001 (state.change.logger)
[2020-05-06 04:04:49,297] TRACE [Controller id=1001 epoch=1] Changed state of replica 1001 for partition __confluent.support.metrics-0 from NonExistentReplica to NewReplica (state.change.logger)
[2020-05-06 04:04:50,431] TRACE [Controller id=1001 epoch=1] Changed partition __confluent.support.metrics-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1001, leaderEpoch=0, isr=List(1001), zkVersion=0) (state.change.logger)
[2020-05-06 04:04:50,489] TRACE [Controller id=1001 epoch=1] Sending become-leader LeaderAndIsr request PartitionState(controllerEpoch=1, leader=1001, leaderEpoch=0, isr=1001, zkVersion=0, replicas=1001, isNew=true) to broker 1001 for partition __confluent.support.metrics-0 (state.change.logger)
[2020-05-06 04:04:50,735] TRACE [Broker id=1001] Received LeaderAndIsr request PartitionState(controllerEpoch=1, leader=1001, leaderEpoch=0, isr=1001, zkVersion=0, replicas=1001, isNew=true) correlation id 1 from controller 1001 epoch 1 for partition __confluent.support.metrics-0 (state.change.logger)
[2020-05-06 04:04:50,740] INFO ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = []
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 10000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
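Note bootstrap.servers = [] in this internal metrics producer's configuration; an empty bootstrap list makes the KafkaProducer constructor throw, which is consistent with the "Failed to construct kafka producer" error logged a few lines below. For contrast, a minimal sketch (not the broker's own code) of the same byte-array producer with a non-empty bootstrap list:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class MetricsProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:29092"); // non-empty, unlike the dump above
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "1"); // matches acks = 1 above
        // With bootstrap.servers = [] this constructor would throw
        // KafkaException("Failed to construct kafka producer").
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // Construction succeeded; the metrics reporter would send records here.
        }
    }
}
```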
[2020-05-06 04:04:50,751] TRACE [Controller id=1001 epoch=1] Sending UpdateMetadata request PartitionState(controllerEpoch=1, leader=1001, leaderEpoch=0, isr=[1001], zkVersion=0, replicas=[1001], offlineReplicas=[]) to brokers Set(1001) for partition __confluent.support.metrics-0 (state.change.logger)
[2020-05-06 04:04:50,836] TRACE [Controller id=1001 epoch=1] Changed state of replica 1001 for partition __confluent.support.metrics-0 from NewReplica to OnlineReplica (state.change.logger)
[2020-05-06 04:04:50,854] TRACE [Broker id=1001] Handling LeaderAndIsr request correlationId 1 from controller 1001 epoch 1 starting the become-leader transition for partition __confluent.support.metrics-0 (state.change.logger)
[2020-05-06 04:04:50,943] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(__confluent.support.metrics-0) (kafka.server.ReplicaFetcherManager)
[2020-05-06 04:04:51,285] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2020-05-06 04:04:51,341] ERROR Could not submit metrics to Kafka topic __confluent.support.metrics: Failed to construct kafka producer (io.confluent.support.metrics.BaseMetricsReporter)
[2020-05-06 04:04:54,530] INFO [Log partition=__confluent.support.metrics-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-05-06 04:04:54,789] INFO [Log partition=__confluent.support.metrics-0, dir=/var/lib/kafka/data] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2885 ms (kafka.log.Log)
[2020-05-06 04:04:54,868] INFO Created log for partition __confluent.support.metrics-0 in /var/lib/kafka/data with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 31536000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-06 04:04:54,943] INFO [Partition __confluent.support.metrics-0 broker=1001] No checkpointed highwatermark is found for partition __confluent.support.metrics-0 (kafka.cluster.Partition)
[2020-05-06 04:04:54,981] INFO Replica loaded for partition __confluent.support.metrics-0 with initial high watermark 0 (kafka.cluster.Replica)
[2020-05-06 04:04:55,027] INFO [Partition __confluent.support.metrics-0 broker=1001] __confluent.support.metrics-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-05-06 04:04:55,396] TRACE [Broker id=1001] Stopped fetchers as part of become-leader request from controller 1001 epoch 1 with correlation id 1 for partition __confluent.support.metrics-0 (last update controller epoch 1) (state.change.logger)
[2020-05-06 04:04:55,423] TRACE [Broker id=1001] Completed LeaderAndIsr request correlationId 1 from controller 1001 epoch 1 for the become-leader transition for partition __confluent.support.metrics-0 (state.change.logger)
[2020-05-06 04:04:55,704] TRACE [Controller id=1001 epoch=1] Received response {error_code=0,partitions=[{topic=__confluent.support.metrics,partition=0,error_code=0}]} for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:29092 (id: 1001 rack: null) (state.change.logger)
[2020-05-06 04:04:55,996] TRACE [Broker id=1001] Cached leader info PartitionState(controllerEpoch=1, leader=1001, leaderEpoch=0, isr=[1001], zkVersion=0, replicas=[1001], offlineReplicas=[]) for partition __confluent.support.metrics-0 in response to UpdateMetadata request sent by controller 1001 epoch 1 with correlation id 2 (state.change.logger)
[2020-05-06 04:04:56,008] TRACE [Controller id=1001 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 2 sent to broker kafka:29092 (id: 1001 rack: null) (state.change.logger)
[2020-05-06 04:05:11,280] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)
[2020-05-06 04:07:57,995] WARN Client session timed out, have not heard from server in 13558ms for sessionid 0x1000001970c0001 (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:08:27,498] INFO Client session timed out, have not heard from server in 13558ms for sessionid 0x1000001970c0001, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:09:08,858] INFO Opening socket connection to server zookeeper/172.18.0.6:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:09:10,052] INFO Socket connection established to zookeeper/172.18.0.6:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:12:08,755] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-05-06 04:12:26,325] WARN Client session timed out, have not heard from server in 189412ms for sessionid 0x1000001970c0001 (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:12:32,570] INFO Client session timed out, have not heard from server in 189412ms for sessionid 0x1000001970c0001, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:12:47,218] DEBUG [Controller id=1001] Preferred replicas by broker Map(1001 -> Map(__confluent.support.metrics-0 -> Vector(1001))) (kafka.controller.KafkaController)
[2020-05-06 04:13:30,114] INFO Opening socket connection to server zookeeper/172.18.0.6:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:13:30,257] INFO Socket connection established to zookeeper/172.18.0.6:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:13:32,672] WARN Unable to reconnect to ZooKeeper service, session 0x1000001970c0001 has expired (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:13:33,528] INFO Unable to reconnect to ZooKeeper service, session 0x1000001970c0001 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:14:27,399] DEBUG [Controller id=1001] Topics not in preferred replica for broker 1001 Map() (kafka.controller.KafkaController)
[2020-05-06 04:14:46,841] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 12766 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-05-06 04:14:47,292] INFO EventThread shut down for session: 0x1000001970c0001 (org.apache.zookeeper.ClientCnxn)
[2020-05-06 04:14:59,339] INFO [ZooKeeperClient] Session expired. (kafka.zookeeper.ZooKeeperClient)
[2020-05-06 04:16:14,107] TRACE [Controller id=1001] Leader imbalance ratio for broker 1001 is 0.0 (kafka.controller.KafkaController)
[2020-05-06 04:24:38,249] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 5807 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-05-06 04:25:45,911] DEBUG [Controller id=1001] Resigning (kafka.controller.KafkaController)
[2020-05-06 04:26:10,232] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-05-06 04:26:10,253] DEBUG [Controller id=1001] Unregister BrokerModifications handler for Set(1001) (kafka.controller.KafkaController)
[2020-05-06 04:26:10,340] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@3eb738bb (org.apache.zookeeper.ZooKeeper)
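The tail of this log shows the broker's ZooKeeper client going silent for far longer than the configured zookeeper.session.timeout.ms = 6000 (gaps of 13558 ms, then 189412 ms) until the session expires and a new one is initialized; gaps that large often mean the container or host was paused or resource-starved rather than a Kafka-level fault. A standalone connectivity probe (a sketch; the 18000 ms session timeout is a hypothetical value, chosen only to be more forgiving than the broker's 6000 ms) can help rule out basic reachability problems with zookeeper:2181:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkProbe {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // 18000 ms is a hypothetical, deliberately generous session timeout.
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        boolean ok = connected.await(10, TimeUnit.SECONDS);
        System.out.println(ok
                ? "connected, session 0x" + Long.toHexString(zk.getSessionId())
                : "no connection within 10s");
        zk.close();
    }
}
```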