Created January 24, 2017 16:41
[2017-01-24 11:07:56,951] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-01-24 11:07:56,953] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-24 11:07:56,953] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-24 11:07:56,953] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-24 11:07:56,953] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2017-01-24 11:07:56,977] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-01-24 11:07:56,977] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2017-01-24 11:07:56,996] INFO Server environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,996] INFO Server environment:host.name=Carbon (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,996] INFO Server environment:java.version=1.8.0_111 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,996] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,996] INFO Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,996] INFO Server environment:java.class.path=:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/argparse4j-0.5.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-api-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-file-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-json-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-runtime-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/guava-18.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/hk2-api-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-annotations-2.6.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-core-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-databind-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javassist-3.18.2-GA.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.annotation-api-1.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.inject-1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.inject-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.servlet-api-3.1.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-client-2.22.2.jar:/hom
e/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-common-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-guava-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-server-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jopt-simple-4.9.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka_2.11-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka_2.11-0.10.1.1-sources.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka_2.11-0.10.1.1-test-sources.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-clients-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-log4j-appender-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-streams-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-streams-examples-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/k
afka-tools-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/log4j-1.2.17.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/lz4-1.3.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/metrics-core-2.2.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/osgi-resource-locator-1.0.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/reflections-0.9.10.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/rocksdbjni-4.9.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/scala-library-2.11.8.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/slf4j-api-1.7.21.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/slf4j-log4j12-1.7.21.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/snappy-java-1.1.2.6.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/validation-api-1.1.0.Final.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/zkclient-0.9.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/zookeeper-3.4.8.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:os.version=4.4.0-59-generic (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,997] INFO Server environment:user.name=trisberg (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,998] INFO Server environment:user.home=/home/trisberg (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:56,998] INFO Server environment:user.dir=/home/trisberg/Developer/kafka_2.11-0.10.1.1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:57,008] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:57,008] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:57,009] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:57,028] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:07:57,258] INFO KafkaConfig values:
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	authorizer.class.name =
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	default.replication.factor = 1
	delete.topic.enable = false
	fetch.purgatory.purge.interval.requests = 1000
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name =
	inter.broker.protocol.version = 0.10.1-IV2
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /tmp/kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.format.version = 0.10.1-IV2
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides =
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 1440
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 3
	offsets.topic.segment.bytes = 104857600
	port = 9092
	principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
	producer.purgatory.purge.interval.requests = 1000
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism.inter.broker.protocol = GSSAPI
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = null
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	unclean.leader.election.enable = true
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 6000
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2017-01-24 11:07:57,304] INFO starting (kafka.server.KafkaServer)
[2017-01-24 11:07:57,316] INFO [ThrottledRequestReaper-Fetch], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-01-24 11:07:57,318] INFO [ThrottledRequestReaper-Produce], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-01-24 11:07:57,321] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2017-01-24 11:07:57,334] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-01-24 11:07:57,339] INFO Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:host.name=Carbon (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.version=1.8.0_111 (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.class.path=:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/argparse4j-0.5.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-api-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-file-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-json-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/connect-runtime-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/guava-18.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/hk2-api-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-annotations-2.6.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-core-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-databind-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javassist-3.18.2-GA.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.annotation-api-1.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.inject-1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.inject-2.4.0-b34.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.servlet-api-3.1.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-client-2.22.2.jar:/hom
e/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-common-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-guava-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jersey-server-2.22.2.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/jopt-simple-4.9.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka_2.11-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka_2.11-0.10.1.1-sources.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka_2.11-0.10.1.1-test-sources.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-clients-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-log4j-appender-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-streams-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/kafka-streams-examples-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/k
afka-tools-0.10.1.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/log4j-1.2.17.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/lz4-1.3.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/metrics-core-2.2.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/osgi-resource-locator-1.0.1.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/reflections-0.9.10.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/rocksdbjni-4.9.0.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/scala-library-2.11.8.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/slf4j-api-1.7.21.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/slf4j-log4j12-1.7.21.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/snappy-java-1.1.2.6.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/validation-api-1.1.0.Final.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/zkclient-0.9.jar:/home/trisberg/Developer/kafka_2.11-0.10.1.1/bin/../libs/zookeeper-3.4.8.jar (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:os.version=4.4.0-59-generic (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:user.name=trisberg (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,339] INFO Client environment:user.home=/home/trisberg (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,340] INFO Client environment:user.dir=/home/trisberg/Developer/kafka_2.11-0.10.1.1 (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,340] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@5e82df6a (org.apache.zookeeper.ZooKeeper)
[2017-01-24 11:07:57,352] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2017-01-24 11:07:57,359] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-01-24 11:07:57,422] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-01-24 11:07:57,423] INFO Accepted socket connection from /127.0.0.1:60726 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:07:57,477] INFO Client attempting to establish new session at /127.0.0.1:60726 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:57,479] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
[2017-01-24 11:07:57,497] INFO Established session 0x159d13bd7750000 with negotiated timeout 6000 for client /127.0.0.1:60726 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:07:57,499] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x159d13bd7750000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-01-24 11:07:57,500] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2017-01-24 11:07:57,529] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x5 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,543] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xb zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,556] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,597] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x1b zxid:0x11 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,603] INFO Cluster ID = xSfv8zugQjeaY0yY0Aj2TA (kafka.server.KafkaServer)
[2017-01-24 11:07:57,631] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
[2017-01-24 11:07:57,639] INFO Loading logs. (kafka.log.LogManager)
[2017-01-24 11:07:57,654] INFO Logs loading complete in 15 ms. (kafka.log.LogManager)
[2017-01-24 11:07:57,686] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2017-01-24 11:07:57,687] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2017-01-24 11:07:57,691] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2017-01-24 11:07:57,726] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2017-01-24 11:07:57,729] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
[2017-01-24 11:07:57,745] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-01-24 11:07:57,746] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-01-24 11:07:57,774] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-01-24 11:07:57,780] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-01-24 11:07:57,781] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2017-01-24 11:07:57,784] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:setData cxid:0x25 zxid:0x15 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,829] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:delete cxid:0x34 zxid:0x17 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,840] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-01-24 11:07:57,849] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-01-24 11:07:57,857] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-01-24 11:07:57,866] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:07:57,867] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:07:57,868] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:07:57,897] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2017-01-24 11:07:57,934] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2017-01-24 11:07:57,936] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-01-24 11:07:57,937] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x3f zxid:0x18 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,937] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x40 zxid:0x19 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:07:57,940] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-01-24 11:07:57,942] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(Carbon,9092,PLAINTEXT) (kafka.utils.ZkUtils)
[2017-01-24 11:07:57,943] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2017-01-24 11:07:57,976] INFO Kafka version : 0.10.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2017-01-24 11:07:57,976] INFO Kafka commitId : f10ef2720b03b247 (org.apache.kafka.common.utils.AppInfoParser)
[2017-01-24 11:07:57,978] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2017-01-24 11:17:57,867] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:18:58,263] INFO Accepted socket connection from /127.0.0.1:60874 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:18:58,270] INFO Client attempting to establish new session at /127.0.0.1:60874 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:18:58,290] INFO Established session 0x159d13bd7750001 with negotiated timeout 10000 for client /127.0.0.1:60874 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:18:58,583] INFO Accepted socket connection from /127.0.0.1:60876 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:18:58,589] INFO Client attempting to establish new session at /127.0.0.1:60876 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:18:58,590] INFO Established session 0x159d13bd7750002 with negotiated timeout 10000 for client /127.0.0.1:60876 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:18:59,685] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750001 type:setData cxid:0x6 zxid:0x1d txntype:-1 reqpath:n/a Error Path:/config/topics/ticktock.time Error:KeeperErrorCode = NoNode for /config/topics/ticktock.time (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:18:59,704] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750001 type:create cxid:0x8 zxid:0x1e txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:18:59,739] INFO Processed session termination for sessionid: 0x159d13bd7750001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:18:59,742] INFO Closed socket connection for client /127.0.0.1:60874 which had sessionid 0x159d13bd7750001 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-24 11:18:59,779] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x4a zxid:0x22 txntype:-1 reqpath:n/a Error Path:/brokers/topics/ticktock.time/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/ticktock.time/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:18:59,781] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x4b zxid:0x23 txntype:-1 reqpath:n/a Error Path:/brokers/topics/ticktock.time/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/ticktock.time/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:18:59,847] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions ticktock.time-0 (kafka.server.ReplicaFetcherManager)
[2017-01-24 11:18:59,979] INFO Completed load of log ticktock.time-0 with 1 log segments and log end offset 0 in 65 ms (kafka.log.Log)
[2017-01-24 11:19:00,001] INFO Created log for partition [ticktock.time,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:00,004] INFO Partition [ticktock.time,0] on broker 0: No checkpointed highwatermark is found for partition [ticktock.time,0] (kafka.cluster.Partition)
[2017-01-24 11:19:00,403] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:setData cxid:0x57 zxid:0x27 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:19:00,408] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x58 zxid:0x28 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:19:00,415] INFO Topic creation {"version":1,"partitions":{"45":[0],"34":[0],"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"40":[0],"15":[0],"11":[0],"9":[0],"44":[0],"33":[0],"22":[0],"26":[0],"37":[0],"13":[0],"46":[0],"24":[0],"35":[0],"16":[0],"5":[0],"10":[0],"48":[0],"21":[0],"43":[0],"32":[0],"49":[0],"6":[0],"36":[0],"1":[0],"39":[0],"17":[0],"25":[0],"14":[0],"47":[0],"31":[0],"42":[0],"0":[0],"20":[0],"27":[0],"2":[0],"38":[0],"18":[0],"30":[0],"7":[0],"29":[0],"41":[0],"3":[0],"28":[0]}} (kafka.admin.AdminUtils$)
[2017-01-24 11:19:00,420] INFO [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2017-01-24 11:19:00,689] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x97 zxid:0x2b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/32 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/32 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:19:00,695] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x99 zxid:0x2c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:19:00,748] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x9f zxid:0x30 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/16 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/16 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,780] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xa2 zxid:0x33 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/49 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/49 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,794] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xa5 zxid:0x36 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/44 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/44 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,827] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xaa zxid:0x39 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/28 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/28 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,856] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xad zxid:0x3c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/17 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/17 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,877] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xb1 zxid:0x3f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/23 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/23 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,919] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xb6 zxid:0x42 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/7 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/7 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:00,964] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xba zxid:0x45 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/4 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/4 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,003] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xbd zxid:0x48 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/29 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/29 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,041] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xc2 zxid:0x4b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/35 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/35 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,077] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xc6 zxid:0x4e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/3 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/3 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,118] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xc9 zxid:0x51 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/24 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/24 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,163] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xcf zxid:0x54 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/41 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/41 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,204] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xd2 zxid:0x57 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,211] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xd5 zxid:0x5a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/38 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/38 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,221] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xd8 zxid:0x5d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/13 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/13 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,243] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xde zxid:0x60 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/8 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/8 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,252] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xe1 zxid:0x63 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/5 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/5 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,257] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xe4 zxid:0x66 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/39 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/39 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,265] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xe7 zxid:0x69 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/36 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/36 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,269] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xea zxid:0x6c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/40 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/40 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,275] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xed zxid:0x6f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/45 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/45 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,283] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xf0 zxid:0x72 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/15 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/15 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,289] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xf3 zxid:0x75 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/33 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/33 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,295] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xf6 zxid:0x78 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/37 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/37 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,299] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xf9 zxid:0x7b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/21 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/21 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,304] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xfc zxid:0x7e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/6 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/6 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,310] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0xff zxid:0x81 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/11 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/11 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,315] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x102 zxid:0x84 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/20 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/20 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,320] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x105 zxid:0x87 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/47 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/47 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,331] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x10a zxid:0x8a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/2 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/2 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,340] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x10e zxid:0x8d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/27 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/27 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,344] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x111 zxid:0x90 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/34 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/34 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,349] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x114 zxid:0x93 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/9 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/9 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,355] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x117 zxid:0x96 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/22 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/22 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,362] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x11a zxid:0x99 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/42 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/42 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,379] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x11d zxid:0x9c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/14 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/14 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,394] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x120 zxid:0x9f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/25 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/25 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,403] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x123 zxid:0xa2 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/10 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/10 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,410] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x126 zxid:0xa5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/48 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/48 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,417] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x129 zxid:0xa8 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/31 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/31 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,428] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x12c zxid:0xab txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/18 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/18 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,436] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x131 zxid:0xae txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/19 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/19 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,441] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x135 zxid:0xb1 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/12 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/12 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,453] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x138 zxid:0xb4 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/46 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/46 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,460] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x13b zxid:0xb7 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/43 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/43 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,471] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x13e zxid:0xba txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/1 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/1 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,479] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x141 zxid:0xbd txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/26 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/26 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,485] INFO Got user-level KeeperException when processing sessionid:0x159d13bd7750000 type:create cxid:0x144 zxid:0xc0 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/30 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/30 (org.apache.zookeeper.server.PrepRequestProcessor) | |
[2017-01-24 11:19:01,553] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager) | |
[2017-01-24 11:19:01,568] INFO Completed load of log __consumer_offsets-0 with 1 log segments and log end offset 0 in 9 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,569] INFO Created log for partition [__consumer_offsets,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,573] INFO Partition [__consumer_offsets,0] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,0] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,582] INFO Completed load of log __consumer_offsets-29 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,584] INFO Created log for partition [__consumer_offsets,29] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,584] INFO Partition [__consumer_offsets,29] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,29] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,590] INFO Completed load of log __consumer_offsets-48 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,591] INFO Created log for partition [__consumer_offsets,48] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,592] INFO Partition [__consumer_offsets,48] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,48] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,598] INFO Completed load of log __consumer_offsets-10 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,600] INFO Created log for partition [__consumer_offsets,10] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,600] INFO Partition [__consumer_offsets,10] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,10] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,605] INFO Completed load of log __consumer_offsets-45 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,606] INFO Created log for partition [__consumer_offsets,45] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,607] INFO Partition [__consumer_offsets,45] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,45] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,611] INFO Completed load of log __consumer_offsets-26 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,613] INFO Created log for partition [__consumer_offsets,26] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,613] INFO Partition [__consumer_offsets,26] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,26] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,619] INFO Completed load of log __consumer_offsets-7 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,620] INFO Created log for partition [__consumer_offsets,7] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,620] INFO Partition [__consumer_offsets,7] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,7] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,625] INFO Completed load of log __consumer_offsets-42 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,626] INFO Created log for partition [__consumer_offsets,42] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,627] INFO Partition [__consumer_offsets,42] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,42] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,632] INFO Completed load of log __consumer_offsets-4 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,633] INFO Created log for partition [__consumer_offsets,4] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,633] INFO Partition [__consumer_offsets,4] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,4] (kafka.cluster.Partition)
[2017-01-24 11:19:01,639] INFO Completed load of log __consumer_offsets-23 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,640] INFO Created log for partition [__consumer_offsets,23] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,640] INFO Partition [__consumer_offsets,23] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,23] (kafka.cluster.Partition)
[2017-01-24 11:19:01,645] INFO Completed load of log __consumer_offsets-1 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,647] INFO Created log for partition [__consumer_offsets,1] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,647] INFO Partition [__consumer_offsets,1] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,1] (kafka.cluster.Partition)
[2017-01-24 11:19:01,663] INFO Completed load of log __consumer_offsets-20 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,665] INFO Created log for partition [__consumer_offsets,20] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,665] INFO Partition [__consumer_offsets,20] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,20] (kafka.cluster.Partition)
[2017-01-24 11:19:01,671] INFO Completed load of log __consumer_offsets-39 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,672] INFO Created log for partition [__consumer_offsets,39] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,673] INFO Partition [__consumer_offsets,39] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,39] (kafka.cluster.Partition)
[2017-01-24 11:19:01,681] INFO Completed load of log __consumer_offsets-17 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,682] INFO Created log for partition [__consumer_offsets,17] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,683] INFO Partition [__consumer_offsets,17] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,17] (kafka.cluster.Partition)
[2017-01-24 11:19:01,688] INFO Completed load of log __consumer_offsets-36 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,689] INFO Created log for partition [__consumer_offsets,36] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,689] INFO Partition [__consumer_offsets,36] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,36] (kafka.cluster.Partition)
[2017-01-24 11:19:01,694] INFO Completed load of log __consumer_offsets-14 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,695] INFO Created log for partition [__consumer_offsets,14] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,696] INFO Partition [__consumer_offsets,14] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,14] (kafka.cluster.Partition)
[2017-01-24 11:19:01,701] INFO Completed load of log __consumer_offsets-33 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,702] INFO Created log for partition [__consumer_offsets,33] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,703] INFO Partition [__consumer_offsets,33] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,33] (kafka.cluster.Partition)
[2017-01-24 11:19:01,708] INFO Completed load of log __consumer_offsets-49 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,709] INFO Created log for partition [__consumer_offsets,49] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,710] INFO Partition [__consumer_offsets,49] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,49] (kafka.cluster.Partition)
[2017-01-24 11:19:01,714] INFO Completed load of log __consumer_offsets-11 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,715] INFO Created log for partition [__consumer_offsets,11] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,716] INFO Partition [__consumer_offsets,11] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,11] (kafka.cluster.Partition)
[2017-01-24 11:19:01,720] INFO Completed load of log __consumer_offsets-30 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,722] INFO Created log for partition [__consumer_offsets,30] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,722] INFO Partition [__consumer_offsets,30] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,30] (kafka.cluster.Partition)
[2017-01-24 11:19:01,727] INFO Completed load of log __consumer_offsets-46 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,728] INFO Created log for partition [__consumer_offsets,46] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,728] INFO Partition [__consumer_offsets,46] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,46] (kafka.cluster.Partition)
[2017-01-24 11:19:01,732] INFO Completed load of log __consumer_offsets-27 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,734] INFO Created log for partition [__consumer_offsets,27] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,734] INFO Partition [__consumer_offsets,27] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,27] (kafka.cluster.Partition)
[2017-01-24 11:19:01,740] INFO Completed load of log __consumer_offsets-8 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,742] INFO Created log for partition [__consumer_offsets,8] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,742] INFO Partition [__consumer_offsets,8] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,8] (kafka.cluster.Partition)
[2017-01-24 11:19:01,747] INFO Completed load of log __consumer_offsets-24 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,749] INFO Created log for partition [__consumer_offsets,24] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,749] INFO Partition [__consumer_offsets,24] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,24] (kafka.cluster.Partition)
[2017-01-24 11:19:01,753] INFO Completed load of log __consumer_offsets-43 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,755] INFO Created log for partition [__consumer_offsets,43] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,755] INFO Partition [__consumer_offsets,43] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,43] (kafka.cluster.Partition)
[2017-01-24 11:19:01,759] INFO Completed load of log __consumer_offsets-5 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,760] INFO Created log for partition [__consumer_offsets,5] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,761] INFO Partition [__consumer_offsets,5] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,5] (kafka.cluster.Partition)
[2017-01-24 11:19:01,765] INFO Completed load of log __consumer_offsets-21 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,766] INFO Created log for partition [__consumer_offsets,21] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,766] INFO Partition [__consumer_offsets,21] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,21] (kafka.cluster.Partition)
[2017-01-24 11:19:01,770] INFO Completed load of log __consumer_offsets-2 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,771] INFO Created log for partition [__consumer_offsets,2] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,772] INFO Partition [__consumer_offsets,2] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,2] (kafka.cluster.Partition)
[2017-01-24 11:19:01,776] INFO Completed load of log __consumer_offsets-40 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,777] INFO Created log for partition [__consumer_offsets,40] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,777] INFO Partition [__consumer_offsets,40] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,40] (kafka.cluster.Partition)
[2017-01-24 11:19:01,781] INFO Completed load of log __consumer_offsets-37 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,782] INFO Created log for partition [__consumer_offsets,37] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,782] INFO Partition [__consumer_offsets,37] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,37] (kafka.cluster.Partition)
[2017-01-24 11:19:01,786] INFO Completed load of log __consumer_offsets-18 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,787] INFO Created log for partition [__consumer_offsets,18] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,787] INFO Partition [__consumer_offsets,18] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,18] (kafka.cluster.Partition)
[2017-01-24 11:19:01,791] INFO Completed load of log __consumer_offsets-34 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,792] INFO Created log for partition [__consumer_offsets,34] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,793] INFO Partition [__consumer_offsets,34] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,34] (kafka.cluster.Partition)
[2017-01-24 11:19:01,797] INFO Completed load of log __consumer_offsets-15 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,798] INFO Created log for partition [__consumer_offsets,15] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,799] INFO Partition [__consumer_offsets,15] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,15] (kafka.cluster.Partition)
[2017-01-24 11:19:01,803] INFO Completed load of log __consumer_offsets-12 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,804] INFO Created log for partition [__consumer_offsets,12] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,804] INFO Partition [__consumer_offsets,12] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,12] (kafka.cluster.Partition)
[2017-01-24 11:19:01,809] INFO Completed load of log __consumer_offsets-31 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,812] INFO Created log for partition [__consumer_offsets,31] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,813] INFO Partition [__consumer_offsets,31] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,31] (kafka.cluster.Partition)
[2017-01-24 11:19:01,816] INFO Completed load of log __consumer_offsets-9 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,817] INFO Created log for partition [__consumer_offsets,9] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,817] INFO Partition [__consumer_offsets,9] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,9] (kafka.cluster.Partition)
[2017-01-24 11:19:01,821] INFO Completed load of log __consumer_offsets-47 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2017-01-24 11:19:01,822] INFO Created log for partition [__consumer_offsets,47] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,823] INFO Partition [__consumer_offsets,47] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,47] (kafka.cluster.Partition)
[2017-01-24 11:19:01,826] INFO Completed load of log __consumer_offsets-19 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,827] INFO Created log for partition [__consumer_offsets,19] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,827] INFO Partition [__consumer_offsets,19] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,19] (kafka.cluster.Partition)
[2017-01-24 11:19:01,831] INFO Completed load of log __consumer_offsets-28 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log)
[2017-01-24 11:19:01,833] INFO Created log for partition [__consumer_offsets,28] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,833] INFO Partition [__consumer_offsets,28] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,28] (kafka.cluster.Partition)
[2017-01-24 11:19:01,843] INFO Completed load of log __consumer_offsets-38 with 1 log segments and log end offset 0 in 4 ms (kafka.log.Log)
[2017-01-24 11:19:01,844] INFO Created log for partition [__consumer_offsets,38] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-01-24 11:19:01,844] INFO Partition [__consumer_offsets,38] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,38] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,849] INFO Completed load of log __consumer_offsets-35 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,850] INFO Created log for partition [__consumer_offsets,35] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,850] INFO Partition [__consumer_offsets,35] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,35] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,855] INFO Completed load of log __consumer_offsets-44 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,856] INFO Created log for partition [__consumer_offsets,44] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,856] INFO Partition [__consumer_offsets,44] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,44] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,861] INFO Completed load of log __consumer_offsets-6 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,862] INFO Created log for partition [__consumer_offsets,6] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,862] INFO Partition [__consumer_offsets,6] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,6] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,866] INFO Completed load of log __consumer_offsets-25 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,868] INFO Created log for partition [__consumer_offsets,25] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,868] INFO Partition [__consumer_offsets,25] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,25] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,872] INFO Completed load of log __consumer_offsets-16 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,874] INFO Created log for partition [__consumer_offsets,16] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,874] INFO Partition [__consumer_offsets,16] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,16] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,878] INFO Completed load of log __consumer_offsets-22 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,879] INFO Created log for partition [__consumer_offsets,22] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,879] INFO Partition [__consumer_offsets,22] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,22] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,883] INFO Completed load of log __consumer_offsets-41 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,884] INFO Created log for partition [__consumer_offsets,41] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,884] INFO Partition [__consumer_offsets,41] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,41] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,888] INFO Completed load of log __consumer_offsets-32 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,889] INFO Created log for partition [__consumer_offsets,32] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,889] INFO Partition [__consumer_offsets,32] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,32] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,892] INFO Completed load of log __consumer_offsets-3 with 1 log segments and log end offset 0 in 0 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,893] INFO Created log for partition [__consumer_offsets,3] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,894] INFO Partition [__consumer_offsets,3] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,3] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,898] INFO Completed load of log __consumer_offsets-13 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log) | |
[2017-01-24 11:19:01,899] INFO Created log for partition [__consumer_offsets,13] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager) | |
[2017-01-24 11:19:01,899] INFO Partition [__consumer_offsets,13] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,13] (kafka.cluster.Partition) | |
[2017-01-24 11:19:01,907] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,22] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,923] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,22] in 14 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,923] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,25] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,926] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,25] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,926] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,28] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,929] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,28] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,929] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,31] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,932] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,31] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,932] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,34] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,934] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,34] in 2 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,934] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,37] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,937] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,37] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,937] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,40] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,942] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,40] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,942] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,43] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,946] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,43] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,946] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,46] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,949] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,46] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,950] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,49] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,952] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,49] in 2 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,953] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,41] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,956] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,41] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,956] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,44] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,960] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,44] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,960] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,47] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,963] INFO [GroupCoordinator 0]: Preparing to restabilize group ticktock with old generation 0 (kafka.coordinator.GroupCoordinator) | |
[2017-01-24 11:19:01,964] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,47] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,964] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,1] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,974] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,1] in 9 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,974] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,4] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,975] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,4] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,975] INFO [GroupCoordinator 0]: Stabilized group ticktock generation 1 (kafka.coordinator.GroupCoordinator) | |
[2017-01-24 11:19:01,975] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,7] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,976] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,7] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,976] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,10] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,977] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,10] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,977] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,13] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,978] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,13] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,979] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,16] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,980] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,16] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,980] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,19] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,981] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,19] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,982] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,2] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,983] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,2] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,983] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,5] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,985] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,5] in 2 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,985] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,8] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,987] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,8] in 2 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,987] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,11] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,989] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,11] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,989] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,14] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,991] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,14] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,991] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,17] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,993] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,17] in 2 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,993] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,20] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,994] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,20] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,994] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,23] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,999] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,23] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:01,999] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,26] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,003] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,26] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,003] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,29] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,005] INFO [GroupCoordinator 0]: Assignment received from leader for group ticktock for generation 1 (kafka.coordinator.GroupCoordinator) | |
[2017-01-24 11:19:02,007] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,29] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,007] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,32] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,011] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,32] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,011] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,35] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,016] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,35] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,016] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,38] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,020] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,38] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,020] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,0] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,024] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,0] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,024] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,3] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,028] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,3] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,028] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,6] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,032] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,6] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,032] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,9] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,036] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,9] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,036] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,12] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,039] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,12] in 3 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,039] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,15] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,043] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,15] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,043] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,18] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,048] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,18] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,048] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,21] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,054] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,21] in 6 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,054] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,24] (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,059] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,24] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager) | |
[2017-01-24 11:19:02,059] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,27] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,063] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,27] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,064] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,30] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,068] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,30] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,068] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,33] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,073] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,33] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,073] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,36] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,078] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,36] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,078] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,39] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,085] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,39] in 7 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,085] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,42] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,089] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,42] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,089] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,45] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,094] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,45] in 5 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,094] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,48] (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:02,099] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,48] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:19:06,471] INFO Processed session termination for sessionid: 0x159d13bd7750002 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:19:06,478] INFO Closed socket connection for client /127.0.0.1:60876 which had sessionid 0x159d13bd7750002 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-24 11:27:57,868] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 2 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:28:37,330] INFO Accepted socket connection from /127.0.0.1:32786 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:28:37,333] INFO Client attempting to establish new session at /127.0.0.1:32786 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:28:37,335] INFO Established session 0x159d13bd7750003 with negotiated timeout 30000 for client /127.0.0.1:32786 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:28:37,360] INFO Processed session termination for sessionid: 0x159d13bd7750003 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:28:37,361] INFO Closed socket connection for client /127.0.0.1:32786 which had sessionid 0x159d13bd7750003 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-24 11:37:31,395] INFO [GroupCoordinator 0]: Preparing to restabilize group ticktock with old generation 1 (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:37:31,397] INFO [GroupCoordinator 0]: Group ticktock with generation 2 is now empty (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:37:57,867] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-24 11:38:00,000] INFO Accepted socket connection from /127.0.0.1:32930 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:38:00,005] INFO Client attempting to establish new session at /127.0.0.1:32930 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:38:00,025] INFO Established session 0x159d13bd7750004 with negotiated timeout 10000 for client /127.0.0.1:32930 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:38:01,223] INFO Accepted socket connection from /127.0.0.1:32934 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:38:01,234] INFO Client attempting to establish new session at /127.0.0.1:32934 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:38:01,253] INFO Established session 0x159d13bd7750005 with negotiated timeout 10000 for client /127.0.0.1:32934 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:38:01,398] INFO Processed session termination for sessionid: 0x159d13bd7750004 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:38:01,399] INFO Closed socket connection for client /127.0.0.1:32930 which had sessionid 0x159d13bd7750004 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-24 11:38:02,035] INFO [GroupCoordinator 0]: Preparing to restabilize group ticktock with old generation 2 (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:38:02,036] INFO [GroupCoordinator 0]: Stabilized group ticktock generation 3 (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:38:02,064] INFO [GroupCoordinator 0]: Assignment received from leader for group ticktock for generation 3 (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:38:02,536] INFO Processed session termination for sessionid: 0x159d13bd7750005 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:38:02,561] INFO Closed socket connection for client /127.0.0.1:32934 which had sessionid 0x159d13bd7750005 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-24 11:38:02,852] INFO Accepted socket connection from /127.0.0.1:32946 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-24 11:38:02,852] INFO Client attempting to establish new session at /127.0.0.1:32946 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:38:02,854] INFO Established session 0x159d13bd7750006 with negotiated timeout 10000 for client /127.0.0.1:32946 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-24 11:38:02,935] INFO Processed session termination for sessionid: 0x159d13bd7750006 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-24 11:38:02,937] INFO Closed socket connection for client /127.0.0.1:32946 which had sessionid 0x159d13bd7750006 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-24 11:40:27,668] INFO [GroupCoordinator 0]: Preparing to restabilize group ticktock with old generation 3 (kafka.coordinator.GroupCoordinator)
[2017-01-24 11:40:27,668] INFO [GroupCoordinator 0]: Group ticktock with generation 4 is now empty (kafka.coordinator.GroupCoordinator)