@emetsger
Created October 3, 2017 18:52
esm:~ $ cat Desktop/kafka-build-tail.txt
13:54:40.961 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=2)
13:54:40.961 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Found least loaded node 127.0.0.1:55383 (id: -1 rack: null)
13:54:40.961 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Initialize connection to node -1 for sending metadata request
13:54:40.961 [-client-thread | adminclient-1] DEBUG [ NetworkClient] - Initiating connection to node -1 at 127.0.0.1:55383.
13:54:40.962 [-client-thread | adminclient-1] DEBUG [ Selector] - Connection with /127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:927)
at java.lang.Thread.run(Thread.java:748)
13:54:40.962 [-client-thread | adminclient-1] DEBUG [ NetworkClient] - Node -1 disconnected.
13:54:40.962 [-client-thread | adminclient-1] WARN [ NetworkClient] - Connection to node -1 could not be established. Broker may not be available.
13:54:40.962 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: KafkaClient#poll retrieved 0 response(s)
13:54:40.962 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=1)
13:54:40.962 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Removing node 127.0.0.1:55383 (id: -1 rack: null) from least loaded node selection: is-blacked-out: true, in-flight-requests: 0
13:54:40.962 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Least loaded node selection failed to find an available node
13:54:40.962 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Give up sending metadata request since no node is available
13:54:40.964 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: KafkaClient#poll retrieved 0 response(s)
13:54:40.964 [-client-thread | adminclient-1] DEBUG [ KafkaAdminClient$Call] - Call(callName=createTopics, deadlineMs=1507053280963) timed out at 1507053280964 after 1 attempt(s)
java.lang.Exception: TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:475)
at org.apache.kafka.clients.admin.KafkaAdminClient$TimeoutProcessor.handleTimeouts(KafkaAdminClient.java:591)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.timeoutNewCalls(KafkaAdminClient.java:663)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:908)
at java.lang.Thread.run(Thread.java:748)
13:54:40.964 [-client-thread | adminclient-1] DEBUG [KafkaAdminClient$AdminClientRu] - adminclient-1: timed out 1 new calls.
13:54:40.964 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=1200000)
13:54:40.964 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Removing node 127.0.0.1:55383 (id: -1 rack: null) from least loaded node selection: is-blacked-out: true, in-flight-requests: 0
13:54:40.964 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Least loaded node selection failed to find an available node
13:54:40.964 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Give up sending metadata request since no node is available
13:54:40.966 [ main] ERROR [ TestContextManager] - Caught exception while allowing TestExecutionListener [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@72ccd81a] to prepare test instance [info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest@6d8792db]
java.lang.IllegalStateException: Failed to load ApplicationContext
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:125)
at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:107)
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:117)
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:83)
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:242)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:227)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:246)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaEmbedded': Invocation of init method failed; nested exception is java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1702)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:414)
at org.springframework.kafka.test.context.EmbeddedKafkaContextCustomizer.customizeContext(EmbeddedKafkaContextCustomizer.java:76)
at org.springframework.test.context.support.AbstractContextLoader.customizeContext(AbstractContextLoader.java:187)
at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:127)
at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60)
at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:109)
at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:246)
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99)
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117)
... 29 common frames omitted
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:213)
at org.springframework.kafka.test.rule.KafkaEmbedded.before(KafkaEmbedded.java:293)
at org.springframework.kafka.test.rule.KafkaEmbedded.afterPropertiesSet(KafkaEmbedded.java:218)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1761)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1698)
... 38 common frames omitted
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.117 s <<< FAILURE! - in info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest
[ERROR] testORMapEventCreationRMapIriRMapEventTargetTypeRMapValueListOfRMapIri(info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest) Time elapsed: 0.005 s <<< ERROR!
java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaEmbedded': Invocation of init method failed; nested exception is java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
13:54:41.017 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: KafkaClient#poll retrieved 0 response(s)
13:54:41.017 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=1200000)
13:54:41.017 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Found least loaded node 127.0.0.1:55383 (id: -1 rack: null)
13:54:41.017 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Initialize connection to node -1 for sending metadata request
13:54:41.018 [-client-thread | adminclient-1] DEBUG [ NetworkClient] - Initiating connection to node -1 at 127.0.0.1:55383.
13:54:41.018 [-client-thread | adminclient-1] DEBUG [ Selector] - Connection with /127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:927)
at java.lang.Thread.run(Thread.java:748)
13:54:41.018 [-client-thread | adminclient-1] DEBUG [ NetworkClient] - Node -1 disconnected.
13:54:41.018 [-client-thread | adminclient-1] WARN [ NetworkClient] - Connection to node -1 could not be established. Broker may not be available.
13:54:41.018 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: KafkaClient#poll retrieved 0 response(s)
13:54:41.018 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=1200000)
13:54:41.018 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Removing node 127.0.0.1:55383 (id: -1 rack: null) from least loaded node selection: is-blacked-out: true, in-flight-requests: 0
13:54:41.018 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Least loaded node selection failed to find an available node
13:54:41.018 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Give up sending metadata request since no node is available
13:54:41.070 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: KafkaClient#poll retrieved 0 response(s)
13:54:41.071 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=1200000)
13:54:41.071 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Found least loaded node 127.0.0.1:55383 (id: -1 rack: null)
13:54:41.071 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Initialize connection to node -1 for sending metadata request
13:54:41.071 [-client-thread | adminclient-1] DEBUG [ NetworkClient] - Initiating connection to node -1 at 127.0.0.1:55383.
13:54:41.071 [-client-thread | adminclient-1] DEBUG [ Selector] - Connection with /127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:927)
at java.lang.Thread.run(Thread.java:748)
13:54:41.071 [-client-thread | adminclient-1] DEBUG [ NetworkClient] - Node -1 disconnected.
13:54:41.072 [-client-thread | adminclient-1] WARN [ NetworkClient] - Connection to node -1 could not be established. Broker may not be available.
13:54:41.072 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: KafkaClient#poll retrieved 0 response(s)
13:54:41.072 [-client-thread | adminclient-1] TRACE [KafkaAdminClient$AdminClientRu] - adminclient-1: entering KafkaClient#poll(timeout=1200000)
13:54:41.073 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Removing node 127.0.0.1:55383 (id: -1 rack: null) from least loaded node selection: is-blacked-out: true, in-flight-requests: 0
13:54:41.073 [-client-thread | adminclient-1] TRACE [ NetworkClient] - Least loaded node selection failed to find an available node
13:54:41.073 [-client-thread | adminclient-1] DEBUG [NetworkClient$DefaultMetadataU] - Give up sending metadata request since no node is available
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ORMapEventCreationTest.testORMapEventCreationRMapIriRMapEventTargetTypeRMapValueListOfRMapIri » IllegalState
[INFO]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:12 min
[INFO] Finished at: 2017-10-03T13:54:41-04:00
[INFO] Final Memory: 31M/310M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test (default-test) on project rmap-core: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/esm/workspaces/rmap/rmap/core/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
esm:~/workspaces/rmap/rmap/core (spring-data-snapshot)*$
esm:~/workspaces/rmap/rmap/core (spring-data-snapshot)*$ mvn dependency:tree -Pspring|grep kafka
[INFO] +- org.springframework.kafka:spring-kafka:jar:2.0.0.RC1:compile
[INFO] | \- org.apache.kafka:kafka-clients:jar:0.11.0.0:compile
[INFO] +- info.rmapproject:rmap-kafka-shared:jar:1.0.1-beta-SNAPSHOT:compile
[INFO] +- info.rmapproject:rmap-kafka-shared:jar:tests:1.0.1-beta-SNAPSHOT:compile
[INFO] +- org.springframework.kafka:spring-kafka-test:jar:2.0.0.RC1:test
[INFO] | +- org.apache.kafka:kafka-clients:jar:test:0.11.0.0:test
[INFO] | +- org.apache.kafka:kafka_2.11:jar:0.11.0.0:test
[INFO] | \- org.apache.kafka:kafka_2.11:jar:test:0.11.0.0:test
esm:~/workspaces/rmap/rmap/core (spring-data-snapshot)*$ cat target/surefire-reports/info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest.txt
-------------------------------------------------------------------------------
Test set: info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.117 s <<< FAILURE! - in info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest
testORMapEventCreationRMapIriRMapEventTargetTypeRMapValueListOfRMapIri(info.rmapproject.core.model.impl.openrdf.ORMapEventCreationTest) Time elapsed: 0.005 s <<< ERROR!
java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaEmbedded': Invocation of init method failed; nested exception is java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
esm:~/workspaces/rmap/rmap/core (spring-data-snapshot)*$
esm:~ $
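The failure above is ultimately a chain of `java.net.ConnectException: Connection refused` against `127.0.0.1:55383` — nothing is accepting connections on the port the AdminClient targets, so every `KafkaClient#poll` retrieves 0 responses until the `createTopics` call times out. A quick way to confirm that symptom independently of Kafka is a plain TCP probe (a minimal stdlib sketch; `can_connect` is a hypothetical helper, not part of any Kafka tooling):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections at host:port."""
    try:
        # create_connection raises ConnectionRefusedError (an OSError)
        # when no listener is bound to the port -- the same condition the
        # NIO SocketChannel reports as "Connection refused" in the log.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `can_connect("127.0.0.1", 55383)` while the test hangs would distinguish "broker never started" from "broker started on a different port".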
emetsger commented Oct 3, 2017

Broker properties:

13:52:39.854 [                          main] INFO  [                AbstractConfig] - KafkaConfig values:
    advertised.host.name = null
    advertised.listeners = null
    advertised.port = null
    alter.config.policy.class.name = null
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.id = 0
    broker.id.generation.enable = true
    broker.rack = null
    compression.type = producer
    connections.max.idle.ms = 600000
    controlled.shutdown.enable = false
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 100
    controller.socket.timeout.ms = 1000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    fetch.purgatory.purge.interval.requests = 1000
    group.initial.rebalance.delay.ms = 0
    group.max.session.timeout.ms = 300000
    group.min.session.timeout.ms = 6000
    host.name =
    inter.broker.listener.name = null
    inter.broker.protocol.version = 0.11.0-IV2
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
    listeners = PLAINTEXT://localhost:55388
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 2097152
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /Users/esm/workspaces/rmap/rmap/core/target/classes/kafka
    log.dirs = null
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.message.format.version = 0.11.0-IV2
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 1000
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    message.max.bytes = 1000012
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 1440
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 5
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    port = 55383
    principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
    producer.purgatory.purge.interval.requests = 1000
    queued.max.requests = 500
    quota.consumer.default = 9223372036854775807
    quota.producer.default = 9223372036854775807
    quota.window.num = 11
    quota.window.size.seconds = 1
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 10000
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 1000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.enabled.mechanisms = [GSSAPI]
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism.inter.broker.protocol = GSSAPI
    security.inter.broker.protocol = PLAINTEXT
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = null
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
    transaction.max.timeout.ms = 900000
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 2
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 3
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    zookeeper.connect = 127.0.0.1:55386
    zookeeper.connection.timeout.ms = 10000
    zookeeper.session.timeout.ms = 6000
    zookeeper.set.acl = false
    zookeeper.sync.time.ms = 2000
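One detail worth noticing in the broker config above: `listeners = PLAINTEXT://localhost:55388` while `port = 55383`, and the AdminClient in the next comment bootstraps against `127.0.0.1:55383` — consistent with the repeated `Connection refused` on 55383 in the build log. A minimal sketch of checking the two values against each other (hypothetical helper names, not part of spring-kafka or kafka-clients):

```python
from urllib.parse import urlparse

def listener_port(listener: str) -> int:
    """Extract the port from a Kafka listener entry, e.g. 'PLAINTEXT://localhost:55388'."""
    # urlparse treats the security protocol as the URL scheme, so the
    # host:port lands in netloc and .port parses it directly.
    return urlparse(listener).port

def bootstrap_matches(listener: str, bootstrap: str) -> bool:
    """True when a bootstrap address targets the port the broker listens on."""
    return bootstrap.rsplit(":", 1)[1] == str(listener_port(listener))
```

With the values from this gist, `bootstrap_matches("PLAINTEXT://localhost:55388", "127.0.0.1:55383")` flags the mismatch.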

emetsger commented Oct 3, 2017

AdminClient properties:

13:52:40.892 [                          main] INFO  [                AbstractConfig] - AdminClientConfig values:
    bootstrap.servers = [127.0.0.1:55383]
    client.id =
    connections.max.idle.ms = 300000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 120000
    retries = 5
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
