@see-quick
Created November 27, 2024 13:53
Formatting /tmp/default-log-dir-0 with metadata.version 3.8-IV0.
[2024-11-27 13:52:50,493] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-11-27 13:52:50,797] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-11-27 13:52:50,798] INFO RemoteLogManagerConfig values:
log.local.retention.bytes = -2
log.local.retention.ms = -2
remote.fetch.max.wait.ms = 500
remote.log.index.file.cache.total.size.bytes = 1073741824
remote.log.manager.copier.thread.pool.size = 10
remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
remote.log.manager.copy.quota.window.num = 11
remote.log.manager.copy.quota.window.size.seconds = 1
remote.log.manager.expiration.thread.pool.size = 10
remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
remote.log.manager.fetch.quota.window.num = 11
remote.log.manager.fetch.quota.window.size.seconds = 1
remote.log.manager.task.interval.ms = 30000
remote.log.manager.task.retry.backoff.max.ms = 30000
remote.log.manager.task.retry.backoff.ms = 500
remote.log.manager.task.retry.jitter = 0.2
remote.log.manager.thread.pool.size = 10
remote.log.metadata.custom.metadata.max.bytes = 128
remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.class.path = null
remote.log.metadata.manager.impl.prefix = rlmm.config.
remote.log.metadata.manager.listener.name = null
remote.log.reader.max.pending.tasks = 100
remote.log.reader.threads = 10
remote.log.storage.manager.class.name = null
remote.log.storage.manager.class.path = null
remote.log.storage.manager.impl.prefix = rsm.config.
remote.log.storage.system.enable = false
(org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
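
The dump above shows tiered storage switched off (remote.log.storage.system.enable = false) with every RemoteLogManagerConfig value at its default. As a hypothetical sketch, not part of this log, the same values can be read back at runtime with the Kafka Admin client; this assumes the client can reach the broker on the advertised PLAINTEXT listener localhost:33409 that appears in the KafkaConfig dump further down:

    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeRemoteLogConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409"); // assumed reachable address
            try (Admin admin = Admin.create(props)) {
                // Broker configs are keyed by the node id, "0" for this broker.
                ConfigResource broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0");
                Map<ConfigResource, Config> configs =
                        admin.describeConfigs(Set.of(broker0)).all().get();
                // Prints the ConfigEntry for the tiered-storage flag logged above.
                System.out.println(configs.get(broker0).get("remote.log.storage.system.enable"));
            }
        }
    }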
[2024-11-27 13:52:50,938] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:50,967] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:51,002] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-11-27 13:52:51,005] INFO [ControllerServer id=0] Starting controller (kafka.server.ControllerServer)
[2024-11-27 13:52:51,009] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:51,357] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-11-27 13:52:51,385] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2024-11-27 13:52:51,389] INFO authorizerStart completed for endpoint CONTROLLER. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-27 13:52:51,390] INFO [SharedServer id=0] Starting SharedServer (kafka.server.SharedServer)
[2024-11-27 13:52:51,391] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:51,451] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/default-log-dir-0] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-11-27 13:52:51,455] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/default-log-dir-0] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-11-27 13:52:51,455] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/default-log-dir-0] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-11-27 13:52:51,477] INFO Initialized snapshots with IDs SortedSet() from /tmp/default-log-dir-0/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2024-11-27 13:52:51,488] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-11-27 13:52:51,495] INFO [RaftManager id=0] Reading KRaft snapshot and log as part of the initialization (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-27 13:52:51,496] INFO [RaftManager id=0] Starting request manager with static voters: [broker-0:9094 (id: 0 rack: null), broker-1:9094 (id: 1 rack: null), broker-2:9094 (id: 2 rack: null)] (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-27 13:52:51,569] INFO [RaftManager id=0] Completed transition to Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1677) from null (org.apache.kafka.raft.QuorumState)
[2024-11-27 13:52:51,571] INFO [kafka-0-raft-outbound-request-thread]: Starting (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-11-27 13:52:51,571] INFO [kafka-0-raft-io-thread]: Starting (org.apache.kafka.raft.KafkaRaftClientDriver)
[2024-11-27 13:52:51,581] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:51,581] INFO [ControllerServer id=0] Waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-11-27 13:52:51,581] INFO [ControllerServer id=0] Finished waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-11-27 13:52:51,582] INFO [RaftManager id=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@458379141 (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-27 13:52:51,598] INFO [RaftManager id=0] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1249099677 (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-27 13:52:51,600] INFO [controller-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,600] INFO [controller-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,600] INFO [controller-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,601] INFO [controller-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,614] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,621] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:51,621] INFO [ControllerServer id=0] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-11-27 13:52:51,621] INFO [ControllerServer id=0] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-11-27 13:52:51,622] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-11-27 13:52:51,624] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2024-11-27 13:52:51,633] INFO [controller-0-to-controller-registration-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:51,633] INFO [ControllerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-11-27 13:52:51,633] INFO [ControllerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-11-27 13:52:51,633] INFO [ControllerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-11-27 13:52:51,633] INFO [ControllerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-11-27 13:52:51,634] INFO [BrokerServer id=0] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2024-11-27 13:52:51,635] INFO [ControllerRegistrationManager id=0 incarnation=FhgXUvWyRe--2wa7ENoeUQ] initialized channel manager. (kafka.server.ControllerRegistrationManager)
[2024-11-27 13:52:51,635] INFO [ControllerRegistrationManager id=0 incarnation=FhgXUvWyRe--2wa7ENoeUQ] maybeSendControllerRegistration: cannot register yet because the metadata.version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager)
[2024-11-27 13:52:51,635] INFO [BrokerServer id=0] Starting broker (kafka.server.BrokerServer)
[2024-11-27 13:52:51,639] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:51,644] INFO [broker-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,645] INFO [broker-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,645] INFO [broker-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,646] INFO [broker-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-27 13:52:51,661] INFO [BrokerServer id=0] Waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-11-27 13:52:51,662] INFO [BrokerServer id=0] Finished waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-11-27 13:52:51,665] INFO [broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:51,667] INFO [client-metrics-reaper]: Starting (org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2024-11-27 13:52:51,691] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-11-27 13:52:51,693] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2024-11-27 13:52:51,693] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-11-27 13:52:51,694] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(BROKER1) (kafka.network.SocketServer)
[2024-11-27 13:52:51,696] INFO [broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:51,699] INFO [broker-0-to-controller-directory-assignments-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:51,706] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,706] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,706] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,707] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,707] INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,713] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,714] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,721] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:51,727] INFO Unable to read the broker epoch in /tmp/default-log-dir-0. (kafka.log.LogManager)
[2024-11-27 13:52:51,728] INFO [broker-0-to-controller-heartbeat-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:51,729] INFO [BrokerLifecycleManager id=0] Incarnation 8R4Gw9GoQU6cIxHTS_cVqQ of broker 0 in cluster 1e0c5d35-2c47-4a52-98a9-820aacf34dc3 is now STARTING. (kafka.server.BrokerLifecycleManager)
[2024-11-27 13:52:51,741] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-27 13:52:51,752] INFO [BrokerServer id=0] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-11-27 13:52:51,752] INFO [BrokerServer id=0] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-11-27 13:52:51,752] INFO [BrokerServer id=0] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-11-27 13:52:51,752] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... the same MetadataLoader "still catching up" message repeats at ~100 ms intervals from 13:52:51,853 through 13:52:52,863 ...]
[2024-11-27 13:52:52,938] INFO [RaftManager id=0] Completed transition to Unattached(epoch=1, voters=[0, 1, 2], electionTimeoutMs=241) from Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1677) (org.apache.kafka.raft.QuorumState)
[2024-11-27 13:52:52,940] INFO [RaftManager id=0] Completed transition to Voted(epoch=1, votedKey=ReplicaKey(id=1, directoryId=Optional.empty), voters=[0, 1, 2], electionTimeoutMs=1079, highWatermark=Optional.empty) from Unattached(epoch=1, voters=[0, 1, 2], electionTimeoutMs=241) (org.apache.kafka.raft.QuorumState)
[2024-11-27 13:52:52,940] INFO [RaftManager id=0] Vote request VoteRequestData(clusterId='1e0c5d35-2c47-4a52-98a9-820aacf34dc3', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])]) with epoch 1 is granted (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-27 13:52:52,964] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:52,970] INFO [RaftManager id=0] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1, leader=broker-1:9094 (id: 1 rack: null) voters=[0, 1, 2], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from Voted(epoch=1, votedKey=ReplicaKey(id=1, directoryId=Optional.empty), voters=[0, 1, 2], electionTimeoutMs=1079, highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState)
[2024-11-27 13:52:52,979] INFO [broker-0-to-controller-forwarding-channel-manager]: Recorded new KRaft controller, from now on will use node broker-1:9094 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:53,002] INFO [broker-0-to-controller-alter-partition-channel-manager]: Recorded new KRaft controller, from now on will use node broker-1:9094 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:53,006] INFO [broker-0-to-controller-directory-assignments-channel-manager]: Recorded new KRaft controller, from now on will use node broker-1:9094 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:53,042] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new KRaft controller, from now on will use node broker-1:9094 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:53,043] INFO [controller-0-to-controller-registration-channel-manager]: Recorded new KRaft controller, from now on will use node broker-1:9094 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-27 13:52:53,051] INFO [RaftManager id=0] High watermark set to Optional[LogOffsetMetadata(offset=1, metadata=Optional.empty)] for the first time for epoch 1 (org.apache.kafka.raft.FollowerState)
[2024-11-27 13:52:53,054] INFO [MetadataLoader id=0] maybePublishMetadata(LOG_DELTA): The loader is still catching up because we have not loaded a controller record as of offset 0 and high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,065] INFO [MetadataLoader id=0] initializeNewPublishers: The loader finished catching up to the current high water mark of 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,067] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,068] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing KRaftMetadataCachePublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,068] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing FeaturesPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,068] INFO [ControllerServer id=0] Loaded new metadata Features(metadataVersion=3.0-IV1, finalizedFeatures={metadata.version=1}, finalizedFeaturesEpoch=0). (org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2024-11-27 13:52:53,068] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerRegistrationsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,068] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerRegistrationManager with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,068] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,068] INFO [ControllerRegistrationManager id=0 incarnation=FhgXUvWyRe--2wa7ENoeUQ] maybeSendControllerRegistration: cannot register yet because the metadata.version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager)
[2024-11-27 13:52:53,069] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,069] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ScramPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,070] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DelegationTokenPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,070] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,070] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing AclPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,071] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing MetadataVersionPublisher(id=0) with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,071] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,071] INFO [BrokerMetadataPublisher id=0] Publishing initial metadata at offset OffsetAndEpoch(offset=0, epoch=1) with metadata.version 3.0-IV1. (kafka.server.metadata.BrokerMetadataPublisher)
[2024-11-27 13:52:53,072] INFO Loading logs from log dirs ArrayBuffer(/tmp/default-log-dir-0) (kafka.log.LogManager)
[2024-11-27 13:52:53,076] INFO No logs found to be loaded in /tmp/default-log-dir-0 (kafka.log.LogManager)
[2024-11-27 13:52:53,081] INFO Loaded 0 logs in 8ms (kafka.log.LogManager)
[2024-11-27 13:52:53,081] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-11-27 13:52:53,082] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-11-27 13:52:53,098] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 6 (kafka.server.BrokerLifecycleManager)
[2024-11-27 13:52:53,178] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
[2024-11-27 13:52:53,183] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-11-27 13:52:53,183] INFO [AddPartitionsToTxnSenderThread-0]: Starting (kafka.server.AddPartitionsToTxnManager)
[2024-11-27 13:52:53,183] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-11-27 13:52:53,185] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-11-27 13:52:53,205] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-11-27 13:52:53,210] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-11-27 13:52:53,211] INFO [TxnMarkerSenderThread-0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-11-27 13:52:53,224] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing BrokerRegistrationTracker(id=0) with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-27 13:52:53,225] INFO [ControllerServer id=0] Loaded new metadata Features(metadataVersion=3.8-IV0, finalizedFeatures={metadata.version=20}, finalizedFeaturesEpoch=4). (org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2024-11-27 13:52:53,226] INFO [ControllerRegistrationManager id=0 incarnation=FhgXUvWyRe--2wa7ENoeUQ] sendControllerRegistration: attempting to send ControllerRegistrationRequestData(controllerId=0, incarnationId=FhgXUvWyRe--2wa7ENoeUQ, zkMigrationReady=false, listeners=[Listener(name='CONTROLLER', host='0.0.0.0', port=9094, securityProtocol=0)], features=[Feature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=20)]) (kafka.server.ControllerRegistrationManager)
[2024-11-27 13:52:53,277] INFO [ControllerRegistrationManager id=0 incarnation=FhgXUvWyRe--2wa7ENoeUQ] RegistrationResponseHandler: controller acknowledged ControllerRegistrationRequest. (kafka.server.ControllerRegistrationManager)
[2024-11-27 13:52:53,277] INFO [BrokerLifecycleManager id=0] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-11-27 13:52:53,278] INFO [BrokerServer id=0] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-11-27 13:52:53,278] INFO [BrokerServer id=0] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-11-27 13:52:53,278] INFO [BrokerServer id=0] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-11-27 13:52:53,279] INFO KafkaConfig values:
advertised.listeners = PLAINTEXT://localhost:33409,BROKER1://10.91.83.4:9091
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = false
auto.include.jmx.reporter = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.gzip.level = -1
compression.lz4.level = 9
compression.type = producer
compression.zstd.level = 3
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = CONTROLLER
controller.quorum.append.linger.ms = 25
controller.quorum.bootstrap.servers = []
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = [0@broker-0:9094, 1@broker-1:9094, 2@broker-2:9094]
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
early.start.listeners = null
eligible.leader.replicas.enable = false
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
group.consumer.heartbeat.interval.ms = 5000
group.consumer.max.heartbeat.interval.ms = 15000
group.consumer.max.session.timeout.ms = 60000
group.consumer.max.size = 2147483647
group.consumer.migration.policy = disabled
group.consumer.min.heartbeat.interval.ms = 5000
group.consumer.min.session.timeout.ms = 45000
group.consumer.session.timeout.ms = 45000
group.coordinator.append.linger.ms = 10
group.coordinator.new.enable = false
group.coordinator.rebalance.protocols = [classic]
group.coordinator.threads = 1
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = BROKER1
inter.broker.protocol.version = 3.8-IV0
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,BROKER1:PLAINTEXT
listeners = PLAINTEXT://0.0.0.0:9092,BROKER1://0.0.0.0:9091,CONTROLLER://0.0.0.0:9094
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dir.failure.timeout.ms = 30000
log.dirs = /tmp/default-log-dir-0
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.initial.task.delay.ms = 30000
log.local.retention.bytes = -2
log.local.retention.ms = -2
log.message.downconversion.enable = true
log.message.format.version = 3.0-IV1
log.message.timestamp.after.max.ms = 9223372036854775807
log.message.timestamp.before.max.ms = 9223372036854775807
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
max.request.partition.size.limit = 2000
message.max.bytes = 1048588
metadata.log.dir = null
metadata.log.max.record.bytes.between.snapshots = 20971520
metadata.log.max.snapshot.interval.ms = 3600000
metadata.log.segment.bytes = 1073741824
metadata.log.segment.min.bytes = 8388608
metadata.log.segment.ms = 604800000
metadata.max.idle.interval.ms = 500
metadata.max.retention.bytes = 104857600
metadata.max.retention.ms = 604800000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = 0
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
process.roles = [broker, controller]
producer.id.expiration.check.interval.ms = 600000
producer.id.expiration.ms = 86400000
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.window.num = 11
quota.window.size.seconds = 1
remote.fetch.max.wait.ms = 500
remote.log.index.file.cache.total.size.bytes = 1073741824
remote.log.manager.copier.thread.pool.size = 10
remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
remote.log.manager.copy.quota.window.num = 11
remote.log.manager.copy.quota.window.size.seconds = 1
remote.log.manager.expiration.thread.pool.size = 10
remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
remote.log.manager.fetch.quota.window.num = 11
remote.log.manager.fetch.quota.window.size.seconds = 1
remote.log.manager.task.interval.ms = 30000
remote.log.manager.task.retry.backoff.max.ms = 30000
remote.log.manager.task.retry.backoff.ms = 500
remote.log.manager.task.retry.jitter = 0.2
remote.log.manager.thread.pool.size = 10
remote.log.metadata.custom.metadata.max.bytes = 128
remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.class.path = null
remote.log.metadata.manager.impl.prefix = rlmm.config.
remote.log.metadata.manager.listener.name = null
remote.log.reader.max.pending.tasks = 100
remote.log.reader.threads = 10
remote.log.storage.manager.class.name = null
remote.log.storage.manager.class.path = null
remote.log.storage.manager.impl.prefix = rsm.config.
remote.log.storage.system.enable = false
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288
security.inter.broker.protocol = PLAINTEXT
security.providers = null
server.max.startup.time.ms = 9223372036854775807
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.listen.backlog.size = 50
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.allow.dn.changes = false
ssl.allow.san.changes = false
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
telemetry.max.bytes = 1048576
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.partition.verification.enable = true
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
unstable.api.versions.enable = false
unstable.feature.versions.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = null
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.metadata.migration.enable = false
zookeeper.metadata.migration.min.batch.size = 200
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
(kafka.server.KafkaConfig)
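
Two values in this dump matter for connecting a client: advertised.listeners exposes PLAINTEXT://localhost:33409, and auto.create.topics.enable = false, so topics must be created up front. A minimal, hypothetical producer against this broker (plain PLAINTEXT, no TLS/SASL, matching the listener map above) could look like:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProduceOne {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409"); // advertised PLAINTEXT listener
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // my-topic is created later in this log (13:52:56); it must
                // already exist because auto-creation is disabled.
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            } // close() flushes the buffered record
        }
    }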
[2024-11-27 13:52:53,280] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:53,285] INFO [BrokerServer id=0] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-11-27 13:52:53,320] INFO [ControllerRegistrationManager id=0 incarnation=FhgXUvWyRe--2wa7ENoeUQ] Our registration has been persisted to the metadata log. (kafka.server.ControllerRegistrationManager)
[2024-11-27 13:52:53,323] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-11-27 13:52:53,323] INFO [BrokerServer id=0] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-11-27 13:52:53,324] INFO authorizerStart completed for endpoint PLAINTEXT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-27 13:52:53,324] INFO authorizerStart completed for endpoint BROKER1. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-27 13:52:53,324] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-11-27 13:52:53,324] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-11-27 13:52:53,326] INFO Awaiting socket connections on 0.0.0.0:9091. (kafka.network.DataPlaneAcceptor)
[2024-11-27 13:52:53,327] INFO [BrokerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-11-27 13:52:53,327] INFO [BrokerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-11-27 13:52:53,327] INFO [BrokerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-11-27 13:52:53,327] INFO [BrokerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-11-27 13:52:53,327] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-11-27 13:52:53,327] INFO Kafka version: 3.8.1 (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-27 13:52:53,327] INFO Kafka commitId: 70d6ff42debf7e17 (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-27 13:52:53,327] INFO Kafka startTimeMs: 1732715573327 (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-27 13:52:53,328] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)
[2024-11-27 13:52:56,553] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(my-topic-0) (kafka.server.ReplicaFetcherManager)
[2024-11-27 13:52:56,562] INFO [LogLoader partition=my-topic-0, dir=/tmp/default-log-dir-0] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-11-27 13:52:56,564] INFO Created log for partition my-topic-0 in /tmp/default-log-dir-0/my-topic-0 with properties {} (kafka.log.LogManager)
[2024-11-27 13:52:56,564] INFO [Partition my-topic-0 broker=0] No checkpointed highwatermark is found for partition my-topic-0 (kafka.cluster.Partition)
[2024-11-27 13:52:56,565] INFO [Partition my-topic-0 broker=0] Log loaded for partition my-topic-0 with initial high watermark 0 (kafka.cluster.Partition)
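
my-topic appears here with a single partition and an initial high watermark of 0. Because auto.create.topics.enable = false, something had to create it explicitly; a hypothetical Admin-client sketch, with the sizing taken from num.partitions = 1 and default.replication.factor = 1 above:

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateMyTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409"); // assumed reachable address
            try (Admin admin = Admin.create(props)) {
                // 1 partition, replication factor 1, as seen for my-topic-0 above.
                admin.createTopics(Set.of(new NewTopic("my-topic", 1, (short) 1)))
                     .all().get(); // block until the controller confirms creation
            }
        }
    }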
[2024-11-27 13:52:56,695] INFO RemoteLogManagerConfig values: [... identical to the first RemoteLogManagerConfig dump above ...] (org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:56,768] INFO [DynamicConfigPublisher controller id=0] Updating node 0 with new configuration : leader.replication.throttled.rate -> 1 (kafka.server.metadata.DynamicConfigPublisher)
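
The DynamicConfigPublisher line records a dynamic per-broker override, leader.replication.throttled.rate = 1, arriving through the metadata log. Such a change is normally submitted via kafka-configs.sh or the Admin client; a hypothetical sketch of the latter:

    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class ThrottleLeaderReplication {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409"); // assumed reachable address
            try (Admin admin = Admin.create(props)) {
                ConfigResource broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0");
                // SET the throttle exactly as the publisher reports it: 1 byte/s.
                AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("leader.replication.throttled.rate", "1"),
                        AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(broker0, Set.of(op))).all().get();
            }
        }
    }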
[2024-11-27 13:52:56,769] INFO KafkaConfig values:
[... values identical to the 13:52:53,279 KafkaConfig dump above ...]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288
security.inter.broker.protocol = PLAINTEXT
security.providers = null
server.max.startup.time.ms = 9223372036854775807
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.listen.backlog.size = 50
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.allow.dn.changes = false
ssl.allow.san.changes = false
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
telemetry.max.bytes = 1048576
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.partition.verification.enable = true
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
unstable.api.versions.enable = false
unstable.feature.versions.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = null
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.metadata.migration.enable = false
zookeeper.metadata.migration.min.batch.size = 200
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
(kafka.server.KafkaConfig)
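
The block above is the broker's effective configuration as logged at startup. The same values can be read back at runtime through the Kafka AdminClient; the following is a minimal sketch, assuming the advertised PLAINTEXT listener localhost:33409 from the dump above is reachable and that the broker has node.id = 0 as logged (the class name DescribeBrokerConfig is illustrative only).

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;
    import java.util.List;
    import java.util.Properties;

    public class DescribeBrokerConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Assumption: the advertised PLAINTEXT listener from the dump above.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409");
            try (Admin admin = Admin.create(props)) {
                // BROKER resource "0" matches node.id = 0 in the logged config.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
                Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
                // Prints each entry as "name = value", analogous to the dump above.
                config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
            }
        }
    }
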
[2024-11-27 13:52:56,770] INFO RemoteLogManagerConfig values:
log.local.retention.bytes = -2
log.local.retention.ms = -2
remote.fetch.max.wait.ms = 500
remote.log.index.file.cache.total.size.bytes = 1073741824
remote.log.manager.copier.thread.pool.size = 10
remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
remote.log.manager.copy.quota.window.num = 11
remote.log.manager.copy.quota.window.size.seconds = 1
remote.log.manager.expiration.thread.pool.size = 10
remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
remote.log.manager.fetch.quota.window.num = 11
remote.log.manager.fetch.quota.window.size.seconds = 1
remote.log.manager.task.interval.ms = 30000
remote.log.manager.task.retry.backoff.max.ms = 30000
remote.log.manager.task.retry.backoff.ms = 500
remote.log.manager.task.retry.jitter = 0.2
remote.log.manager.thread.pool.size = 10
remote.log.metadata.custom.metadata.max.bytes = 128
remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.class.path = null
remote.log.metadata.manager.impl.prefix = rlmm.config.
remote.log.metadata.manager.listener.name = null
remote.log.reader.max.pending.tasks = 100
remote.log.reader.threads = 10
remote.log.storage.manager.class.name = null
remote.log.storage.manager.class.path = null
remote.log.storage.manager.impl.prefix = rsm.config.
remote.log.storage.system.enable = false
(org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:56,772] INFO [DynamicConfigPublisher broker id=0] Updating node 0 with new configuration : leader.replication.throttled.rate -> 1 (kafka.server.metadata.DynamicConfigPublisher)
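
A dynamic broker update like the one above (leader.replication.throttled.rate -> 1) is the kind of change typically made with Admin.incrementalAlterConfigs against the BROKER config resource; a minimal sketch under the same assumptions as the previous example (bootstrap address and class name are illustrative):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    public class ThrottleBrokerReplication {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409");
            try (Admin admin = Admin.create(props)) {
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
                // SET the leader-side replication throttle to 1 byte/s,
                // matching the DynamicConfigPublisher log line above.
                AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("leader.replication.throttled.rate", "1"),
                        AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(broker, List.of(op))).all().get();
            }
        }
    }
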
[2024-11-27 13:52:56,773] INFO KafkaConfig values:
advertised.listeners = PLAINTEXT://localhost:33409,BROKER1://10.91.83.4:9091
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = false
auto.include.jmx.reporter = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.gzip.level = -1
compression.lz4.level = 9
compression.type = producer
compression.zstd.level = 3
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = CONTROLLER
controller.quorum.append.linger.ms = 25
controller.quorum.bootstrap.servers = []
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = [0@broker-0:9094, 1@broker-1:9094, 2@broker-2:9094]
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
early.start.listeners = null
eligible.leader.replicas.enable = false
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
group.consumer.heartbeat.interval.ms = 5000
group.consumer.max.heartbeat.interval.ms = 15000
group.consumer.max.session.timeout.ms = 60000
group.consumer.max.size = 2147483647
group.consumer.migration.policy = disabled
group.consumer.min.heartbeat.interval.ms = 5000
group.consumer.min.session.timeout.ms = 45000
group.consumer.session.timeout.ms = 45000
group.coordinator.append.linger.ms = 10
group.coordinator.new.enable = false
group.coordinator.rebalance.protocols = [classic]
group.coordinator.threads = 1
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = BROKER1
inter.broker.protocol.version = 3.8-IV0
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,BROKER1:PLAINTEXT
listeners = PLAINTEXT://0.0.0.0:9092,BROKER1://0.0.0.0:9091,CONTROLLER://0.0.0.0:9094
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dir.failure.timeout.ms = 30000
log.dirs = /tmp/default-log-dir-0
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.initial.task.delay.ms = 30000
log.local.retention.bytes = -2
log.local.retention.ms = -2
log.message.downconversion.enable = true
log.message.format.version = 3.0-IV1
log.message.timestamp.after.max.ms = 9223372036854775807
log.message.timestamp.before.max.ms = 9223372036854775807
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
max.request.partition.size.limit = 2000
message.max.bytes = 1048588
metadata.log.dir = null
metadata.log.max.record.bytes.between.snapshots = 20971520
metadata.log.max.snapshot.interval.ms = 3600000
metadata.log.segment.bytes = 1073741824
metadata.log.segment.min.bytes = 8388608
metadata.log.segment.ms = 604800000
metadata.max.idle.interval.ms = 500
metadata.max.retention.bytes = 104857600
metadata.max.retention.ms = 604800000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = 0
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
process.roles = [broker, controller]
producer.id.expiration.check.interval.ms = 600000
producer.id.expiration.ms = 86400000
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.window.num = 11
quota.window.size.seconds = 1
remote.fetch.max.wait.ms = 500
remote.log.index.file.cache.total.size.bytes = 1073741824
remote.log.manager.copier.thread.pool.size = 10
remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
remote.log.manager.copy.quota.window.num = 11
remote.log.manager.copy.quota.window.size.seconds = 1
remote.log.manager.expiration.thread.pool.size = 10
remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
remote.log.manager.fetch.quota.window.num = 11
remote.log.manager.fetch.quota.window.size.seconds = 1
remote.log.manager.task.interval.ms = 30000
remote.log.manager.task.retry.backoff.max.ms = 30000
remote.log.manager.task.retry.backoff.ms = 500
remote.log.manager.task.retry.jitter = 0.2
remote.log.manager.thread.pool.size = 10
remote.log.metadata.custom.metadata.max.bytes = 128
remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.class.path = null
remote.log.metadata.manager.impl.prefix = rlmm.config.
remote.log.metadata.manager.listener.name = null
remote.log.reader.max.pending.tasks = 100
remote.log.reader.threads = 10
remote.log.storage.manager.class.name = null
remote.log.storage.manager.class.path = null
remote.log.storage.manager.impl.prefix = rsm.config.
remote.log.storage.system.enable = false
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288
security.inter.broker.protocol = PLAINTEXT
security.providers = null
server.max.startup.time.ms = 9223372036854775807
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.listen.backlog.size = 50
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.allow.dn.changes = false
ssl.allow.san.changes = false
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
telemetry.max.bytes = 1048576
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.partition.verification.enable = true
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
unstable.api.versions.enable = false
unstable.feature.versions.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = null
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.metadata.migration.enable = false
zookeeper.metadata.migration.min.batch.size = 200
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
(kafka.server.KafkaConfig)
[2024-11-27 13:52:56,774] INFO RemoteLogManagerConfig values:
	[... values identical to the 13:52:56,770 RemoteLogManagerConfig dump above; elided ...]
(org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:56,805] INFO [DynamicConfigPublisher broker id=0] Updating topic my-topic with new configuration : follower.replication.throttled.replicas -> 0:1 (kafka.server.metadata.DynamicConfigPublisher)
[2024-11-27 13:52:56,868] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(my-topic-0) (kafka.server.ReplicaFetcherManager)
[2024-11-27 13:52:56,883] INFO RemoteLogManagerConfig values:
	[... values identical to the 13:52:56,770 RemoteLogManagerConfig dump above; elided ...]
(org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:56,915] INFO [DynamicConfigPublisher broker id=0] Updating topic my-topic with new configuration : compression.type -> snappy (kafka.server.metadata.DynamicConfigPublisher)
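
The topic-level updates logged above (follower.replication.throttled.replicas -> 0:1 and compression.type -> snappy) follow the same pattern with a TOPIC config resource; a minimal sketch, again assuming the bootstrap address from the advertised listeners:

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    public class AlterTopicConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409");
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                // Throttle follower replication for replica 1 of partition 0 ("0:1"),
                // and switch the topic's compression codec, as in the log lines above.
                AlterConfigOp throttle = new AlterConfigOp(
                        new ConfigEntry("follower.replication.throttled.replicas", "0:1"),
                        AlterConfigOp.OpType.SET);
                AlterConfigOp codec = new AlterConfigOp(
                        new ConfigEntry("compression.type", "snappy"),
                        AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(topic, List.of(throttle, codec))).all().get();
            }
        }
    }
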
[2024-11-27 13:52:57,051] INFO [DynamicConfigPublisher controller id=0] Updating node 0 with new configuration : (kafka.server.metadata.DynamicConfigPublisher)
[2024-11-27 13:52:57,052] INFO KafkaConfig values:
	[... values identical to the 13:52:56,773 KafkaConfig dump above; elided ...]
(kafka.server.KafkaConfig)
[2024-11-27 13:52:57,053] INFO RemoteLogManagerConfig values:
	[... values identical to the 13:52:56,770 RemoteLogManagerConfig dump above; elided ...]
(org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:57,057] INFO [DynamicConfigPublisher broker id=0] Updating node 0 with new configuration : (kafka.server.metadata.DynamicConfigPublisher)
[2024-11-27 13:52:57,058] INFO KafkaConfig values:
	[... values identical to the 13:52:56,773 KafkaConfig dump above; elided ...]
(kafka.server.KafkaConfig)
[2024-11-27 13:52:57,059] INFO RemoteLogManagerConfig values:
	[... values identical to the 13:52:56,770 RemoteLogManagerConfig dump above; elided ...]
(org.apache.kafka.server.log.remote.storage.RemoteLogManagerConfig)
[2024-11-27 13:52:58,001] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(my-topic-0) (kafka.server.ReplicaFetcherManager)
[2024-11-27 13:52:58,002] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(my-topic-0) (kafka.server.ReplicaAlterLogDirsManager)
[2024-11-27 13:52:58,009] INFO Log for partition my-topic-0 is renamed to /tmp/default-log-dir-0/my-topic-0.7a2195b2a72942438e5824d1d73c5760-delete and is scheduled for deletion (kafka.log.LogManager)
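
The rename above is Kafka's asynchronous topic deletion: the partition's log directory is renamed with a UUID plus a "-delete" suffix, and a background task removes the files later (after file.delete.delay.ms, which at the broker level corresponds to the log.segment.delete.delay.ms = 60000 shown in the dumps above). A deletion like this would typically have been triggered via Admin.deleteTopics, which works here because the logged config has delete.topic.enable = true; a minimal sketch under the same assumptions:

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import java.util.List;
    import java.util.Properties;

    public class DeleteTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:33409");
            try (Admin admin = Admin.create(props)) {
                // Requests deletion of my-topic; the broker renames the log dir with a
                // "-delete" suffix (as in the log line above) and removes it asynchronously.
                admin.deleteTopics(List.of("my-topic")).all().get();
            }
        }
    }
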
[2024-11-27 13:52:58,054] INFO [DynamicConfigPublisher broker id=0] Updating topic my-topic with new configuration : compression.type -> gzip (kafka.server.metadata.DynamicConfigPublisher)