metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, dirs={/tmp/kafka-7862265964045152878: EMPTY})
[2023-12-06 05:14:14,945] INFO [LocalTieredStorage Id=0] Creating directory: [/tmp/kafka-remote-tier-deletesegmentsbyretentionsizetest18303542483502926029/kafka-tiered-storage] (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:289)
[2023-12-06 05:14:14,946] INFO [LocalTieredStorage Id=0] Created local tiered storage manager [0]:[kafka-tiered-storage] (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:301)
[2023-12-06 05:14:14,946] INFO Started configuring topic-based RLMM with configs: {remote.log.metadata.topic.replication.factor=1, remote.log.metadata.topic.num.partitions=5, remote.log.metadata.common.client.bootstrap.servers=localhost:36617, broker.id=0, remote.log.metadata.initialization.retry.interval.ms=300, remote.log.metadata.common.client.security.protocol=PLAINTEXT, cluster.id=Si-GEbyrSJeNRePZuz8ysQ, log.dir=/tmp/kafka-9703822247831753795} (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:358)
[2023-12-06 05:14:14,948] INFO Successfully configured topic-based RLMM with config: TopicBasedRemoteLogMetadataManagerConfig{clientIdPrefix='__remote_log_metadata_client_0', metadataTopicPartitionsCount=5, consumeWaitMs=120000, metadataTopicRetentionMs=-1, metadataTopicReplicationFactor=1, initializationRetryMaxTimeoutMs=120000, initializationRetryIntervalMs=300, commonProps={security.protocol=PLAINTEXT, bootstrap.servers=localhost:36617}, consumerProps={security.protocol=PLAINTEXT, key.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, value.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, enable.auto.commit=false, bootstrap.servers=localhost:36617, exclude.internal.topics=false, auto.offset.reset=earliest, client.id=__remote_log_metadata_client_0_consumer}, producerProps={security.protocol=PLAINTEXT, enable.idempotence=true, value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer, acks=all, bootstrap.servers=localhost:36617, key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer, client.id=__remote_log_metadata_client_0_producer}} (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:364)
[2023-12-06 05:14:14,956] INFO Initializing topic-based RLMM resources (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:377)
[2023-12-06 05:14:15,067] INFO Topic __remote_log_metadata does not exist. Error: This server does not host this topic-partition. (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:466)
[2023-12-06 05:14:15,195] INFO Topic __remote_log_metadata created. TopicId: H5J--mdRSzWrczxXwGU8Kw, numPartitions: 5, replicationFactor: 1, config: [remote.storage.enable=false, cleanup.policy=delete, retention.ms=-1] (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:516)
[2023-12-06 05:14:15,263] INFO RLMM Consumer task thread is started (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:74)
[2023-12-06 05:14:15,263] INFO Initialized topic-based RLMM resources successfully (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:436)
[2023-12-06 05:14:15,265] INFO Starting consumer task thread. (org.apache.kafka.server.log.remote.metadata.storage.ConsumerTask:123)
[2023-12-06 05:14:15,697] INFO Received leadership notifications with leader partitions [i7X8wcy7QIex4IMwbae74Q:topicA-0] and follower partitions [] (org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager:278)
[2023-12-06 05:14:15,697] INFO Remote log metadata snapshot file: [/tmp/kafka-9703822247831753795/topicA-0/remote_log_snapshot], newFileCreated: [true] (org.apache.kafka.server.log.remote.metadata.storage.RemoteLogMetadataSnapshotFile:78)
[2023-12-06 05:14:15,697] INFO Updating assignments for partitions added: [i7X8wcy7QIex4IMwbae74Q:topicA-0] and removed: [] (org.apache.kafka.server.log.remote.metadata.storage.ConsumerTask:295)
[2023-12-06 05:14:15,698] INFO Created a new task: class kafka.log.remote.RemoteLogManager$RLMTask[i7X8wcy7QIex4IMwbae74Q:topicA-0] and getting scheduled (kafka.log.remote.RemoteLogManager:1502)
[2023-12-06 05:14:15,698] INFO Scheduling runnable class kafka.log.remote.RemoteLogManager$RLMTask[i7X8wcy7QIex4IMwbae74Q:topicA-0] with initial delay: 0, fixed delay: 500 (kafka.log.remote.RemoteLogManager$RLMScheduledThreadPool:1616)
[2023-12-06 05:14:15,698] INFO Unassigned user-topic-partitions: 0 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerTask:282)
[2023-12-06 05:14:15,824] INFO Initialized for all the 1 assigned user-partitions mapped to the 1 meta-partitions in 126 ms (org.apache.kafka.server.log.remote.metadata.storage.ConsumerTask:205)
[2023-12-06 05:14:16,226] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Found the logStartOffset: 0 for partition: i7X8wcy7QIex4IMwbae74Q:topicA-0 after becoming leader, leaderEpoch: 0 (kafka.log.remote.RemoteLogManager$RLMTask:618)
[2023-12-06 05:14:16,226] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Found the highest copiedRemoteOffset: Optional[(offset=-1, leaderEpoch=-1)] for partition: i7X8wcy7QIex4IMwbae74Q:topicA-0 after becoming leader, leaderEpoch: 0 (kafka.log.remote.RemoteLogManager$RLMTask:630)
[2023-12-06 05:14:16,226] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Copying 00000000000000000000.log to remote storage. (kafka.log.remote.RemoteLogManager$RLMTask:721)
[2023-12-06 05:14:16,236] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 0 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:16,237] INFO Creating directory: /tmp/kafka-remote-tier-deletesegmentsbyretentionsizetest18303542483502926029/kafka-tiered-storage/topicA-0-i7X8wcy7QIex4IMwbae74Q (org.apache.kafka.server.log.remote.storage.RemoteTopicPartitionDirectory:123)
[2023-12-06 05:14:16,237] INFO [LocalTieredStorage Id=0] Offloading log segment for i7X8wcy7QIex4IMwbae74Q:topicA-0 from segment=/tmp/kafka-9703822247831753795/topicA-0/00000000000000000000.log (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:312)
[2023-12-06 05:14:16,240] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 1 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:16,291] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Copied 00000000000000000000.log to remote storage with segment-id: RemoteLogSegmentId{topicIdPartition=i7X8wcy7QIex4IMwbae74Q:topicA-0, id=CLxyIpDNQqCbP51RTTYn2A} (kafka.log.remote.RemoteLogManager$RLMTask:780)
[2023-12-06 05:14:16,291] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Copying 00000000000000000001.log to remote storage. (kafka.log.remote.RemoteLogManager$RLMTask:721)
[2023-12-06 05:14:16,295] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 2 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:16,346] INFO [LocalTieredStorage Id=0] Offloading log segment for i7X8wcy7QIex4IMwbae74Q:topicA-0 from segment=/tmp/kafka-9703822247831753795/topicA-0/00000000000000000001.log (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:312)
[2023-12-06 05:14:16,365] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 3 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:16,366] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Copied 00000000000000000001.log to remote storage with segment-id: RemoteLogSegmentId{topicIdPartition=i7X8wcy7QIex4IMwbae74Q:topicA-0, id=zIoCOM7wR0GKtNzywi95XA} (kafka.log.remote.RemoteLogManager$RLMTask:780)
[2023-12-06 05:14:16,366] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Copying 00000000000000000002.log to remote storage. (kafka.log.remote.RemoteLogManager$RLMTask:721)
[2023-12-06 05:14:16,374] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 4 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:16,375] INFO [LocalTieredStorage Id=0] Offloading log segment for i7X8wcy7QIex4IMwbae74Q:topicA-0 from segment=/tmp/kafka-9703822247831753795/topicA-0/00000000000000000002.log (org.apache.kafka.server.log.remote.storage.LocalTieredStorage:312)
[2023-12-06 05:14:16,383] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 5 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:16,383] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] Copied 00000000000000000002.log to remote storage with segment-id: RemoteLogSegmentId{topicIdPartition=i7X8wcy7QIex4IMwbae74Q:topicA-0, id=7qqnmZzMQbWaMBx0mebMfA} (kafka.log.remote.RemoteLogManager$RLMTask:780)
[2023-12-06 05:14:49,923] ERROR Error occurred while reading the remote data for topicA-0 (kafka.log.remote.RemoteLogReader:71)
org.apache.kafka.common.KafkaException: java.nio.channels.ClosedByInterruptException
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$14(RemoteIndexCache.java:413)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.loadIndexFile(RemoteIndexCache.java:356)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.createCacheEntry(RemoteIndexCache.java:401)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$getIndexEntry$10(RemoteIndexCache.java:375)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.getIndexEntry(RemoteIndexCache.java:374)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1336)
at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1282)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedByInterruptException
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:171)
at java.base/sun.nio.ch.FileChannelImpl.mapInternal(FileChannelImpl.java:1150)
at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:1032)
at org.apache.kafka.storage.internals.log.AbstractIndex.createMappedBuffer(AbstractIndex.java:469)
at org.apache.kafka.storage.internals.log.AbstractIndex.createAndAssignMmap(AbstractIndex.java:105)
at org.apache.kafka.storage.internals.log.AbstractIndex.<init>(AbstractIndex.java:83)
at org.apache.kafka.storage.internals.log.TimeIndex.<init>(TimeIndex.java:65)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$14(RemoteIndexCache.java:409)
... 19 more
[2023-12-06 05:14:50,063] ERROR Error occurred while reading the remote data for topicA-0 (kafka.log.remote.RemoteLogReader:71)
org.apache.kafka.common.KafkaException: java.nio.channels.ClosedByInterruptException
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$12(RemoteIndexCache.java:397)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.loadIndexFile(RemoteIndexCache.java:345)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.createCacheEntry(RemoteIndexCache.java:385)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$getIndexEntry$10(RemoteIndexCache.java:375)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.getIndexEntry(RemoteIndexCache.java:374)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1336)
at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1282)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedByInterruptException
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:171)
at java.base/sun.nio.ch.FileChannelImpl.mapInternal(FileChannelImpl.java:1150)
at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:1032)
at org.apache.kafka.storage.internals.log.AbstractIndex.createMappedBuffer(AbstractIndex.java:469)
at org.apache.kafka.storage.internals.log.AbstractIndex.createAndAssignMmap(AbstractIndex.java:105)
at org.apache.kafka.storage.internals.log.AbstractIndex.<init>(AbstractIndex.java:83)
at org.apache.kafka.storage.internals.log.OffsetIndex.<init>(OffsetIndex.java:70)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$12(RemoteIndexCache.java:393)
... 19 more
[2023-12-06 05:14:51,417] ERROR Error occurred while reading the remote data for topicA-0 (kafka.log.remote.RemoteLogReader:71)
org.apache.kafka.common.KafkaException: java.nio.channels.ClosedByInterruptException
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$16(RemoteIndexCache.java:433)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.loadIndexFile(RemoteIndexCache.java:356)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.createCacheEntry(RemoteIndexCache.java:417)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$getIndexEntry$10(RemoteIndexCache.java:375)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.getIndexEntry(RemoteIndexCache.java:374)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1336)
at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1282)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedByInterruptException
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:171)
at java.base/sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:395)
at org.apache.kafka.storage.internals.log.TransactionIndex.openChannel(TransactionIndex.java:201)
at org.apache.kafka.storage.internals.log.TransactionIndex.<init>(TransactionIndex.java:73)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$16(RemoteIndexCache.java:429)
... 19 more
[2023-12-06 05:14:53,114] ERROR Error occurred while reading the remote data for topicA-0 (kafka.log.remote.RemoteLogReader:71)
org.apache.kafka.common.KafkaException: java.nio.channels.ClosedByInterruptException
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$14(RemoteIndexCache.java:413)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.loadIndexFile(RemoteIndexCache.java:356)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.createCacheEntry(RemoteIndexCache.java:401)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$getIndexEntry$10(RemoteIndexCache.java:375)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.getIndexEntry(RemoteIndexCache.java:374)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1336)
at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1282)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedByInterruptException
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:171)
at java.base/sun.nio.ch.FileChannelImpl.mapInternal(FileChannelImpl.java:1150)
at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:1032)
at org.apache.kafka.storage.internals.log.AbstractIndex.createMappedBuffer(AbstractIndex.java:469)
at org.apache.kafka.storage.internals.log.AbstractIndex.createAndAssignMmap(AbstractIndex.java:105)
at org.apache.kafka.storage.internals.log.AbstractIndex.<init>(AbstractIndex.java:83)
at org.apache.kafka.storage.internals.log.TimeIndex.<init>(TimeIndex.java:65)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$14(RemoteIndexCache.java:409)
... 19 more
[2023-12-06 05:14:55,513] ERROR Error occurred while reading the remote data for topicA-0 (kafka.log.remote.RemoteLogReader:71)
org.apache.kafka.common.KafkaException: java.nio.channels.ClosedByInterruptException
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$16(RemoteIndexCache.java:433)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.loadIndexFile(RemoteIndexCache.java:356)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.createCacheEntry(RemoteIndexCache.java:417)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$getIndexEntry$10(RemoteIndexCache.java:375)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.getIndexEntry(RemoteIndexCache.java:374)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1336)
at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1282)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedByInterruptException
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:171)
at java.base/sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:395)
at org.apache.kafka.storage.internals.log.TransactionIndex.openChannel(TransactionIndex.java:201)
at org.apache.kafka.storage.internals.log.TransactionIndex.<init>(TransactionIndex.java:73)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$16(RemoteIndexCache.java:429)
... 19 more
[2023-12-06 05:14:55,767] ERROR Error occurred while reading the remote data for topicA-0 (kafka.log.remote.RemoteLogReader:71)
org.apache.kafka.common.KafkaException: java.nio.channels.ClosedByInterruptException
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$12(RemoteIndexCache.java:397)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.loadIndexFile(RemoteIndexCache.java:345)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.createCacheEntry(RemoteIndexCache.java:385)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$getIndexEntry$10(RemoteIndexCache.java:375)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.getIndexEntry(RemoteIndexCache.java:374)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1336)
at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1282)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.nio.channels.ClosedByInterruptException
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:171)
at java.base/sun.nio.ch.FileChannelImpl.mapInternal(FileChannelImpl.java:1150)
at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:1032)
at org.apache.kafka.storage.internals.log.AbstractIndex.createMappedBuffer(AbstractIndex.java:469)
at org.apache.kafka.storage.internals.log.AbstractIndex.createAndAssignMmap(AbstractIndex.java:105)
at org.apache.kafka.storage.internals.log.AbstractIndex.<init>(AbstractIndex.java:83)
at org.apache.kafka.storage.internals.log.OffsetIndex.<init>(OffsetIndex.java:70)
at org.apache.kafka.storage.internals.log.RemoteIndexCache.lambda$createCacheEntry$12(RemoteIndexCache.java:393)
... 19 more
[2023-12-06 05:14:57,770] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] About to delete remote log segment RemoteLogSegmentId{topicIdPartition=i7X8wcy7QIex4IMwbae74Q:topicA-0, id=CLxyIpDNQqCbP51RTTYn2A} due to retention size 1 breach. Log size after deletion will be 216. (kafka.log.remote.RemoteLogManager$RLMTask:861)
[2023-12-06 05:14:57,879] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] About to delete remote log segment RemoteLogSegmentId{topicIdPartition=i7X8wcy7QIex4IMwbae74Q:topicA-0, id=zIoCOM7wR0GKtNzywi95XA} due to retention size 1 breach. Log size after deletion will be 144. (kafka.log.remote.RemoteLogManager$RLMTask:861)
[2023-12-06 05:14:57,879] INFO [RemoteLogManager=0 partition=i7X8wcy7QIex4IMwbae74Q:topicA-0] About to delete remote log segment RemoteLogSegmentId{topicIdPartition=i7X8wcy7QIex4IMwbae74Q:topicA-0, id=7qqnmZzMQbWaMBx0mebMfA} due to retention size 1 breach. Log size after deletion will be 72. (kafka.log.remote.RemoteLogManager$RLMTask:861)
[2023-12-06 05:14:58,020] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 6 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:58,325] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 7 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:58,371] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 8 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:58,473] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 9 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:58,580] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 10 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)
[2023-12-06 05:14:58,728] INFO Wait until the consumer is caught up with the target partition 4 up-to offset 11 (org.apache.kafka.server.log.remote.metadata.storage.ConsumerManager:110)