Created December 1, 2021 17:35
Unable to transition clustering inflight to complete
21/12/01 01:19:22 INFO SparkContext: Created broadcast 1070 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:22 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1155 (MapPartitionsRDD[2610] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0))
21/12/01 01:19:22 INFO TaskSchedulerImpl: Adding task set 1155.0 with 1 tasks resource profile 0
21/12/01 01:19:22 INFO TaskSetManager: Starting task 0.0 in stage 1155.0 (TID 2184) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:19:22 INFO Executor: Running task 0.0 in stage 1155.0 (TID 2184)
21/12/01 01:19:23 INFO Executor: Finished task 0.0 in stage 1155.0 (TID 2184). 898 bytes result sent to driver
21/12/01 01:19:23 INFO TaskSetManager: Finished task 0.0 in stage 1155.0 (TID 2184) in 848 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:23 INFO TaskSchedulerImpl: Removed TaskSet 1155.0, whose tasks have all completed, from pool
21/12/01 01:19:23 INFO DAGScheduler: ResultStage 1155 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 0.864 s
21/12/01 01:19:23 INFO DAGScheduler: Job 786 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:19:23 INFO TaskSchedulerImpl: Killing all running tasks in stage 1155: Stage finished
21/12/01 01:19:23 INFO DAGScheduler: Job 786 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 0.864448 s
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Resetting replacedFileGroups to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=60
21/12/01 01:19:24 INFO RocksDBDAO: Prefix DELETE (query=part=) on hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Finished adding replaced file groups to partition (americas/brazil/sao_paulo) to ROCKSDB based view at /tmp/hoodie_timeline_rocksdb, Total file-groups=20
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Finished adding replaced file groups to partition (americas/united_states/san_francisco) to ROCKSDB based view at /tmp/hoodie_timeline_rocksdb, Total file-groups=20
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Finished adding replaced file groups to partition (asia/india/chennai) to ROCKSDB based view at /tmp/hoodie_timeline_rocksdb, Total file-groups=20
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Resetting replacedFileGroups to ROCKSDB based file-system view complete
21/12/01 01:19:24 INFO AbstractTableFileSystemView: Took 1518 ms to read 7 instants, 60 replaced file groups
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Initializing pending compaction operations. Count=0
21/12/01 01:19:24 INFO RocksDbBasedFileSystemView: Initializing external data file mapping. Count=0
21/12/01 01:19:24 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201005831499
21/12/01 01:19:25 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Resetting file groups in pending clustering to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=9
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=part=) on hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Resetting replacedFileGroups to ROCKSDB based file-system view complete
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Created ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=0, num entries=0
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=0, num entries=0
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=0, num entries=0
21/12/01 01:19:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/clustering/pending/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=d55682b9332795769babef5de2a9a1e4e5faa20cee01fdcacb036b97b8c4b954)
21/12/01 01:19:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/clustering/pending/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=d55682b9332795769babef5de2a9a1e4e5faa20cee01fdcacb036b97b8c4b954)
21/12/01 01:19:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/clustering/pending/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=d55682b9332795769babef5de2a9a1e4e5faa20cee01fdcacb036b97b8c4b954)
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=46, num entries=9
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=45, num entries=9
21/12/01 01:19:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/slices/partition/latest/?partition=americas%2Funited_states%2Fsan_francisco&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=d55682b9332795769babef5de2a9a1e4e5faa20cee01fdcacb036b97b8c4b954)
21/12/01 01:19:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/slices/partition/latest/?partition=asia%2Findia%2Fchennai&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=d55682b9332795769babef5de2a9a1e4e5faa20cee01fdcacb036b97b8c4b954)
21/12/01 01:19:25 INFO AbstractTableFileSystemView: Building file system view for partition (asia/india/chennai)
21/12/01 01:19:25 INFO AbstractTableFileSystemView: Building file system view for partition (americas/united_states/san_francisco)
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=3. Serialization Time taken(micro)=3672, num entries=9
21/12/01 01:19:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/slices/partition/latest/?partition=americas%2Fbrazil%2Fsao_paulo&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=d55682b9332795769babef5de2a9a1e4e5faa20cee01fdcacb036b97b8c4b954)
21/12/01 01:19:25 INFO AbstractTableFileSystemView: Building file system view for partition (americas/brazil/sao_paulo)
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Resetting and adding new partition (asia/india/chennai) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=15
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=type=slice,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=type=df,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Finished adding new partition (asia/india/chennai) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=15
21/12/01 01:19:25 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=16, NumFileGroups=15, FileGroupsCreationTime=1, StoreTimeTaken=1
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=78, num entries=15
21/12/01 01:19:25 INFO SparkSizeBasedClusteringPlanStrategy: Adding final clustering group 399260274 max bytes: 2147483648 num input slices: 2 output groups: 1
21/12/01 01:19:25 INFO Executor: Finished task 2.0 in stage 1153.0 (TID 2182). 1528 bytes result sent to driver
21/12/01 01:19:25 INFO TaskSetManager: Finished task 2.0 in stage 1153.0 (TID 2182) in 5647 ms on 192.168.1.48 (executor driver) (1/3)
21/12/01 01:19:25 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]}
21/12/01 01:19:25 INFO HoodieTimelineArchiveLog: No Instants to archive
21/12/01 01:19:25 INFO HoodieHeartbeatClient: Stopping heartbeat for instant 20211201005831499
21/12/01 01:19:25 INFO HoodieHeartbeatClient: Stopped heartbeat for instant 20211201005831499
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Resetting and adding new partition (americas/united_states/san_francisco) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=15
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=type=slice,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=type=df,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Finished adding new partition (americas/united_states/san_francisco) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=15
21/12/01 01:19:25 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=16, NumFileGroups=15, FileGroupsCreationTime=1, StoreTimeTaken=1
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Resetting and adding new partition (americas/brazil/sao_paulo) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=15
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=type=slice,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDBDAO: Prefix DELETE (query=type=df,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=3. Serialization Time taken(micro)=2926, num entries=15
21/12/01 01:19:25 INFO RocksDbBasedFileSystemView: Finished adding new partition (americas/brazil/sao_paulo) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=15
21/12/01 01:19:25 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=16, NumFileGroups=15, FileGroupsCreationTime=1, StoreTimeTaken=1
21/12/01 01:19:25 INFO SparkSizeBasedClusteringPlanStrategy: Adding final clustering group 401572154 max bytes: 2147483648 num input slices: 2 output groups: 1
21/12/01 01:19:25 INFO Executor: Finished task 1.0 in stage 1153.0 (TID 2181). 1600 bytes result sent to driver
21/12/01 01:19:25 INFO TaskSetManager: Finished task 1.0 in stage 1153.0 (TID 2181) in 5948 ms on 192.168.1.48 (executor driver) (2/3)
21/12/01 01:19:25 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=4. Serialization Time taken(micro)=3344, num entries=15
21/12/01 01:19:25 INFO SparkSizeBasedClusteringPlanStrategy: Adding final clustering group 400485608 max bytes: 2147483648 num input slices: 2 output groups: 1
21/12/01 01:19:25 INFO Executor: Finished task 0.0 in stage 1153.0 (TID 2180). 1556 bytes result sent to driver
21/12/01 01:19:25 INFO TaskSetManager: Finished task 0.0 in stage 1153.0 (TID 2180) in 5952 ms on 192.168.1.48 (executor driver) (3/3)
21/12/01 01:19:25 INFO TaskSchedulerImpl: Removed TaskSet 1153.0, whose tasks have all completed, from pool
21/12/01 01:19:25 INFO DAGScheduler: ResultStage 1153 (collect at HoodieSparkEngineContext.java:134) finished in 6.013 s
21/12/01 01:19:25 INFO DAGScheduler: Job 784 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:19:25 INFO TaskSchedulerImpl: Killing all running tasks in stage 1153: Stage finished
21/12/01 01:19:25 INFO DAGScheduler: Job 784 finished: collect at HoodieSparkEngineContext.java:134, took 6.012890 s
21/12/01 01:19:25 INFO HeartbeatUtils: Deleted the heartbeat for instant 20211201005831499
21/12/01 01:19:25 INFO HoodieHeartbeatClient: Deleted heartbeat file for instant 20211201005831499
21/12/01 01:19:25 INFO SparkContext: Starting job: collect at SparkHoodieBackedTableMetadataWriter.java:146
21/12/01 01:19:25 INFO DAGScheduler: Got job 787 (collect at SparkHoodieBackedTableMetadataWriter.java:146) with 1 output partitions
21/12/01 01:19:25 INFO DAGScheduler: Final stage: ResultStage 1157 (collect at SparkHoodieBackedTableMetadataWriter.java:146)
21/12/01 01:19:25 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1156)
21/12/01 01:19:25 INFO DAGScheduler: Missing parents: List()
21/12/01 01:19:25 INFO DAGScheduler: Submitting ResultStage 1157 (MapPartitionsRDD[2603] at flatMap at BaseSparkCommitActionExecutor.java:176), which has no missing parents
21/12/01 01:19:26 INFO MemoryStore: Block broadcast_1071 stored as values in memory (estimated size 424.1 KiB, free 363.6 MiB)
21/12/01 01:19:26 INFO MemoryStore: Block broadcast_1071_piece0 stored as bytes in memory (estimated size 150.2 KiB, free 363.5 MiB)
21/12/01 01:19:26 INFO BlockManagerInfo: Added broadcast_1071_piece0 in memory on 192.168.1.48:56496 (size: 150.2 KiB, free: 365.5 MiB)
21/12/01 01:19:26 INFO SparkContext: Created broadcast 1071 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:26 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1157 (MapPartitionsRDD[2603] at flatMap at BaseSparkCommitActionExecutor.java:176) (first 15 tasks are for partitions Vector(0))
21/12/01 01:19:26 INFO TaskSchedulerImpl: Adding task set 1157.0 with 1 tasks resource profile 0
21/12/01 01:19:26 INFO TaskSetManager: Starting task 0.0 in stage 1157.0 (TID 2185) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:19:26 INFO Executor: Running task 0.0 in stage 1157.0 (TID 2185)
21/12/01 01:19:26 INFO BlockManager: Found block rdd_2603_0 locally
21/12/01 01:19:26 INFO Executor: Finished task 0.0 in stage 1157.0 (TID 2185). 1746 bytes result sent to driver
21/12/01 01:19:26 INFO TaskSetManager: Finished task 0.0 in stage 1157.0 (TID 2185) in 20 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:26 INFO TaskSchedulerImpl: Removed TaskSet 1157.0, whose tasks have all completed, from pool
21/12/01 01:19:26 INFO DAGScheduler: ResultStage 1157 (collect at SparkHoodieBackedTableMetadataWriter.java:146) finished in 0.082 s
21/12/01 01:19:26 INFO DAGScheduler: Job 787 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:19:26 INFO TaskSchedulerImpl: Killing all running tasks in stage 1157: Stage finished
21/12/01 01:19:26 INFO DAGScheduler: Job 787 finished: collect at SparkHoodieBackedTableMetadataWriter.java:146, took 0.083044 s
21/12/01 01:19:26 INFO HoodieDeltaStreamer: Scheduled async clustering for instant: 20211201011906814
21/12/01 01:19:26 INFO HoodieAsyncService: Enqueuing new pending clustering instant: 20211201011906814
21/12/01 01:19:26 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]}
21/12/01 01:19:26 INFO SparkRDDWriteClient: Committing Clustering 20211201005831499. Finished with result HoodieReplaceMetadata{partitionToWriteStats={americas/brazil/sao_paulo=[HoodieWriteStat{fileId='b27687f7-ef5f-4b64-9322-6168e5c4a6f4-0', path='americas/brazil/sao_paulo/b27687f7-ef5f-4b64-9322-6168e5c4a6f4-0_0-1125-2147_20211201005831499.parquet', prevCommit='null', numWrites=200120, numDeletes=0, numUpdateWrites=0, totalWriteBytes=18621127, totalWriteErrors=0, tempPath='null', partitionPath='americas/brazil/sao_paulo', totalLogRecords=0, totalLogFilesCompacted=0, totalLogSizeCompacted=0, totalUpdatedRecordsCompacted=0, totalLogBlocks=0, totalCorruptLogBlock=0, totalRollbackBlocks=0}], americas/united_states/san_francisco=[HoodieWriteStat{fileId='d0293067-0948-4804-824a-ee888e0c4999-0', path='americas/united_states/san_francisco/d0293067-0948-4804-824a-ee888e0c4999-0_1-1125-2148_20211201005831499.parquet', prevCommit='null', numWrites=200239, numDeletes=0, numUpdateWrites=0, totalWriteBytes=18665722, totalWriteErrors=0, tempPath='null', partitionPath='americas/united_states/san_francisco', totalLogRecords=0, totalLogFilesCompacted=0, totalLogSizeCompacted=0, totalUpdatedRecordsCompacted=0, totalLogBlocks=0, totalCorruptLogBlock=0, totalRollbackBlocks=0}], asia/india/chennai=[HoodieWriteStat{fileId='09ef9ba2-32b5-4368-9ed1-497a85e93e2c-0', path='asia/india/chennai/09ef9ba2-32b5-4368-9ed1-497a85e93e2c-0_2-1125-2149_20211201005831499.parquet', prevCommit='null', numWrites=199641, numDeletes=0, numUpdateWrites=0, totalWriteBytes=18554350, totalWriteErrors=0, tempPath='null', partitionPath='asia/india/chennai', totalLogRecords=0, totalLogFilesCompacted=0, totalLogSizeCompacted=0, totalUpdatedRecordsCompacted=0, totalLogBlocks=0, totalCorruptLogBlock=0, totalRollbackBlocks=0}]}, partitionToReplaceFileIds={americas/brazil/sao_paulo=[97393a21-b04e-4107-98df-1b18b464df62-0], americas/united_states/san_francisco=[97393a21-b04e-4107-98df-1b18b464df62-1], asia/india/chennai=[97393a21-b04e-4107-98df-1b18b464df62-2]}, compacted=false, extraMetadata={schema={"type":"record","name":"triprec","fields":[{"name":"begin_lat","type":"double"},{"name":"begin_lon","type":"double"},{"name":"driver","type":"string"},{"name":"end_lat","type":"double"},{"name":"end_lon","type":"double"},{"name":"fare","type":"double"},{"name":"partitionpath","type":"string"},{"name":"rider","type":"string"},{"name":"ts","type":"long"},{"name":"uuid","type":"string"}]}}, operationType=CLUSTER}
21/12/01 01:19:26 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201005831499.replacecommit.inflight
21/12/01 01:19:26 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:26 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:19:26 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:26 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011906814__replacecommit__REQUESTED]}
21/12/01 01:19:27 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201005831499.replacecommit
21/12/01 01:19:27 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/dir/delete?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201005831499)
21/12/01 01:19:27 INFO DeltaSync: Checkpoint to resume from : Option{val=1638324392000}
21/12/01 01:19:27 INFO DFSPathSelector: Root path => s3a://hudi-testing/test_input_data/ source limit => 50485760
21/12/01 01:19:27 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:19:27 INFO DAGScheduler: Got job 788 (collectAsMap at HoodieSparkEngineContext.java:148) with 2 output partitions
21/12/01 01:19:27 INFO DAGScheduler: Final stage: ResultStage 1158 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:19:27 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:19:27 INFO DAGScheduler: Missing parents: List()
21/12/01 01:19:27 INFO DAGScheduler: Submitting ResultStage 1158 (MapPartitionsRDD[2612] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:19:27 INFO MemoryStore: Block broadcast_1072 stored as values in memory (estimated size 99.6 KiB, free 363.4 MiB)
21/12/01 01:19:27 INFO MemoryStore: Block broadcast_1072_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 363.3 MiB)
21/12/01 01:19:27 INFO BlockManagerInfo: Added broadcast_1072_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.5 MiB)
21/12/01 01:19:27 INFO SparkContext: Created broadcast 1072 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:27 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1158 (MapPartitionsRDD[2612] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0, 1))
21/12/01 01:19:27 INFO TaskSchedulerImpl: Adding task set 1158.0 with 2 tasks resource profile 0
21/12/01 01:19:27 INFO TaskSetManager: Starting task 0.0 in stage 1158.0 (TID 2186) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4418 bytes) taskResourceAssignments Map()
21/12/01 01:19:27 INFO TaskSetManager: Starting task 1.0 in stage 1158.0 (TID 2187) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4414 bytes) taskResourceAssignments Map()
21/12/01 01:19:27 INFO Executor: Running task 0.0 in stage 1158.0 (TID 2186)
21/12/01 01:19:27 INFO Executor: Running task 1.0 in stage 1158.0 (TID 2187)
21/12/01 01:19:28 INFO Executor: Finished task 0.0 in stage 1158.0 (TID 2186). 888 bytes result sent to driver
21/12/01 01:19:28 INFO TaskSetManager: Finished task 0.0 in stage 1158.0 (TID 2186) in 343 ms on 192.168.1.48 (executor driver) (1/2)
21/12/01 01:19:28 INFO Executor: Finished task 1.0 in stage 1158.0 (TID 2187). 884 bytes result sent to driver
21/12/01 01:19:28 INFO TaskSetManager: Finished task 1.0 in stage 1158.0 (TID 2187) in 869 ms on 192.168.1.48 (executor driver) (2/2)
21/12/01 01:19:28 INFO TaskSchedulerImpl: Removed TaskSet 1158.0, whose tasks have all completed, from pool
21/12/01 01:19:28 INFO DAGScheduler: ResultStage 1158 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 0.887 s
21/12/01 01:19:28 INFO DAGScheduler: Job 788 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:19:28 INFO TaskSchedulerImpl: Killing all running tasks in stage 1158: Stage finished
21/12/01 01:19:28 INFO DAGScheduler: Job 788 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 0.887310 s
21/12/01 01:19:29 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201005831499
21/12/01 01:19:29 INFO SparkRDDWriteClient: Clustering successfully on commit 20211201005831499
21/12/01 01:19:29 INFO AsyncClusteringService: Finished clustering for instant [==>20211201005831499__replacecommit__REQUESTED]
21/12/01 01:19:29 INFO HoodieAsyncService: Waiting for next instant upto 10 seconds
21/12/01 01:19:29 INFO AsyncClusteringService: Starting clustering for instant [==>20211201010227744__replacecommit__REQUESTED]
21/12/01 01:19:29 INFO HoodieSparkClusteringClient: Executing clustering instance [==>20211201010227744__replacecommit__REQUESTED]
21/12/01 01:19:29 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:29 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:19:30 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:30 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:30 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011906814__replacecommit__REQUESTED]}
21/12/01 01:19:30 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:30 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:19:30 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:30 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:19:30 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:19:31 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:19:31 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST
21/12/01 01:19:31 INFO FileSystemViewManager: Creating remote first table view
21/12/01 01:19:31 INFO FileSystemViewManager: Creating remote view for basePath s3a://hudi-testing/test_hoodie_table_2. Server=192.168.1.48:56507, Timeout=300
21/12/01 01:19:31 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:19:32 INFO AbstractTableFileSystemView: Took 1771 ms to read 8 instants, 63 replaced file groups
21/12/01 01:19:33 INFO ClusteringUtils: Found 12 files in pending clustering operations
21/12/01 01:19:33 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=8c54ec0d1a4e68f75130fd2202a0fdc5813de8cf68a82f3ff3d8a3eb411ea167)
21/12/01 01:19:33 INFO RocksDbBasedFileSystemView: Closing Rocksdb !!
21/12/01 01:19:33 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:463] Shutdown: canceling all background work
21/12/01 01:19:33 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:642] Shutdown complete
21/12/01 01:19:33 INFO RocksDbBasedFileSystemView: Closed Rocksdb !!
21/12/01 01:19:35 INFO AbstractTableFileSystemView: Took 1705 ms to read 8 instants, 63 replaced file groups
21/12/01 01:19:36 INFO ClusteringUtils: Found 12 files in pending clustering operations
21/12/01 01:19:36 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now
21/12/01 01:19:36 INFO SparkRDDWriteClient: Starting clustering at 20211201010227744
21/12/01 01:19:36 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201010227744.replacecommit.requested
21/12/01 01:19:37 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201010227744.replacecommit.inflight
21/12/01 01:19:38 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011906814__replacecommit__REQUESTED]}
21/12/01 01:19:38 INFO SparkSortAndSizeExecutionStrategy: Starting clustering for a group, parallelism:1 commit:20211201010227744
21/12/01 01:19:38 INFO BlockManagerInfo: Removed broadcast_1067_piece0 on 192.168.1.48:56496 in memory (size: 150.3 KiB, free: 365.7 MiB)
21/12/01 01:19:38 INFO BlockManager: Removing RDD 2603
21/12/01 01:19:38 INFO BlockManagerInfo: Removed broadcast_1072_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.7 MiB)
21/12/01 01:19:38 INFO BlockManagerInfo: Removed broadcast_1071_piece0 on 192.168.1.48:56496 in memory (size: 150.2 KiB, free: 365.8 MiB)
21/12/01 01:19:38 INFO BlockManagerInfo: Removed broadcast_1068_piece0 on 192.168.1.48:56496 in memory (size: 177.7 KiB, free: 366.0 MiB)
21/12/01 01:19:38 INFO BlockManagerInfo: Removed broadcast_1070_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 366.0 MiB)
21/12/01 01:19:38 INFO BlockManager: Removing RDD 2588
21/12/01 01:19:38 INFO BlockManagerInfo: Removed broadcast_1069_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 366.1 MiB)
21/12/01 01:19:38 INFO SparkSortAndSizeExecutionStrategy: Starting clustering for a group, parallelism:1 commit:20211201010227744
21/12/01 01:19:38 INFO SparkSortAndSizeExecutionStrategy: Starting clustering for a group, parallelism:1 commit:20211201010227744
21/12/01 01:19:38 INFO SparkContext: Starting job: collect at SparkExecuteClusteringCommitActionExecutor.java:85
21/12/01 01:19:38 INFO DAGScheduler: Registering RDD 2632 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 256
21/12/01 01:19:38 INFO DAGScheduler: Registering RDD 2624 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 257
21/12/01 01:19:38 INFO DAGScheduler: Registering RDD 2616 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 258
21/12/01 01:19:38 INFO DAGScheduler: Got job 789 (collect at SparkExecuteClusteringCommitActionExecutor.java:85) with 3 output partitions
21/12/01 01:19:38 INFO DAGScheduler: Final stage: ResultStage 1162 (collect at SparkExecuteClusteringCommitActionExecutor.java:85)
21/12/01 01:19:38 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1159, ShuffleMapStage 1160, ShuffleMapStage 1161)
21/12/01 01:19:38 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1159, ShuffleMapStage 1160, ShuffleMapStage 1161)
21/12/01 01:19:38 INFO DAGScheduler: Submitting ShuffleMapStage 1159 (MapPartitionsRDD[2632] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents
21/12/01 01:19:38 INFO MemoryStore: Block broadcast_1073 stored as values in memory (estimated size 512.2 KiB, free 365.0 MiB)
21/12/01 01:19:38 INFO MemoryStore: Block broadcast_1073_piece0 stored as bytes in memory (estimated size 179.3 KiB, free 364.9 MiB)
21/12/01 01:19:38 INFO BlockManagerInfo: Added broadcast_1073_piece0 in memory on 192.168.1.48:56496 (size: 179.3 KiB, free: 365.9 MiB)
21/12/01 01:19:38 INFO SparkContext: Created broadcast 1073 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:38 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1159 (MapPartitionsRDD[2632] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0))
21/12/01 01:19:38 INFO TaskSchedulerImpl: Adding task set 1159.0 with 1 tasks resource profile 0
21/12/01 01:19:38 INFO DAGScheduler: Submitting ShuffleMapStage 1160 (MapPartitionsRDD[2624] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents
21/12/01 01:19:38 INFO TaskSetManager: Starting task 0.0 in stage 1159.0 (TID 2188) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4595 bytes) taskResourceAssignments Map()
21/12/01 01:19:38 INFO Executor: Running task 0.0 in stage 1159.0 (TID 2188)
21/12/01 01:19:38 INFO MemoryStore: Block broadcast_1074 stored as values in memory (estimated size 512.2 KiB, free 364.4 MiB)
21/12/01 01:19:38 INFO MemoryStore: Block broadcast_1074_piece0 stored as bytes in memory (estimated size 179.3 KiB, free 364.2 MiB)
21/12/01 01:19:38 INFO BlockManagerInfo: Added broadcast_1074_piece0 in memory on 192.168.1.48:56496 (size: 179.3 KiB, free: 365.7 MiB)
21/12/01 01:19:38 INFO SparkContext: Created broadcast 1074 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:38 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1160 (MapPartitionsRDD[2624] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0))
21/12/01 01:19:38 INFO TaskSchedulerImpl: Adding task set 1160.0 with 1 tasks resource profile 0
21/12/01 01:19:38 INFO DAGScheduler: Submitting ShuffleMapStage 1161 (MapPartitionsRDD[2616] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents
21/12/01 01:19:38 INFO TaskSetManager: Starting task 0.0 in stage 1160.0 (TID 2189) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4631 bytes) taskResourceAssignments Map()
21/12/01 01:19:38 INFO Executor: Running task 0.0 in stage 1160.0 (TID 2189)
21/12/01 01:19:38 INFO MemoryStore: Block broadcast_1075 stored as values in memory (estimated size 512.2 KiB, free 363.7 MiB)
21/12/01 01:19:38 INFO MemoryStore: Block broadcast_1075_piece0 stored as bytes in memory (estimated size 179.3 KiB, free 363.5 MiB)
21/12/01 01:19:38 INFO BlockManagerInfo: Added broadcast_1075_piece0 in memory on 192.168.1.48:56496 (size: 179.3 KiB, free: 365.6 MiB)
21/12/01 01:19:38 INFO SparkContext: Created broadcast 1075 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:38 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1161 (MapPartitionsRDD[2616] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0))
21/12/01 01:19:38 INFO TaskSchedulerImpl: Adding task set 1161.0 with 1 tasks resource profile 0
21/12/01 01:19:38 INFO TaskSetManager: Starting task 0.0 in stage 1161.0 (TID 2190) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4609 bytes) taskResourceAssignments Map() | |
21/12/01 01:19:38 INFO Executor: Running task 0.0 in stage 1161.0 (TID 2190) | |
21/12/01 01:19:39 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:39 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:39 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:39 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 199629 records. | |
21/12/01 01:19:39 INFO InternalParquetRecordReader: at row 0. reading next block | |
21/12/01 01:19:39 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 200194 records. | |
21/12/01 01:19:39 INFO InternalParquetRecordReader: at row 0. reading next block | |
21/12/01 01:19:39 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 200177 records. | |
21/12/01 01:19:39 INFO InternalParquetRecordReader: at row 0. reading next block | |
21/12/01 01:19:40 INFO CodecPool: Got brand-new decompressor [.gz] | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 2848 | |
21/12/01 01:19:40 INFO CodecPool: Got brand-new decompressor [.gz] | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 202 ms. row count = 3157 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 2848 records from 15 columns in 14 ms: 203.42857 rec/ms, 3051.4285 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 90% reading (129 ms) and 9% processing (14 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 2848. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 3157 records from 15 columns in 15 ms: 210.46666 rec/ms, 3157.0 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 93% reading (202 ms) and 6% processing (15 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 3157. reading next block | |
21/12/01 01:19:40 INFO InMemoryFileIndex: It took 1304 ms to list leaf files for 6 paths. | |
21/12/01 01:19:40 INFO SparkContext: Starting job: parquet at ParquetDFSSource.java:55 | |
21/12/01 01:19:40 INFO DAGScheduler: Got job 790 (parquet at ParquetDFSSource.java:55) with 1 output partitions | |
21/12/01 01:19:40 INFO DAGScheduler: Final stage: ResultStage 1163 (parquet at ParquetDFSSource.java:55) | |
21/12/01 01:19:40 INFO DAGScheduler: Parents of final stage: List() | |
21/12/01 01:19:40 INFO DAGScheduler: Missing parents: List() | |
21/12/01 01:19:40 INFO DAGScheduler: Submitting ResultStage 1163 (MapPartitionsRDD[2640] at parquet at ParquetDFSSource.java:55), which has no missing parents | |
21/12/01 01:19:40 INFO CodecPool: Got brand-new decompressor [.gz] | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 153 ms. row count = 3029 | |
21/12/01 01:19:40 INFO MemoryStore: Block broadcast_1076 stored as values in memory (estimated size 101.6 KiB, free 363.4 MiB) | |
21/12/01 01:19:40 INFO MemoryStore: Block broadcast_1076_piece0 stored as bytes in memory (estimated size 36.5 KiB, free 363.4 MiB) | |
21/12/01 01:19:40 INFO BlockManagerInfo: Added broadcast_1076_piece0 in memory on 192.168.1.48:56496 (size: 36.5 KiB, free: 365.5 MiB) | |
21/12/01 01:19:40 INFO SparkContext: Created broadcast 1076 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:19:40 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1163 (MapPartitionsRDD[2640] at parquet at ParquetDFSSource.java:55) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:19:40 INFO TaskSchedulerImpl: Adding task set 1163.0 with 1 tasks resource profile 0 | |
21/12/01 01:19:40 INFO TaskSetManager: Starting task 0.0 in stage 1163.0 (TID 2191) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4446 bytes) taskResourceAssignments Map() | |
21/12/01 01:19:40 INFO Executor: Running task 0.0 in stage 1163.0 (TID 2191) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 3029 records from 15 columns in 15 ms: 201.93333 rec/ms, 3029.0 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 91% reading (153 ms) and 8% processing (15 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 3029. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 2848 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3157 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 5696 records from 15 columns in 28 ms: 203.42857 rec/ms, 3051.4285 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 90% reading (252 ms) and 10% processing (28 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 5696. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 6314 records from 15 columns in 60 ms: 105.23333 rec/ms, 1578.5 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 84% reading (330 ms) and 15% processing (60 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 6314. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3029 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 6058 records from 15 columns in 33 ms: 183.57576 rec/ms, 2753.6365 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 88% reading (264 ms) and 11% processing (33 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 6058. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 2848 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 8544 records from 15 columns in 45 ms: 189.86667 rec/ms, 2848.0 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 89% reading (372 ms) and 10% processing (45 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 8544. reading next block | |
21/12/01 01:19:40 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 133 ms. row count = 3157 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3029 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 9471 records from 15 columns in 74 ms: 127.98649 rec/ms, 1919.7972 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 86% reading (463 ms) and 13% processing (74 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 9471. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 9087 records from 15 columns in 46 ms: 197.54347 rec/ms, 2963.152 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 89% reading (375 ms) and 10% processing (46 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 9087. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 11392 records from 15 columns in 58 ms: 196.41379 rec/ms, 2946.2068 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 89% reading (484 ms) and 10% processing (58 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 11392. reading next block | |
21/12/01 01:19:40 INFO Executor: Finished task 0.0 in stage 1163.0 (TID 2191). 1385 bytes result sent to driver | |
21/12/01 01:19:40 INFO TaskSetManager: Finished task 0.0 in stage 1163.0 (TID 2191) in 356 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:19:40 INFO TaskSchedulerImpl: Removed TaskSet 1163.0, whose tasks have all completed, from pool | |
21/12/01 01:19:40 INFO DAGScheduler: ResultStage 1163 (parquet at ParquetDFSSource.java:55) finished in 0.371 s | |
21/12/01 01:19:40 INFO DAGScheduler: Job 790 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:19:40 INFO TaskSchedulerImpl: Killing all running tasks in stage 1163: Stage finished | |
21/12/01 01:19:40 INFO DAGScheduler: Job 790 finished: parquet at ParquetDFSSource.java:55, took 0.371836 s | |
21/12/01 01:19:40 INFO FileSourceStrategy: Pushed Filters: | |
21/12/01 01:19:40 INFO FileSourceStrategy: Post-Scan Filters: | |
21/12/01 01:19:40 INFO FileSourceStrategy: Output Data Schema: struct<begin_lat: double, begin_lon: double, driver: string, end_lat: double, end_lon: double ... 8 more fields> | |
21/12/01 01:19:40 INFO MemoryStore: Block broadcast_1077 stored as values in memory (estimated size 349.8 KiB, free 363.0 MiB) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3157 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3029 | |
21/12/01 01:19:40 INFO MemoryStore: Block broadcast_1077_piece0 stored as bytes in memory (estimated size 34.5 KiB, free 363.0 MiB) | |
21/12/01 01:19:40 INFO BlockManagerInfo: Added broadcast_1077_piece0 in memory on 192.168.1.48:56496 (size: 34.5 KiB, free: 365.5 MiB) | |
21/12/01 01:19:40 INFO SparkContext: Created broadcast 1077 from toRdd at HoodieSparkUtils.scala:152 | |
21/12/01 01:19:40 INFO FileSourceScanExec: Planning scan with bin packing, max size: 6215407 bytes, open cost is considered as scanning 4194304 bytes. | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 12628 records from 15 columns in 90 ms: 140.31111 rec/ms, 2104.6667 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 86% reading (574 ms) and 13% processing (90 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 12628. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 12116 records from 15 columns in 75 ms: 161.54666 rec/ms, 2423.2 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 86% reading (489 ms) and 13% processing (75 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 12116. reading next block | |
21/12/01 01:19:40 INFO SparkContext: Starting job: isEmpty at DeltaSync.java:445 | |
21/12/01 01:19:40 INFO DAGScheduler: Got job 791 (isEmpty at DeltaSync.java:445) with 1 output partitions | |
21/12/01 01:19:40 INFO DAGScheduler: Final stage: ResultStage 1164 (isEmpty at DeltaSync.java:445) | |
21/12/01 01:19:40 INFO DAGScheduler: Parents of final stage: List() | |
21/12/01 01:19:40 INFO DAGScheduler: Missing parents: List() | |
21/12/01 01:19:40 INFO DAGScheduler: Submitting ResultStage 1164 (MapPartitionsRDD[2646] at mapPartitions at HoodieSparkUtils.scala:153), which has no missing parents | |
21/12/01 01:19:40 INFO MemoryStore: Block broadcast_1078 stored as values in memory (estimated size 46.0 KiB, free 363.0 MiB) | |
21/12/01 01:19:40 INFO MemoryStore: Block broadcast_1078_piece0 stored as bytes in memory (estimated size 15.7 KiB, free 362.9 MiB) | |
21/12/01 01:19:40 INFO BlockManagerInfo: Added broadcast_1078_piece0 in memory on 192.168.1.48:56496 (size: 15.7 KiB, free: 365.5 MiB) | |
21/12/01 01:19:40 INFO SparkContext: Created broadcast 1078 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:19:40 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1164 (MapPartitionsRDD[2646] at mapPartitions at HoodieSparkUtils.scala:153) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:19:40 INFO TaskSchedulerImpl: Adding task set 1164.0 with 1 tasks resource profile 0 | |
21/12/01 01:19:40 INFO TaskSetManager: Starting task 0.0 in stage 1164.0 (TID 2192) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4929 bytes) taskResourceAssignments Map() | |
21/12/01 01:19:40 INFO Executor: Running task 0.0 in stage 1164.0 (TID 2192) | |
21/12/01 01:19:40 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/8/part-00000-feb37af9-4c18-4a76-aac0-6dbacb7c1464-c000.snappy.parquet, range: 0-6215407, partition values: [empty row] | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 14240 records from 15 columns in 75 ms: 189.86667 rec/ms, 2848.0 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 88% reading (593 ms) and 11% processing (75 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 14240. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3157 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 105 ms. row count = 3029 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 15785 records from 15 columns in 105 ms: 150.33333 rec/ms, 2255.0 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 86% reading (684 ms) and 13% processing (105 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 15785. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 15145 records from 15 columns in 88 ms: 172.10228 rec/ms, 2581.5342 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 87% reading (594 ms) and 12% processing (88 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 15145. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 2848 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 17088 records from 15 columns in 88 ms: 194.18182 rec/ms, 2912.7273 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 88% reading (701 ms) and 11% processing (88 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 17088. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 199 ms. row count = 3157 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: block read in memory in 191 ms. row count = 3029 | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 18942 records from 15 columns in 119 ms: 159.17647 rec/ms, 2387.647 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 88% reading (883 ms) and 11% processing (119 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 18942. reading next block | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: Assembled and processed 18174 records from 15 columns in 101 ms: 179.9406 rec/ms, 2699.109 cell/ms | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: time spent so far 88% reading (785 ms) and 11% processing (101 ms) | |
21/12/01 01:19:40 INFO InternalParquetRecordReader: at row 18174. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 315 ms. row count = 2848 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 168 ms. row count = 3157 | |
21/12/01 01:19:41 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 170 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 19936 records from 15 columns in 102 ms: 195.45097 rec/ms, 2931.7646 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 90% reading (1016 ms) and 9% processing (102 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 19936. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 22099 records from 15 columns in 134 ms: 164.91791 rec/ms, 2473.7686 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1051 ms) and 11% processing (134 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 22099. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 21203 records from 15 columns in 114 ms: 185.99123 rec/ms, 2789.8684 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 89% reading (955 ms) and 10% processing (114 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 21203. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 182 ms. row count = 2848 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 180 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 185 ms. row count = 3157 | |
21/12/01 01:19:41 INFO BlockManagerInfo: Removed broadcast_1076_piece0 on 192.168.1.48:56496 in memory (size: 36.5 KiB, free: 365.5 MiB) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 22784 records from 15 columns in 131 ms: 173.92366 rec/ms, 2608.855 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 90% reading (1198 ms) and 9% processing (131 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 22784. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 24232 records from 15 columns in 144 ms: 168.27777 rec/ms, 2524.1667 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1135 ms) and 11% processing (144 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 24232. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 25256 records from 15 columns in 164 ms: 154.0 rec/ms, 2310.0 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1236 ms) and 11% processing (164 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 25256. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 3157 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 28413 records from 15 columns in 177 ms: 160.52542 rec/ms, 2407.8813 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1363 ms) and 11% processing (177 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 28413. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 184 ms. row count = 2848 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 182 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 25632 records from 15 columns in 143 ms: 179.24475 rec/ms, 2688.6714 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 90% reading (1382 ms) and 9% processing (143 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 25632. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 27261 records from 15 columns in 157 ms: 173.63695 rec/ms, 2604.5542 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 89% reading (1317 ms) and 10% processing (157 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 27261. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 3157 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 31570 records from 15 columns in 190 ms: 166.1579 rec/ms, 2492.3684 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1471 ms) and 11% processing (190 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 31570. reading next block | |
21/12/01 01:19:41 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 132 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 143 ms. row count = 2848 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 30290 records from 15 columns in 173 ms: 175.0867 rec/ms, 2626.3005 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 89% reading (1449 ms) and 10% processing (173 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 30290. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 28480 records from 15 columns in 161 ms: 176.89441 rec/ms, 2653.4163 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 90% reading (1525 ms) and 9% processing (161 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 28480. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3157 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 34727 records from 15 columns in 207 ms: 167.76329 rec/ms, 2516.4492 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1580 ms) and 11% processing (207 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 34727. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 2848 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 33319 records from 15 columns in 189 ms: 176.291 rec/ms, 2644.365 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 89% reading (1562 ms) and 10% processing (189 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 33319. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 31328 records from 15 columns in 175 ms: 179.01714 rec/ms, 2685.257 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 90% reading (1636 ms) and 9% processing (175 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 31328. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3157 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 37884 records from 15 columns in 220 ms: 172.2 rec/ms, 2583.0 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1690 ms) and 11% processing (220 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 37884. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 2848 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 36348 records from 15 columns in 204 ms: 178.17647 rec/ms, 2672.647 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 89% reading (1672 ms) and 10% processing (204 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 36348. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 34176 records from 15 columns in 188 ms: 181.78723 rec/ms, 2726.8086 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 90% reading (1747 ms) and 9% processing (188 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 34176. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 3157 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: Assembled and processed 41041 records from 15 columns in 233 ms: 176.14163 rec/ms, 2642.1245 cell/ms | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: time spent so far 88% reading (1803 ms) and 11% processing (233 ms) | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: at row 41041. reading next block | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3029 | |
21/12/01 01:19:41 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 39377 records from 15 columns in 219 ms: 179.80365 rec/ms, 2697.0547 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (1782 ms) and 10% processing (219 ms) | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 39377. reading next block | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 37024 records from 15 columns in 201 ms: 184.199 rec/ms, 2762.985 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 90% reading (1859 ms) and 9% processing (201 ms) | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 37024. reading next block | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 140 ms. row count = 3157 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 44198 records from 15 columns in 251 ms: 176.08765 rec/ms, 2641.3147 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (1943 ms) and 11% processing (251 ms) | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 44198. reading next block | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 42406 records from 15 columns in 250 ms: 169.624 rec/ms, 2544.36 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (1894 ms) and 11% processing (250 ms) | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 42406. reading next block | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 39872 records from 15 columns in 217 ms: 183.74193 rec/ms, 2756.1292 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 90% reading (1971 ms) and 9% processing (217 ms) | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 39872. reading next block | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 3157 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 47355 records from 15 columns in 269 ms: 176.0409 rec/ms, 2640.6133 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2059 ms) and 11% processing (269 ms) | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 47355. reading next block | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848 | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 45435 records from 15 columns in 268 ms: 169.53358 rec/ms, 2543.0037 cell/ms | |
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2006 ms) and 11% processing (268 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 45435. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 42720 records from 15 columns in 233 ms: 183.34764 rec/ms, 2750.2146 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (2083 ms) and 10% processing (233 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 42720. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3157
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 50512 records from 15 columns in 286 ms: 176.61539 rec/ms, 2649.2307 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2180 ms) and 11% processing (286 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 50512. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3029
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 2848
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 48464 records from 15 columns in 285 ms: 170.04912 rec/ms, 2550.7368 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2116 ms) and 11% processing (285 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 48464. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 45568 records from 15 columns in 249 ms: 183.00401 rec/ms, 2745.0603 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (2197 ms) and 10% processing (249 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 45568. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 156 ms. row count = 3157
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 53669 records from 15 columns in 304 ms: 176.54277 rec/ms, 2648.1414 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2336 ms) and 11% processing (304 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 53669. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 48416 records from 15 columns in 265 ms: 182.70189 rec/ms, 2740.5283 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (2309 ms) and 10% processing (265 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 48416. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 153 ms. row count = 3029
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 51493 records from 15 columns in 302 ms: 170.50662 rec/ms, 2557.5994 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2269 ms) and 11% processing (302 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 51493. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 118 ms. row count = 3157
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 56826 records from 15 columns in 321 ms: 177.02803 rec/ms, 2655.4207 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2454 ms) and 11% processing (321 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 56826. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 140 ms. row count = 2848
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 51264 records from 15 columns in 281 ms: 182.43416 rec/ms, 2736.5125 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (2449 ms) and 10% processing (281 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 51264. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 151 ms. row count = 3029
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 54522 records from 15 columns in 320 ms: 170.38126 rec/ms, 2555.7188 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2420 ms) and 11% processing (320 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 54522. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 3157
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 59983 records from 15 columns in 338 ms: 177.4645 rec/ms, 2661.9675 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2581 ms) and 11% processing (338 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 59983. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 2848
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 54112 records from 15 columns in 297 ms: 182.19528 rec/ms, 2732.9292 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (2577 ms) and 10% processing (297 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 54112. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3029
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 57551 records from 15 columns in 337 ms: 170.77448 rec/ms, 2561.6172 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2529 ms) and 11% processing (337 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 57551. reading next block
21/12/01 01:19:42 INFO Executor: Finished task 0.0 in stage 1164.0 (TID 2192). 1900 bytes result sent to driver
21/12/01 01:19:42 INFO TaskSetManager: Finished task 0.0 in stage 1164.0 (TID 2192) in 2351 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:42 INFO TaskSchedulerImpl: Removed TaskSet 1164.0, whose tasks have all completed, from pool
21/12/01 01:19:42 INFO DAGScheduler: ResultStage 1164 (isEmpty at DeltaSync.java:445) finished in 2.353 s
21/12/01 01:19:42 INFO DAGScheduler: Job 791 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:19:42 INFO TaskSchedulerImpl: Killing all running tasks in stage 1164: Stage finished
21/12/01 01:19:42 INFO DAGScheduler: Job 791 finished: isEmpty at DeltaSync.java:445, took 2.353456 s
21/12/01 01:19:42 INFO BlockManagerInfo: Removed broadcast_1078_piece0 on 192.168.1.48:56496 in memory (size: 15.7 KiB, free: 365.5 MiB)
21/12/01 01:19:42 INFO SparkContext: Starting job: isEmpty at DeltaSync.java:492
21/12/01 01:19:42 INFO DAGScheduler: Got job 792 (isEmpty at DeltaSync.java:492) with 1 output partitions
21/12/01 01:19:42 INFO DAGScheduler: Final stage: ResultStage 1165 (isEmpty at DeltaSync.java:492)
21/12/01 01:19:42 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:19:42 INFO DAGScheduler: Missing parents: List()
21/12/01 01:19:42 INFO DAGScheduler: Submitting ResultStage 1165 (MapPartitionsRDD[2647] at map at DeltaSync.java:452), which has no missing parents
21/12/01 01:19:42 INFO MemoryStore: Block broadcast_1079 stored as values in memory (estimated size 51.0 KiB, free 363.1 MiB)
21/12/01 01:19:42 INFO MemoryStore: Block broadcast_1079_piece0 stored as bytes in memory (estimated size 18.4 KiB, free 363.1 MiB)
21/12/01 01:19:42 INFO BlockManagerInfo: Added broadcast_1079_piece0 in memory on 192.168.1.48:56496 (size: 18.4 KiB, free: 365.5 MiB)
21/12/01 01:19:42 INFO SparkContext: Created broadcast 1079 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:42 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1165 (MapPartitionsRDD[2647] at map at DeltaSync.java:452) (first 15 tasks are for partitions Vector(0))
21/12/01 01:19:42 INFO TaskSchedulerImpl: Adding task set 1165.0 with 1 tasks resource profile 0
21/12/01 01:19:42 INFO TaskSetManager: Starting task 0.0 in stage 1165.0 (TID 2193) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4929 bytes) taskResourceAssignments Map()
21/12/01 01:19:42 INFO Executor: Running task 0.0 in stage 1165.0 (TID 2193)
21/12/01 01:19:42 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/8/part-00000-feb37af9-4c18-4a76-aac0-6dbacb7c1464-c000.snappy.parquet, range: 0-6215407, partition values: [empty row]
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3157
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 63140 records from 15 columns in 359 ms: 175.87744 rec/ms, 2638.1616 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 88% reading (2692 ms) and 11% processing (359 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 63140. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 2848
21/12/01 01:19:42 INFO InternalParquetRecordReader: Assembled and processed 56960 records from 15 columns in 313 ms: 181.98083 rec/ms, 2729.7124 cell/ms
21/12/01 01:19:42 INFO InternalParquetRecordReader: time spent so far 89% reading (2691 ms) and 10% processing (313 ms)
21/12/01 01:19:42 INFO InternalParquetRecordReader: at row 56960. reading next block
21/12/01 01:19:42 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 60580 records from 15 columns in 360 ms: 168.27777 rec/ms, 2524.1667 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (2638 ms) and 12% processing (360 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 60580. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 66297 records from 15 columns in 377 ms: 175.85411 rec/ms, 2637.8118 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 88% reading (2808 ms) and 11% processing (377 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 66297. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 59808 records from 15 columns in 331 ms: 180.68883 rec/ms, 2710.3323 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 89% reading (2801 ms) and 10% processing (331 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 59808. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 63609 records from 15 columns in 378 ms: 168.27777 rec/ms, 2524.1667 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (2745 ms) and 12% processing (378 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 63609. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 69454 records from 15 columns in 395 ms: 175.83292 rec/ms, 2637.4937 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 88% reading (2919 ms) and 11% processing (395 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 69454. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 62656 records from 15 columns in 347 ms: 180.56483 rec/ms, 2708.4727 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 89% reading (2908 ms) and 10% processing (347 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 62656. reading next block
21/12/01 01:19:43 INFO S3AInputStream: Switching to Random IO seek policy
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 66638 records from 15 columns in 395 ms: 168.7038 rec/ms, 2530.557 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (2859 ms) and 12% processing (395 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 66638. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 72611 records from 15 columns in 412 ms: 176.2403 rec/ms, 2643.6042 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 88% reading (3032 ms) and 11% processing (412 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 72611. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 65504 records from 15 columns in 363 ms: 180.4518 rec/ms, 2706.7769 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 89% reading (3020 ms) and 10% processing (363 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 65504. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 69667 records from 15 columns in 412 ms: 169.09467 rec/ms, 2536.42 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (2967 ms) and 12% processing (412 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 69667. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 75768 records from 15 columns in 430 ms: 176.20465 rec/ms, 2643.0698 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3141 ms) and 12% processing (430 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 75768. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 68352 records from 15 columns in 379 ms: 180.34828 rec/ms, 2705.2244 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 89% reading (3128 ms) and 10% processing (379 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 68352. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 72696 records from 15 columns in 429 ms: 169.45454 rec/ms, 2541.818 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3079 ms) and 12% processing (429 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 72696. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 71200 records from 15 columns in 397 ms: 179.3451 rec/ms, 2690.1763 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 89% reading (3236 ms) and 10% processing (397 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 71200. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 160 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 75725 records from 15 columns in 461 ms: 164.26247 rec/ms, 2463.937 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3188 ms) and 12% processing (461 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 75725. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 78925 records from 15 columns in 462 ms: 170.83333 rec/ms, 2562.5 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3301 ms) and 12% processing (462 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 78925. reading next block
21/12/01 01:19:43 INFO S3AInputStream: Switching to Random IO seek policy
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 74048 records from 15 columns in 414 ms: 178.85991 rec/ms, 2682.8984 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 88% reading (3349 ms) and 11% processing (414 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 74048. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 118 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 82082 records from 15 columns in 480 ms: 171.00417 rec/ms, 2565.0625 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3419 ms) and 12% processing (480 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 82082. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 146 ms. row count = 3029
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 78754 records from 15 columns in 478 ms: 164.75732 rec/ms, 2471.3599 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3334 ms) and 12% processing (478 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 78754. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 141 ms. row count = 2848
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 76896 records from 15 columns in 430 ms: 178.82791 rec/ms, 2682.4187 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 89% reading (3490 ms) and 10% processing (430 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 76896. reading next block
21/12/01 01:19:43 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3157
21/12/01 01:19:43 INFO InternalParquetRecordReader: Assembled and processed 85239 records from 15 columns in 498 ms: 171.16264 rec/ms, 2567.4397 cell/ms
21/12/01 01:19:43 INFO InternalParquetRecordReader: time spent so far 87% reading (3533 ms) and 12% processing (498 ms)
21/12/01 01:19:43 INFO InternalParquetRecordReader: at row 85239. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 161 ms. row count = 3029
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 81783 records from 15 columns in 495 ms: 165.21819 rec/ms, 2478.2727 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3495 ms) and 12% processing (495 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 81783. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 79744 records from 15 columns in 446 ms: 178.7982 rec/ms, 2681.9731 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (3602 ms) and 11% processing (446 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 79744. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 88396 records from 15 columns in 517 ms: 170.97873 rec/ms, 2564.681 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3661 ms) and 12% processing (517 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 88396. reading next block
21/12/01 01:19:44 INFO Executor: Finished task 0.0 in stage 1165.0 (TID 2193). 1764 bytes result sent to driver
21/12/01 01:19:44 INFO TaskSetManager: Finished task 0.0 in stage 1165.0 (TID 2193) in 1180 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:44 INFO TaskSchedulerImpl: Removed TaskSet 1165.0, whose tasks have all completed, from pool
21/12/01 01:19:44 INFO DAGScheduler: ResultStage 1165 (isEmpty at DeltaSync.java:492) finished in 1.182 s
21/12/01 01:19:44 INFO DAGScheduler: Job 792 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:19:44 INFO TaskSchedulerImpl: Killing all running tasks in stage 1165: Stage finished
21/12/01 01:19:44 INFO DAGScheduler: Job 792 finished: isEmpty at DeltaSync.java:492, took 1.182526 s
21/12/01 01:19:44 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 115 ms. row count = 3029
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 84812 records from 15 columns in 513 ms: 165.32553 rec/ms, 2479.883 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3610 ms) and 12% processing (513 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 84812. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 82592 records from 15 columns in 463 ms: 178.38445 rec/ms, 2675.7668 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (3711 ms) and 11% processing (463 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 82592. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 91553 records from 15 columns in 535 ms: 171.1271 rec/ms, 2566.9065 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3771 ms) and 12% processing (535 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 91553. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3029
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 105 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 87841 records from 15 columns in 556 ms: 157.98741 rec/ms, 2369.811 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 86% reading (3720 ms) and 13% processing (556 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 87841. reading next block
21/12/01 01:19:44 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 85440 records from 15 columns in 504 ms: 169.5238 rec/ms, 2542.8572 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (3816 ms) and 11% processing (504 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 85440. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 94710 records from 15 columns in 553 ms: 171.26582 rec/ms, 2568.9873 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3880 ms) and 12% processing (553 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 94710. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 3029
21/12/01 01:19:44 INFO BlockManagerInfo: Removed broadcast_1079_piece0 on 192.168.1.48:56496 in memory (size: 18.4 KiB, free: 365.5 MiB)
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 88288 records from 15 columns in 520 ms: 169.78462 rec/ms, 2546.7693 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (3928 ms) and 11% processing (520 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 88288. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 90870 records from 15 columns in 573 ms: 158.58638 rec/ms, 2378.796 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3837 ms) and 12% processing (573 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 90870. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 97867 records from 15 columns in 571 ms: 171.3958 rec/ms, 2570.937 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3988 ms) and 12% processing (571 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 97867. reading next block
21/12/01 01:19:44 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:44 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 3029
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 91136 records from 15 columns in 536 ms: 170.02985 rec/ms, 2550.4478 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (4034 ms) and 11% processing (536 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 91136. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 93899 records from 15 columns in 589 ms: 159.42105 rec/ms, 2391.3157 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (3954 ms) and 12% processing (589 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 93899. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 101024 records from 15 columns in 588 ms: 171.80952 rec/ms, 2577.1428 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (4101 ms) and 12% processing (588 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 101024. reading next block
21/12/01 01:19:44 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011906814__replacecommit__REQUESTED]}
21/12/01 01:19:44 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 93984 records from 15 columns in 552 ms: 170.26086 rec/ms, 2553.913 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (4142 ms) and 11% processing (552 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 93984. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 3029
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 96928 records from 15 columns in 604 ms: 160.47682 rec/ms, 2407.1523 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (4070 ms) and 12% processing (604 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 96928. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 156 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 104181 records from 15 columns in 606 ms: 171.91585 rec/ms, 2578.7375 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (4257 ms) and 12% processing (606 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 104181. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 3029
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 96832 records from 15 columns in 567 ms: 170.77954 rec/ms, 2561.693 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (4249 ms) and 11% processing (567 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 96832. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 99957 records from 15 columns in 618 ms: 161.74272 rec/ms, 2426.1409 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (4176 ms) and 12% processing (618 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 99957. reading next block
21/12/01 01:19:44 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 118 ms. row count = 3157
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 107338 records from 15 columns in 622 ms: 172.56914 rec/ms, 2588.5369 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 87% reading (4375 ms) and 12% processing (622 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 107338. reading next block
21/12/01 01:19:44 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 2848
21/12/01 01:19:44 INFO InternalParquetRecordReader: Assembled and processed 99680 records from 15 columns in 583 ms: 170.9777 rec/ms, 2564.6655 cell/ms
21/12/01 01:19:44 INFO InternalParquetRecordReader: time spent so far 88% reading (4360 ms) and 11% processing (583 ms)
21/12/01 01:19:44 INFO InternalParquetRecordReader: at row 99680. reading next block
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 148 ms. row count = 3029
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 102986 records from 15 columns in 635 ms: 162.18268 rec/ms, 2432.7402 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4324 ms) and 12% processing (635 ms)
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 102986. reading next block
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3157
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 110495 records from 15 columns in 639 ms: 172.91862 rec/ms, 2593.7793 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4487 ms) and 12% processing (639 ms)
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 110495. reading next block
21/12/01 01:19:45 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:45 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 158 ms. row count = 2848
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 102528 records from 15 columns in 599 ms: 171.16527 rec/ms, 2567.4792 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (4518 ms) and 11% processing (599 ms)
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 102528. reading next block
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 3029
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 106015 records from 15 columns in 651 ms: 162.84946 rec/ms, 2442.742 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4453 ms) and 12% processing (651 ms)
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 106015. reading next block
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 3157
21/12/01 01:19:45 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011906814__replacecommit__REQUESTED]}
21/12/01 01:19:45 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 113652 records from 15 columns in 668 ms: 170.13773 rec/ms, 2552.066 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4595 ms) and 12% processing (668 ms)
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 113652. reading next block
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 2848
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 105376 records from 15 columns in 615 ms: 171.3431 rec/ms, 2570.1462 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (4640 ms) and 11% processing (615 ms)
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 105376. reading next block
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3029
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 109044 records from 15 columns in 668 ms: 163.23952 rec/ms, 2448.5928 cell/ms
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4576 ms) and 12% processing (668 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 109044. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3157 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 116809 records from 15 columns in 684 ms: 170.77339 rec/ms, 2561.6008 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4706 ms) and 12% processing (684 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 116809. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 2848 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 108224 records from 15 columns in 630 ms: 171.78413 rec/ms, 2576.762 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (4753 ms) and 11% processing (630 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 108224. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3029 | |
21/12/01 01:19:45 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 3157 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 112073 records from 15 columns in 684 ms: 163.84941 rec/ms, 2457.7412 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4690 ms) and 12% processing (684 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 112073. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 119966 records from 15 columns in 700 ms: 171.38 rec/ms, 2570.7 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4814 ms) and 12% processing (700 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 119966. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3029 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 111072 records from 15 columns in 646 ms: 171.93808 rec/ms, 2579.0713 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (4874 ms) and 11% processing (646 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 111072. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 3157 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 115102 records from 15 columns in 700 ms: 164.43143 rec/ms, 2466.4714 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4801 ms) and 12% processing (700 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 115102. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 123123 records from 15 columns in 724 ms: 170.05939 rec/ms, 2550.8909 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4922 ms) and 12% processing (724 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 123123. reading next block | |
21/12/01 01:19:45 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:19:45 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 3029 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 113920 records from 15 columns in 662 ms: 172.0846 rec/ms, 2581.2688 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (4983 ms) and 11% processing (662 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 113920. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 118131 records from 15 columns in 716 ms: 164.98743 rec/ms, 2474.8115 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (4907 ms) and 12% processing (716 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 118131. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3157 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 126280 records from 15 columns in 739 ms: 170.87956 rec/ms, 2563.1936 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (5032 ms) and 12% processing (739 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 126280. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 2848 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 116768 records from 15 columns in 678 ms: 172.22418 rec/ms, 2583.3628 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (5089 ms) and 11% processing (678 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 116768. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 3029 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 121160 records from 15 columns in 734 ms: 165.06812 rec/ms, 2476.0217 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (5024 ms) and 12% processing (734 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 121160. reading next block | |
21/12/01 01:19:45 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 153 ms. row count = 3157 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 129437 records from 15 columns in 755 ms: 171.43973 rec/ms, 2571.596 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (5185 ms) and 12% processing (755 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 129437. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 2848 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: block read in memory in 105 ms. row count = 3029 | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 119616 records from 15 columns in 699 ms: 171.12447 rec/ms, 2566.867 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 88% reading (5197 ms) and 11% processing (699 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 119616. reading next block | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: Assembled and processed 124189 records from 15 columns in 763 ms: 162.76408 rec/ms, 2441.4614 cell/ms | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: time spent so far 87% reading (5129 ms) and 12% processing (763 ms) | |
21/12/01 01:19:45 INFO InternalParquetRecordReader: at row 124189. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 132594 records from 15 columns in 771 ms: 171.97665 rec/ms, 2579.65 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5306 ms) and 12% processing (771 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 132594. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 2848 | |
21/12/01 01:19:46 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:19:46 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST | |
21/12/01 01:19:46 INFO FileSystemViewManager: Creating remote first table view | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 122464 records from 15 columns in 716 ms: 171.03911 rec/ms, 2565.5867 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (5308 ms) and 11% processing (716 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 122464. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 142 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 127218 records from 15 columns in 779 ms: 163.30937 rec/ms, 2449.6406 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5271 ms) and 12% processing (779 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 127218. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 125 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 135751 records from 15 columns in 788 ms: 172.27284 rec/ms, 2584.0925 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5431 ms) and 12% processing (788 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 135751. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 2848 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 125312 records from 15 columns in 731 ms: 171.42545 rec/ms, 2571.3816 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (5421 ms) and 11% processing (731 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 125312. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 138 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 130247 records from 15 columns in 795 ms: 163.8327 rec/ms, 2457.4905 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5409 ms) and 12% processing (795 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 130247. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 115 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 138908 records from 15 columns in 803 ms: 172.9863 rec/ms, 2594.7944 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5546 ms) and 12% processing (803 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 138908. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 185 ms. row count = 2848 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 128160 records from 15 columns in 747 ms: 171.56627 rec/ms, 2573.494 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (5606 ms) and 11% processing (747 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 128160. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 133276 records from 15 columns in 810 ms: 164.53827 rec/ms, 2468.074 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5525 ms) and 12% processing (810 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 133276. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 142065 records from 15 columns in 816 ms: 174.09926 rec/ms, 2611.489 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5655 ms) and 12% processing (816 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 142065. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 131008 records from 15 columns in 763 ms: 171.70119 rec/ms, 2575.5176 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (5727 ms) and 11% processing (763 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 131008. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 136305 records from 15 columns in 826 ms: 165.01816 rec/ms, 2475.2725 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5634 ms) and 12% processing (826 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 136305. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 145222 records from 15 columns in 831 ms: 174.75572 rec/ms, 2621.3357 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5766 ms) and 12% processing (831 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 145222. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 2848 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 133856 records from 15 columns in 778 ms: 172.0514 rec/ms, 2580.7712 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (5839 ms) and 11% processing (778 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 133856. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 139334 records from 15 columns in 842 ms: 165.47981 rec/ms, 2482.1973 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5745 ms) and 12% processing (842 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 139334. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 148379 records from 15 columns in 846 ms: 175.38889 rec/ms, 2630.8333 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5875 ms) and 12% processing (846 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 148379. reading next block | |
21/12/01 01:19:46 INFO AbstractHoodieWriteClient: Generate a new instant time: 20211201011944112 action: commit | |
21/12/01 01:19:46 INFO HoodieActiveTimeline: Creating a new instant [==>20211201011944112__commit__REQUESTED] | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 2848 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 118 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3157 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 136704 records from 15 columns in 804 ms: 170.02985 rec/ms, 2550.4478 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (5945 ms) and 11% processing (804 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 136704. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 142363 records from 15 columns in 860 ms: 165.53838 rec/ms, 2483.0757 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5863 ms) and 12% processing (860 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 142363. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 151536 records from 15 columns in 863 ms: 175.59212 rec/ms, 2633.8818 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5985 ms) and 12% processing (863 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 151536. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 105 ms. row count = 2848 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3029 | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 139552 records from 15 columns in 818 ms: 170.60147 rec/ms, 2559.022 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 88% reading (6050 ms) and 11% processing (818 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 139552. reading next block | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: Assembled and processed 145392 records from 15 columns in 876 ms: 165.9726 rec/ms, 2489.589 cell/ms | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: time spent so far 87% reading (5977 ms) and 12% processing (876 ms) | |
21/12/01 01:19:46 INFO InternalParquetRecordReader: at row 145392. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 184 ms. row count = 3157 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 154693 records from 15 columns in 880 ms: 175.7875 rec/ms, 2636.8125 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6169 ms) and 12% processing (880 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 154693. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 2848 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 142400 records from 15 columns in 833 ms: 170.94838 rec/ms, 2564.2256 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6160 ms) and 11% processing (833 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 142400. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3029 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 148421 records from 15 columns in 890 ms: 166.76517 rec/ms, 2501.4775 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6087 ms) and 12% processing (890 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 148421. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 3157 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 157850 records from 15 columns in 897 ms: 175.97548 rec/ms, 2639.632 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6288 ms) and 12% processing (897 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 157850. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848 | |
21/12/01 01:19:47 INFO DeltaSync: Starting commit : 20211201011944112 | |
21/12/01 01:19:47 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 145248 records from 15 columns in 848 ms: 171.28302 rec/ms, 2569.2454 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6269 ms) and 11% processing (848 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 145248. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 151450 records from 15 columns in 905 ms: 167.34807 rec/ms, 2510.221 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6199 ms) and 12% processing (905 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 151450. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3157 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 161007 records from 15 columns in 913 ms: 176.3494 rec/ms, 2645.241 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6402 ms) and 12% processing (913 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 161007. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 2848 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 148096 records from 15 columns in 862 ms: 171.8051 rec/ms, 2577.0767 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6383 ms) and 11% processing (862 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 148096. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 173 ms. row count = 3029 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 154479 records from 15 columns in 921 ms: 167.72964 rec/ms, 2515.9446 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6372 ms) and 12% processing (921 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 154479. reading next block | |
21/12/01 01:19:47 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 115 ms. row count = 3157 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 2848 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 164164 records from 15 columns in 931 ms: 176.33083 rec/ms, 2644.9624 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6517 ms) and 12% processing (931 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 164164. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 150944 records from 15 columns in 877 ms: 172.11403 rec/ms, 2581.7104 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6499 ms) and 11% processing (877 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 150944. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 3029 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 157508 records from 15 columns in 934 ms: 168.63812 rec/ms, 2529.5718 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6501 ms) and 12% processing (934 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 157508. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 3157 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 167321 records from 15 columns in 957 ms: 174.83908 rec/ms, 2622.5862 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6634 ms) and 12% processing (957 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 167321. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 136 ms. row count = 2848 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 153792 records from 15 columns in 889 ms: 172.99437 rec/ms, 2594.9155 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6635 ms) and 11% processing (889 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 153792. reading next block | |
21/12/01 01:19:47 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:19:47 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 160537 records from 15 columns in 947 ms: 169.52165 rec/ms, 2542.8247 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6613 ms) and 12% processing (947 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 160537. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3157 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 170478 records from 15 columns in 970 ms: 175.75052 rec/ms, 2636.2578 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6746 ms) and 12% processing (970 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 170478. reading next block | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 2848 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 156640 records from 15 columns in 902 ms: 173.65854 rec/ms, 2604.878 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6755 ms) and 11% processing (902 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 156640. reading next block | |
21/12/01 01:19:47 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__REQUESTED]} | |
21/12/01 01:19:47 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3029 | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 163566 records from 15 columns in 960 ms: 170.38126 rec/ms, 2555.7188 cell/ms | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6722 ms) and 12% processing (960 ms) | |
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 163566. reading next block
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3157
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 173635 records from 15 columns in 984 ms: 176.45833 rec/ms, 2646.875 cell/ms
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6858 ms) and 12% processing (984 ms)
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 173635. reading next block
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 105 ms. row count = 2848
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 159488 records from 15 columns in 917 ms: 173.92366 rec/ms, 2608.855 cell/ms
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6860 ms) and 11% processing (917 ms)
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 159488. reading next block
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 166595 records from 15 columns in 973 ms: 171.21788 rec/ms, 2568.2683 cell/ms
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6834 ms) and 12% processing (973 ms)
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 166595. reading next block
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3157
21/12/01 01:19:47 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 176792 records from 15 columns in 997 ms: 177.32397 rec/ms, 2659.8596 cell/ms
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 87% reading (6967 ms) and 12% processing (997 ms)
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 176792. reading next block
21/12/01 01:19:47 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 2848
21/12/01 01:19:47 INFO InternalParquetRecordReader: Assembled and processed 162336 records from 15 columns in 930 ms: 174.55484 rec/ms, 2618.3225 cell/ms
21/12/01 01:19:47 INFO InternalParquetRecordReader: time spent so far 88% reading (6966 ms) and 11% processing (930 ms)
21/12/01 01:19:47 INFO InternalParquetRecordReader: at row 162336. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 169624 records from 15 columns in 986 ms: 172.03246 rec/ms, 2580.4868 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (6941 ms) and 12% processing (986 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 169624. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 136 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 165184 records from 15 columns in 943 ms: 175.16861 rec/ms, 2627.529 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7073 ms) and 11% processing (943 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 165184. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 179949 records from 15 columns in 1010 ms: 178.16733 rec/ms, 2672.51 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7103 ms) and 12% processing (1010 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 179949. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 172653 records from 15 columns in 1000 ms: 172.653 rec/ms, 2589.795 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7050 ms) and 12% processing (1000 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 172653. reading next block
21/12/01 01:19:48 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:19:48 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 168032 records from 15 columns in 957 ms: 175.58203 rec/ms, 2633.7305 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7187 ms) and 11% processing (957 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 168032. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 183106 records from 15 columns in 1026 ms: 178.46588 rec/ms, 2676.9883 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7219 ms) and 12% processing (1026 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 183106. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 175682 records from 15 columns in 1016 ms: 172.91536 rec/ms, 2593.7302 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7167 ms) and 12% processing (1016 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 175682. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 170880 records from 15 columns in 971 ms: 175.98352 rec/ms, 2639.753 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7295 ms) and 11% processing (971 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 170880. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 186263 records from 15 columns in 1040 ms: 179.09904 rec/ms, 2686.4856 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7338 ms) and 12% processing (1040 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 186263. reading next block
21/12/01 01:19:48 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 137 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 178711 records from 15 columns in 1029 ms: 173.67444 rec/ms, 2605.1167 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7304 ms) and 12% processing (1029 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 178711. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 173728 records from 15 columns in 984 ms: 176.55284 rec/ms, 2648.2927 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7404 ms) and 11% processing (984 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 173728. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 189420 records from 15 columns in 1054 ms: 179.71536 rec/ms, 2695.7305 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7448 ms) and 12% processing (1054 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 189420. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 181740 records from 15 columns in 1045 ms: 173.91388 rec/ms, 2608.7083 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7426 ms) and 12% processing (1045 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 181740. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 176576 records from 15 columns in 998 ms: 176.92986 rec/ms, 2653.948 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7517 ms) and 11% processing (998 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 176576. reading next block
21/12/01 01:19:48 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:19:48 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST
21/12/01 01:19:48 INFO FileSystemViewManager: Creating remote first table view
21/12/01 01:19:48 INFO FileSystemViewManager: Creating remote view for basePath s3a://hudi-testing/test_hoodie_table_2. Server=192.168.1.48:56507, Timeout=300
21/12/01 01:19:48 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 192577 records from 15 columns in 1067 ms: 180.48454 rec/ms, 2707.268 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7565 ms) and 12% processing (1067 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 192577. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 112 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 184769 records from 15 columns in 1059 ms: 174.47498 rec/ms, 2617.1248 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7538 ms) and 12% processing (1059 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 184769. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 136 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 179424 records from 15 columns in 1018 ms: 176.25148 rec/ms, 2643.7722 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7653 ms) and 11% processing (1018 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 179424. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 195734 records from 15 columns in 1083 ms: 180.73315 rec/ms, 2710.9973 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7676 ms) and 12% processing (1083 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 195734. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 110 ms. row count = 3029
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 187798 records from 15 columns in 1072 ms: 175.18471 rec/ms, 2627.7705 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7648 ms) and 12% processing (1072 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 187798. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 2848
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 3157
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 182272 records from 15 columns in 1031 ms: 176.79146 rec/ms, 2651.872 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 88% reading (7764 ms) and 11% processing (1031 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 182272. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: Assembled and processed 198891 records from 15 columns in 1097 ms: 181.30447 rec/ms, 2719.567 cell/ms
21/12/01 01:19:48 INFO InternalParquetRecordReader: time spent so far 87% reading (7789 ms) and 12% processing (1097 ms)
21/12/01 01:19:48 INFO InternalParquetRecordReader: at row 198891. reading next block
21/12/01 01:19:48 INFO InternalParquetRecordReader: block read in memory in 135 ms. row count = 3029
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 190827 records from 15 columns in 1085 ms: 175.87743 rec/ms, 2638.1614 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 87% reading (7783 ms) and 12% processing (1085 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 190827. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 2848
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 738
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 185120 records from 15 columns in 1044 ms: 177.31801 rec/ms, 2659.77 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 88% reading (7871 ms) and 11% processing (1044 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 185120. reading next block
21/12/01 01:19:49 INFO Executor: Finished task 0.0 in stage 1159.0 (TID 2188). 1000 bytes result sent to driver
21/12/01 01:19:49 INFO TaskSetManager: Finished task 0.0 in stage 1159.0 (TID 2188) in 10220 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:49 INFO TaskSchedulerImpl: Removed TaskSet 1159.0, whose tasks have all completed, from pool
21/12/01 01:19:49 INFO DAGScheduler: ShuffleMapStage 1159 (sortBy at GlobalSortPartitioner.java:41) finished in 10.283 s
21/12/01 01:19:49 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:19:49 INFO DAGScheduler: running: Set(ShuffleMapStage 1160, ShuffleMapStage 1161)
21/12/01 01:19:49 INFO DAGScheduler: waiting: Set(ResultStage 1162)
21/12/01 01:19:49 INFO DAGScheduler: failed: Set()
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3029
21/12/01 01:19:49 INFO BlockManagerInfo: Removed broadcast_1073_piece0 on 192.168.1.48:56496 in memory (size: 179.3 KiB, free: 365.7 MiB)
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 2848
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 193856 records from 15 columns in 1100 ms: 176.23273 rec/ms, 2643.491 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 87% reading (7894 ms) and 12% processing (1100 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 193856. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 187968 records from 15 columns in 1058 ms: 177.66351 rec/ms, 2664.9526 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 88% reading (7982 ms) and 11% processing (1058 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 187968. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 111 ms. row count = 3029
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 196885 records from 15 columns in 1113 ms: 176.89578 rec/ms, 2653.4368 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 87% reading (8005 ms) and 12% processing (1113 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 196885. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 190816 records from 15 columns in 1070 ms: 178.33272 rec/ms, 2674.9907 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 88% reading (8091 ms) and 11% processing (1070 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 190816. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 3029
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 113 ms. row count = 2848
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 199914 records from 15 columns in 1125 ms: 177.70134 rec/ms, 2665.52 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 87% reading (8119 ms) and 12% processing (1125 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 199914. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 193664 records from 15 columns in 1082 ms: 178.98706 rec/ms, 2684.806 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 88% reading (8204 ms) and 11% processing (1082 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 193664. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 106 ms. row count = 263
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 109 ms. row count = 2848
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 196512 records from 15 columns in 1095 ms: 179.46301 rec/ms, 2691.9453 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 88% reading (8313 ms) and 11% processing (1095 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 196512. reading next block
21/12/01 01:19:49 INFO Executor: Finished task 0.0 in stage 1161.0 (TID 2190). 1000 bytes result sent to driver
21/12/01 01:19:49 INFO TaskSetManager: Finished task 0.0 in stage 1161.0 (TID 2190) in 10623 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:49 INFO TaskSchedulerImpl: Removed TaskSet 1161.0, whose tasks have all completed, from pool
21/12/01 01:19:49 INFO DAGScheduler: ShuffleMapStage 1161 (sortBy at GlobalSortPartitioner.java:41) finished in 10.684 s
21/12/01 01:19:49 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:19:49 INFO DAGScheduler: running: Set(ShuffleMapStage 1160)
21/12/01 01:19:49 INFO DAGScheduler: waiting: Set(ResultStage 1162)
21/12/01 01:19:49 INFO DAGScheduler: failed: Set()
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 107 ms. row count = 2848
21/12/01 01:19:49 INFO InternalParquetRecordReader: Assembled and processed 199360 records from 15 columns in 1109 ms: 179.76555 rec/ms, 2696.4834 cell/ms
21/12/01 01:19:49 INFO InternalParquetRecordReader: time spent so far 88% reading (8420 ms) and 11% processing (1109 ms)
21/12/01 01:19:49 INFO InternalParquetRecordReader: at row 199360. reading next block
21/12/01 01:19:49 INFO InternalParquetRecordReader: block read in memory in 108 ms. row count = 834
21/12/01 01:19:49 INFO Executor: Finished task 0.0 in stage 1160.0 (TID 2189). 1000 bytes result sent to driver
21/12/01 01:19:49 INFO TaskSetManager: Finished task 0.0 in stage 1160.0 (TID 2189) in 10949 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:19:49 INFO TaskSchedulerImpl: Removed TaskSet 1160.0, whose tasks have all completed, from pool
21/12/01 01:19:49 INFO DAGScheduler: ShuffleMapStage 1160 (sortBy at GlobalSortPartitioner.java:41) finished in 11.013 s
21/12/01 01:19:49 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:19:49 INFO DAGScheduler: running: Set()
21/12/01 01:19:49 INFO DAGScheduler: waiting: Set(ResultStage 1162)
21/12/01 01:19:49 INFO DAGScheduler: failed: Set()
21/12/01 01:19:49 INFO DAGScheduler: Submitting ResultStage 1162 (MapPartitionsRDD[2638] at map at SparkExecuteClusteringCommitActionExecutor.java:85), which has no missing parents
21/12/01 01:19:49 INFO MemoryStore: Block broadcast_1080 stored as values in memory (estimated size 553.9 KiB, free 363.3 MiB)
21/12/01 01:19:49 INFO MemoryStore: Block broadcast_1080_piece0 stored as bytes in memory (estimated size 189.5 KiB, free 363.1 MiB)
21/12/01 01:19:49 INFO BlockManagerInfo: Added broadcast_1080_piece0 in memory on 192.168.1.48:56496 (size: 189.5 KiB, free: 365.5 MiB)
21/12/01 01:19:49 INFO SparkContext: Created broadcast 1080 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:49 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 1162 (MapPartitionsRDD[2638] at map at SparkExecuteClusteringCommitActionExecutor.java:85) (first 15 tasks are for partitions Vector(0, 1, 2))
21/12/01 01:19:49 INFO TaskSchedulerImpl: Adding task set 1162.0 with 3 tasks resource profile 0
21/12/01 01:19:49 INFO TaskSetManager: Starting task 0.0 in stage 1162.0 (TID 2194) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4380 bytes) taskResourceAssignments Map()
21/12/01 01:19:49 INFO TaskSetManager: Starting task 1.0 in stage 1162.0 (TID 2195) (192.168.1.48, executor driver, partition 1, NODE_LOCAL, 4380 bytes) taskResourceAssignments Map()
21/12/01 01:19:49 INFO TaskSetManager: Starting task 2.0 in stage 1162.0 (TID 2196) (192.168.1.48, executor driver, partition 2, NODE_LOCAL, 4380 bytes) taskResourceAssignments Map()
21/12/01 01:19:49 INFO Executor: Running task 1.0 in stage 1162.0 (TID 2195)
21/12/01 01:19:49 INFO Executor: Running task 2.0 in stage 1162.0 (TID 2196)
21/12/01 01:19:49 INFO Executor: Running task 0.0 in stage 1162.0 (TID 2194)
21/12/01 01:19:49 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:19:49 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:19:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:19:49 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:19:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:19:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:19:49 INFO BlockManagerInfo: Removed broadcast_1075_piece0 on 192.168.1.48:56496 in memory (size: 179.3 KiB, free: 365.7 MiB)
21/12/01 01:19:50 INFO BlockManagerInfo: Removed broadcast_1074_piece0 on 192.168.1.48:56496 in memory (size: 179.3 KiB, free: 365.9 MiB)
21/12/01 01:19:50 INFO BlockManagerInfo: Removed broadcast_1047_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.9 MiB)
21/12/01 01:19:50 INFO BlockManagerInfo: Removed broadcast_1014_piece0 on 192.168.1.48:56496 on disk (size: 34.5 KiB)
21/12/01 01:19:50 INFO BlockManager: Removing RDD 2477
21/12/01 01:19:50 INFO BlockManagerInfo: Removed broadcast_1046_piece0 on 192.168.1.48:56496 in memory (size: 189.5 KiB, free: 366.1 MiB)
21/12/01 01:19:50 INFO BlockManager: Removing RDD 2549
21/12/01 01:19:50 INFO AbstractTableFileSystemView: Took 1942 ms to read 8 instants, 63 replaced file groups
21/12/01 01:19:50 INFO ExternalSorter: Thread 14195 spilling in-memory map of 126.8 MiB to disk (1 time so far)
21/12/01 01:19:50 INFO ExternalSorter: Thread 12437 spilling in-memory map of 129.0 MiB to disk (1 time so far)
21/12/01 01:19:50 INFO ExternalSorter: Thread 12438 spilling in-memory map of 126.7 MiB to disk (1 time so far)
21/12/01 01:19:51 INFO ClusteringUtils: Found 12 files in pending clustering operations
21/12/01 01:19:51 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=8c54ec0d1a4e68f75130fd2202a0fdc5813de8cf68a82f3ff3d8a3eb411ea167)
21/12/01 01:19:52 INFO ExternalSorter: Thread 14195 spilling in-memory map of 126.8 MiB to disk (2 times so far)
21/12/01 01:19:52 INFO ExternalSorter: Thread 12438 spilling in-memory map of 126.7 MiB to disk (2 times so far)
21/12/01 01:19:52 INFO ExternalSorter: Thread 12437 spilling in-memory map of 129.0 MiB to disk (2 times so far)
21/12/01 01:19:53 INFO IteratorBasedQueueProducer: starting to buffer records
21/12/01 01:19:53 INFO BoundedInMemoryExecutor: starting consumer thread
21/12/01 01:19:53 INFO IteratorBasedQueueProducer: starting to buffer records
21/12/01 01:19:53 INFO BoundedInMemoryExecutor: starting consumer thread
21/12/01 01:19:53 INFO IteratorBasedQueueProducer: starting to buffer records
21/12/01 01:19:53 INFO BoundedInMemoryExecutor: starting consumer thread
21/12/01 01:19:53 INFO AbstractTableFileSystemView: Took 2210 ms to read 8 instants, 63 replaced file groups
21/12/01 01:19:53 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=americas%2Funited_states%2Fsan_francisco%2Ff67453ae-b123-452c-85f7-1c49289786be-0_1-1162-2195_20211201010227744.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201010227744)
21/12/01 01:19:53 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201010227744 americas/united_states/san_francisco/f67453ae-b123-452c-85f7-1c49289786be-0_1-1162-2195_20211201010227744.parquet.marker.CREATE
21/12/01 01:19:53 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=asia%2Findia%2Fchennai%2F1f337654-ab8a-46eb-92d5-87c4b70a7864-0_2-1162-2196_20211201010227744.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201010227744)
21/12/01 01:19:53 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201010227744 asia/india/chennai/1f337654-ab8a-46eb-92d5-87c4b70a7864-0_2-1162-2196_20211201010227744.parquet.marker.CREATE
21/12/01 01:19:53 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=americas%2Fbrazil%2Fsao_paulo%2F354cffd7-15cd-4805-91bd-751cd2f50027-0_0-1162-2194_20211201010227744.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201010227744)
21/12/01 01:19:53 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201010227744 americas/brazil/sao_paulo/354cffd7-15cd-4805-91bd-751cd2f50027-0_0-1162-2194_20211201010227744.parquet.marker.CREATE
21/12/01 01:19:54 INFO ClusteringUtils: Found 12 files in pending clustering operations
21/12/01 01:19:54 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=8c54ec0d1a4e68f75130fd2202a0fdc5813de8cf68a82f3ff3d8a3eb411ea167)
21/12/01 01:19:56 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file americas/united_states/san_francisco/f67453ae-b123-452c-85f7-1c49289786be-0_1-1162-2195_20211201010227744.parquet.marker.CREATE in 2480 ms
21/12/01 01:19:56 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file americas/brazil/sao_paulo/354cffd7-15cd-4805-91bd-751cd2f50027-0_0-1162-2194_20211201010227744.parquet.marker.CREATE in 2434 ms
21/12/01 01:19:56 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file asia/india/chennai/1f337654-ab8a-46eb-92d5-87c4b70a7864-0_2-1162-2196_20211201010227744.parquet.marker.CREATE in 2478 ms
21/12/01 01:19:56 INFO AbstractTableFileSystemView: Took 1749 ms to read 8 instants, 63 replaced file groups
21/12/01 01:19:56 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:19:56 INFO HoodieCreateHandle: New CreateHandle for partition :americas/united_states/san_francisco with fileId f67453ae-b123-452c-85f7-1c49289786be-0
21/12/01 01:19:56 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:19:56 INFO HoodieCreateHandle: New CreateHandle for partition :americas/brazil/sao_paulo with fileId 354cffd7-15cd-4805-91bd-751cd2f50027-0
21/12/01 01:19:56 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:19:56 INFO HoodieCreateHandle: New CreateHandle for partition :asia/india/chennai with fileId 1f337654-ab8a-46eb-92d5-87c4b70a7864-0
21/12/01 01:19:57 INFO ClusteringUtils: Found 12 files in pending clustering operations
21/12/01 01:19:57 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now
21/12/01 01:19:57 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201011944112.commit.requested
21/12/01 01:19:58 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201011944112.inflight
21/12/01 01:19:58 INFO BaseSparkCommitActionExecutor: no validators configured.
21/12/01 01:19:58 INFO BaseCommitActionExecutor: Auto commit disabled for 20211201011944112
21/12/01 01:19:58 INFO SparkContext: Starting job: sum at DeltaSync.java:519
21/12/01 01:19:58 INFO DAGScheduler: Registering RDD 2648 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 259
21/12/01 01:19:58 INFO DAGScheduler: Got job 793 (sum at DeltaSync.java:519) with 1 output partitions
21/12/01 01:19:58 INFO DAGScheduler: Final stage: ResultStage 1167 (sum at DeltaSync.java:519)
21/12/01 01:19:58 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1166)
21/12/01 01:19:58 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1166)
21/12/01 01:19:58 INFO DAGScheduler: Submitting ShuffleMapStage 1166 (MapPartitionsRDD[2648] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents
21/12/01 01:19:58 INFO MemoryStore: Block broadcast_1081 stored as values in memory (estimated size 53.5 KiB, free 277.2 MiB)
21/12/01 01:19:58 INFO MemoryStore: Block broadcast_1081_piece0 stored as bytes in memory (estimated size 19.8 KiB, free 277.2 MiB)
21/12/01 01:19:58 INFO BlockManagerInfo: Added broadcast_1081_piece0 in memory on 192.168.1.48:56496 (size: 19.8 KiB, free: 366.1 MiB)
21/12/01 01:19:58 INFO SparkContext: Created broadcast 1081 from broadcast at DAGScheduler.scala:1427
21/12/01 01:19:58 INFO DAGScheduler: Submitting 12 missing tasks from ShuffleMapStage 1166 (MapPartitionsRDD[2648] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11))
21/12/01 01:19:58 INFO TaskSchedulerImpl: Adding task set 1166.0 with 12 tasks resource profile 0
21/12/01 01:19:58 INFO TaskSetManager: Starting task 0.0 in stage 1166.0 (TID 2197) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 1.0 in stage 1166.0 (TID 2198) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 2.0 in stage 1166.0 (TID 2199) (192.168.1.48, executor driver, partition 2, PROCESS_LOCAL, 4919 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 3.0 in stage 1166.0 (TID 2200) (192.168.1.48, executor driver, partition 3, PROCESS_LOCAL, 4919 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 4.0 in stage 1166.0 (TID 2201) (192.168.1.48, executor driver, partition 4, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 5.0 in stage 1166.0 (TID 2202) (192.168.1.48, executor driver, partition 5, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 6.0 in stage 1166.0 (TID 2203) (192.168.1.48, executor driver, partition 6, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 7.0 in stage 1166.0 (TID 2204) (192.168.1.48, executor driver, partition 7, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO TaskSetManager: Starting task 8.0 in stage 1166.0 (TID 2205) (192.168.1.48, executor driver, partition 8, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map()
21/12/01 01:19:58 INFO Executor: Running task 0.0 in stage 1166.0 (TID 2197)
21/12/01 01:19:58 INFO Executor: Running task 1.0 in stage 1166.0 (TID 2198)
21/12/01 01:19:58 INFO Executor: Running task 3.0 in stage 1166.0 (TID 2200)
21/12/01 01:19:58 INFO Executor: Running task 4.0 in stage 1166.0 (TID 2201)
21/12/01 01:19:58 INFO Executor: Running task 2.0 in stage 1166.0 (TID 2199)
21/12/01 01:19:58 INFO Executor: Running task 6.0 in stage 1166.0 (TID 2203)
21/12/01 01:19:58 INFO Executor: Running task 5.0 in stage 1166.0 (TID 2202)
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/0/part-00000-77af58d1-5537-4ea7-9484-65582d12e634-c000.snappy.parquet, range: 0-6215407, partition values: [empty row]
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/9/part-00000-4086954f-ba1f-40e6-85b7-e5db055d25cf-c000.snappy.parquet, range: 0-6215407, partition values: [empty row]
21/12/01 01:19:58 INFO Executor: Running task 7.0 in stage 1166.0 (TID 2204)
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/11/part-00000-64714cf0-e178-46f6-866f-9046e634cef9-c000.snappy.parquet, range: 0-6215407, partition values: [empty row]
21/12/01 01:19:58 INFO Executor: Running task 8.0 in stage 1166.0 (TID 2205)
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/1/part-00000-f55cf6f4-8118-4d70-886a-bcb1f7d1f9bb-c000.snappy.parquet, range: 6215407-8237960, partition values: [empty row]
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/1/part-00000-f55cf6f4-8118-4d70-886a-bcb1f7d1f9bb-c000.snappy.parquet, range: 0-6215407, partition values: [empty row]
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/0/part-00000-77af58d1-5537-4ea7-9484-65582d12e634-c000.snappy.parquet, range: 6215407-8236713, partition values: [empty row]
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/8/part-00000-feb37af9-4c18-4a76-aac0-6dbacb7c1464-c000.snappy.parquet, range: 0-6215407, partition values: [empty row] | |
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/10/part-00000-f8538772-637a-4853-8540-b757dde9c246-c000.snappy.parquet, range: 0-6215407, partition values: [empty row] | |
21/12/01 01:19:58 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/8/part-00000-feb37af9-4c18-4a76-aac0-6dbacb7c1464-c000.snappy.parquet, range: 6215407-8236953, partition values: [empty row] | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:58 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO Executor: Finished task 7.0 in stage 1166.0 (TID 2204). 1460 bytes result sent to driver | |
21/12/01 01:19:59 INFO TaskSetManager: Starting task 9.0 in stage 1166.0 (TID 2206) (192.168.1.48, executor driver, partition 9, PROCESS_LOCAL, 4918 bytes) taskResourceAssignments Map() | |
21/12/01 01:19:59 INFO TaskSetManager: Finished task 7.0 in stage 1166.0 (TID 2204) in 1224 ms on 192.168.1.48 (executor driver) (1/12) | |
21/12/01 01:19:59 INFO Executor: Running task 9.0 in stage 1166.0 (TID 2206) | |
21/12/01 01:19:59 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/9/part-00000-4086954f-ba1f-40e6-85b7-e5db055d25cf-c000.snappy.parquet, range: 6215407-8236547, partition values: [empty row] | |
21/12/01 01:19:59 INFO Executor: Finished task 6.0 in stage 1166.0 (TID 2203). 1460 bytes result sent to driver | |
21/12/01 01:19:59 INFO TaskSetManager: Starting task 10.0 in stage 1166.0 (TID 2207) (192.168.1.48, executor driver, partition 10, PROCESS_LOCAL, 4919 bytes) taskResourceAssignments Map() | |
21/12/01 01:19:59 INFO TaskSetManager: Finished task 6.0 in stage 1166.0 (TID 2203) in 1323 ms on 192.168.1.48 (executor driver) (2/12) | |
21/12/01 01:19:59 INFO Executor: Running task 10.0 in stage 1166.0 (TID 2207) | |
21/12/01 01:19:59 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/11/part-00000-64714cf0-e178-46f6-866f-9046e634cef9-c000.snappy.parquet, range: 6215407-8235534, partition values: [empty row] | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:19:59 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:20:00 INFO Executor: Finished task 8.0 in stage 1166.0 (TID 2205). 1460 bytes result sent to driver | |
21/12/01 01:20:00 INFO TaskSetManager: Starting task 11.0 in stage 1166.0 (TID 2208) (192.168.1.48, executor driver, partition 11, PROCESS_LOCAL, 4919 bytes) taskResourceAssignments Map() | |
21/12/01 01:20:00 INFO TaskSetManager: Finished task 8.0 in stage 1166.0 (TID 2205) in 1844 ms on 192.168.1.48 (executor driver) (3/12) | |
21/12/01 01:20:00 INFO Executor: Running task 11.0 in stage 1166.0 (TID 2208) | |
21/12/01 01:20:00 INFO FileScanRDD: Reading File path: s3a://hudi-testing/test_input_data/10/part-00000-f8538772-637a-4853-8540-b757dde9c246-c000.snappy.parquet, range: 6215407-8235357, partition values: [empty row] | |
21/12/01 01:20:00 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:20:00 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:20:00 INFO Executor: Finished task 9.0 in stage 1166.0 (TID 2206). 1460 bytes result sent to driver | |
21/12/01 01:20:00 INFO TaskSetManager: Finished task 9.0 in stage 1166.0 (TID 2206) in 1053 ms on 192.168.1.48 (executor driver) (4/12) | |
21/12/01 01:20:00 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:20:00 INFO Executor: Finished task 10.0 in stage 1166.0 (TID 2207). 1460 bytes result sent to driver | |
21/12/01 01:20:00 INFO TaskSetManager: Finished task 10.0 in stage 1166.0 (TID 2207) in 1058 ms on 192.168.1.48 (executor driver) (5/12) | |
21/12/01 01:20:01 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:20:01 INFO Executor: Finished task 11.0 in stage 1166.0 (TID 2208). 1460 bytes result sent to driver | |
21/12/01 01:20:01 INFO TaskSetManager: Finished task 11.0 in stage 1166.0 (TID 2208) in 1084 ms on 192.168.1.48 (executor driver) (6/12) | |
21/12/01 01:20:01 INFO Executor: Finished task 4.0 in stage 1166.0 (TID 2201). 1632 bytes result sent to driver | |
21/12/01 01:20:01 INFO TaskSetManager: Finished task 4.0 in stage 1166.0 (TID 2201) in 3023 ms on 192.168.1.48 (executor driver) (7/12) | |
21/12/01 01:20:01 INFO Executor: Finished task 1.0 in stage 1166.0 (TID 2198). 1632 bytes result sent to driver | |
21/12/01 01:20:01 INFO TaskSetManager: Finished task 1.0 in stage 1166.0 (TID 2198) in 3113 ms on 192.168.1.48 (executor driver) (8/12) | |
21/12/01 01:20:01 INFO IteratorBasedQueueProducer: finished buffering records | |
21/12/01 01:20:01 INFO Executor: Finished task 3.0 in stage 1166.0 (TID 2200). 1632 bytes result sent to driver | |
21/12/01 01:20:01 INFO TaskSetManager: Finished task 3.0 in stage 1166.0 (TID 2200) in 3164 ms on 192.168.1.48 (executor driver) (9/12) | |
21/12/01 01:20:01 INFO HoodieCreateHandle: Closing the file 354cffd7-15cd-4805-91bd-751cd2f50027-0 as we are done with all the records 200177 | |
21/12/01 01:20:01 INFO IteratorBasedQueueProducer: finished buffering records | |
21/12/01 01:20:01 INFO HoodieCreateHandle: Closing the file f67453ae-b123-452c-85f7-1c49289786be-0 as we are done with all the records 200194 | |
21/12/01 01:20:01 INFO IteratorBasedQueueProducer: finished buffering records | |
21/12/01 01:20:01 INFO HoodieCreateHandle: Closing the file 1f337654-ab8a-46eb-92d5-87c4b70a7864-0 as we are done with all the records 199629 | |
21/12/01 01:20:02 INFO Executor: Finished task 5.0 in stage 1166.0 (TID 2202). 1632 bytes result sent to driver | |
21/12/01 01:20:02 INFO TaskSetManager: Finished task 5.0 in stage 1166.0 (TID 2202) in 3891 ms on 192.168.1.48 (executor driver) (10/12) | |
21/12/01 01:20:02 INFO Executor: Finished task 2.0 in stage 1166.0 (TID 2199). 1632 bytes result sent to driver | |
21/12/01 01:20:02 INFO TaskSetManager: Finished task 2.0 in stage 1166.0 (TID 2199) in 3911 ms on 192.168.1.48 (executor driver) (11/12) | |
21/12/01 01:20:02 INFO Executor: Finished task 0.0 in stage 1166.0 (TID 2197). 1632 bytes result sent to driver | |
21/12/01 01:20:02 INFO TaskSetManager: Finished task 0.0 in stage 1166.0 (TID 2197) in 4274 ms on 192.168.1.48 (executor driver) (12/12) | |
21/12/01 01:20:02 INFO TaskSchedulerImpl: Removed TaskSet 1166.0, whose tasks have all completed, from pool | |
21/12/01 01:20:02 INFO DAGScheduler: ShuffleMapStage 1166 (sortBy at GlobalSortPartitioner.java:41) finished in 4.276 s | |
21/12/01 01:20:02 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:20:02 INFO DAGScheduler: running: Set(ResultStage 1162) | |
21/12/01 01:20:02 INFO DAGScheduler: waiting: Set(ResultStage 1167) | |
21/12/01 01:20:02 INFO DAGScheduler: failed: Set() | |
21/12/01 01:20:02 INFO DAGScheduler: Submitting ResultStage 1167 (MapPartitionsRDD[2653] at mapToDouble at DeltaSync.java:519), which has no missing parents | |
21/12/01 01:20:02 INFO MemoryStore: Block broadcast_1082 stored as values in memory (estimated size 513.8 KiB, free 364.6 MiB) | |
21/12/01 01:20:02 INFO MemoryStore: Block broadcast_1082_piece0 stored as bytes in memory (estimated size 179.8 KiB, free 364.4 MiB) | |
21/12/01 01:20:02 INFO BlockManagerInfo: Added broadcast_1082_piece0 in memory on 192.168.1.48:56496 (size: 179.8 KiB, free: 365.9 MiB) | |
21/12/01 01:20:02 INFO SparkContext: Created broadcast 1082 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:20:02 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1167 (MapPartitionsRDD[2653] at mapToDouble at DeltaSync.java:519) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:20:02 INFO TaskSchedulerImpl: Adding task set 1167.0 with 1 tasks resource profile 0 | |
21/12/01 01:20:02 INFO TaskSetManager: Starting task 0.0 in stage 1167.0 (TID 2209) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:20:02 INFO Executor: Running task 0.0 in stage 1167.0 (TID 2209) | |
21/12/01 01:20:02 INFO ShuffleBlockFetcherIterator: Getting 6 (68.5 MiB) non-empty blocks including 6 (68.5 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:20:02 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:20:04 INFO MemoryStore: Will not store rdd_2652_0 | |
21/12/01 01:20:04 WARN MemoryStore: Failed to reserve initial memory threshold of 1024.0 KiB for computing block rdd_2652_0 in memory. | |
21/12/01 01:20:04 INFO IteratorBasedQueueProducer: starting to buffer records | |
21/12/01 01:20:04 INFO BoundedInMemoryExecutor: starting consumer thread | |
21/12/01 01:20:04 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=americas%2Fbrazil%2Fsao_paulo%2Fc6d17cfb-140b-485b-b9ef-0cec97daa7e8-0_0-1167-2209_20211201011944112.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011944112) | |
21/12/01 01:20:04 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011944112 americas/brazil/sao_paulo/c6d17cfb-140b-485b-b9ef-0cec97daa7e8-0_0-1167-2209_20211201011944112.parquet.marker.CREATE | |
21/12/01 01:20:07 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file americas/brazil/sao_paulo/c6d17cfb-140b-485b-b9ef-0cec97daa7e8-0_0-1167-2209_20211201011944112.parquet.marker.CREATE in 2971 ms | |
21/12/01 01:20:07 INFO CodecPool: Got brand-new compressor [.gz] | |
21/12/01 01:20:07 INFO HoodieCreateHandle: New CreateHandle for partition :americas/brazil/sao_paulo with fileId c6d17cfb-140b-485b-b9ef-0cec97daa7e8-0 | |
21/12/01 01:20:11 INFO HoodieCreateHandle: Closing the file c6d17cfb-140b-485b-b9ef-0cec97daa7e8-0 as we are done with all the records 200406 | |
21/12/01 01:20:33 INFO BlockManagerInfo: Removed broadcast_1081_piece0 on 192.168.1.48:56496 in memory (size: 19.8 KiB, free: 365.9 MiB) | |
21/12/01 01:20:52 INFO LruBlockCache: totalSize=396.98 KB, freeSize=376.41 MB, max=376.80 MB, blockCount=0, accesses=254, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=869, evicted=0, evictedPerRun=0.0 | |
21/12/01 01:20:55 INFO HoodieCreateHandle: CreateHandle for partitionPath asia/india/chennai fileID 1f337654-ab8a-46eb-92d5-87c4b70a7864-0, took 61671 ms. | |
21/12/01 01:20:55 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads | |
21/12/01 01:20:55 INFO MemoryStore: Block rdd_2637_2 stored as values in memory (estimated size 401.0 B, free 3.1 MiB) | |
21/12/01 01:20:55 INFO BlockManagerInfo: Added rdd_2637_2 in memory on 192.168.1.48:56496 (size: 401.0 B, free: 365.9 MiB) | |
21/12/01 01:20:55 INFO Executor: Finished task 2.0 in stage 1162.0 (TID 2196). 1572 bytes result sent to driver | |
21/12/01 01:20:55 INFO TaskSetManager: Finished task 2.0 in stage 1162.0 (TID 2196) in 65184 ms on 192.168.1.48 (executor driver) (1/3) | |
21/12/01 01:20:57 INFO HoodieCreateHandle: CreateHandle for partitionPath americas/brazil/sao_paulo fileID 354cffd7-15cd-4805-91bd-751cd2f50027-0, took 63763 ms. | |
21/12/01 01:20:57 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads | |
21/12/01 01:20:57 INFO MemoryStore: Block rdd_2637_0 stored as values in memory (estimated size 415.0 B, free 3.1 MiB) | |
21/12/01 01:20:57 INFO BlockManagerInfo: Added rdd_2637_0 in memory on 192.168.1.48:56496 (size: 415.0 B, free: 365.9 MiB) | |
21/12/01 01:20:57 INFO Executor: Finished task 0.0 in stage 1162.0 (TID 2194). 1586 bytes result sent to driver | |
21/12/01 01:20:57 INFO TaskSetManager: Finished task 0.0 in stage 1162.0 (TID 2194) in 67325 ms on 192.168.1.48 (executor driver) (2/3) | |
21/12/01 01:20:58 INFO HoodieCreateHandle: CreateHandle for partitionPath americas/brazil/sao_paulo fileID c6d17cfb-140b-485b-b9ef-0cec97daa7e8-0, took 54331 ms. | |
21/12/01 01:20:58 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=americas%2Funited_states%2Fsan_francisco%2Fc6d17cfb-140b-485b-b9ef-0cec97daa7e8-1_0-1167-2209_20211201011944112.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011944112) | |
21/12/01 01:20:58 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011944112 americas/united_states/san_francisco/c6d17cfb-140b-485b-b9ef-0cec97daa7e8-1_0-1167-2209_20211201011944112.parquet.marker.CREATE | |
21/12/01 01:20:59 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file americas/united_states/san_francisco/c6d17cfb-140b-485b-b9ef-0cec97daa7e8-1_0-1167-2209_20211201011944112.parquet.marker.CREATE in 539 ms | |
21/12/01 01:20:59 INFO HoodieCreateHandle: CreateHandle for partitionPath americas/united_states/san_francisco fileID f67453ae-b123-452c-85f7-1c49289786be-0, took 66204 ms. | |
21/12/01 01:20:59 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads | |
21/12/01 01:20:59 INFO MemoryStore: Block rdd_2637_1 stored as values in memory (estimated size 437.0 B, free 3.1 MiB) | |
21/12/01 01:20:59 INFO BlockManagerInfo: Added rdd_2637_1 in memory on 192.168.1.48:56496 (size: 437.0 B, free: 365.9 MiB) | |
21/12/01 01:20:59 INFO Executor: Finished task 1.0 in stage 1162.0 (TID 2195). 1608 bytes result sent to driver | |
21/12/01 01:20:59 INFO TaskSetManager: Finished task 1.0 in stage 1162.0 (TID 2195) in 69713 ms on 192.168.1.48 (executor driver) (3/3) | |
21/12/01 01:20:59 INFO TaskSchedulerImpl: Removed TaskSet 1162.0, whose tasks have all completed, from pool | |
21/12/01 01:20:59 INFO DAGScheduler: ResultStage 1162 (collect at SparkExecuteClusteringCommitActionExecutor.java:85) finished in 69.780 s | |
21/12/01 01:20:59 INFO DAGScheduler: Job 789 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:20:59 INFO TaskSchedulerImpl: Killing all running tasks in stage 1162: Stage finished | |
21/12/01 01:20:59 INFO DAGScheduler: Job 789 finished: collect at SparkExecuteClusteringCommitActionExecutor.java:85, took 80.857458 s | |
21/12/01 01:20:59 INFO BaseSparkCommitActionExecutor: no validators configured. | |
21/12/01 01:20:59 INFO BaseCommitActionExecutor: Auto commit disabled for 20211201010227744 | |
21/12/01 01:20:59 INFO CommitUtils: Creating metadata for CLUSTER numWriteStats:3numReplaceFileIds:3 | |
21/12/01 01:20:59 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/dir/exists?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201010227744) | |
21/12/01 01:20:59 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create-and-merge?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201010227744) | |
21/12/01 01:20:59 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:20:59 INFO CodecPool: Got brand-new compressor [.gz] | |
21/12/01 01:20:59 INFO HoodieCreateHandle: New CreateHandle for partition :americas/united_states/san_francisco with fileId c6d17cfb-140b-485b-b9ef-0cec97daa7e8-1 | |
21/12/01 01:21:00 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:00 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:00 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:00 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:00 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:01 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]} | |
21/12/01 01:21:01 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]} | |
21/12/01 01:21:01 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:01 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:01 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:01 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:01 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:02 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:02 INFO HoodieTableMetadataUtil: Updating at 20211201010227744 from Commit/CLUSTER. #partitions_updated=4 | |
21/12/01 01:21:02 INFO HoodieTableMetadataUtil: Loading file groups for metadata table partition files | |
21/12/01 01:21:02 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]} | |
21/12/01 01:21:02 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:02 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:02 INFO AbstractTableFileSystemView: Building file system view for partition (files) | |
21/12/01 01:21:02 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=12, NumFileGroups=1, FileGroupsCreationTime=1, StoreTimeTaken=0 | |
21/12/01 01:21:02 INFO AbstractHoodieClient: Embedded Timeline Server is disabled. Not starting timeline service | |
21/12/01 01:21:02 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:02 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:03 INFO HoodieCreateHandle: Closing the file c6d17cfb-140b-485b-b9ef-0cec97daa7e8-1 as we are done with all the records 199752 | |
21/12/01 01:21:03 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:03 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:03 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]} | |
21/12/01 01:21:03 INFO AbstractHoodieWriteClient: Generate a new instant time: 20211201010227744 action: deltacommit | |
21/12/01 01:21:03 INFO HoodieHeartbeatClient: Received request to start heartbeat for instant time 20211201010227744 | |
21/12/01 01:21:03 INFO HoodieActiveTimeline: Creating a new instant [==>20211201010227744__deltacommit__REQUESTED] | |
21/12/01 01:21:04 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:05 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:05 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:05 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:05 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]} | |
21/12/01 01:21:05 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:21:05 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:21:05 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:21:05 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:05 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:05 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]} | |
21/12/01 01:21:05 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:06 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:06 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now | |
21/12/01 01:21:06 INFO SparkContext: Starting job: countByKey at BaseSparkCommitActionExecutor.java:191 | |
21/12/01 01:21:06 INFO DAGScheduler: Registering RDD 2658 (countByKey at BaseSparkCommitActionExecutor.java:191) as input to shuffle 260 | |
21/12/01 01:21:06 INFO DAGScheduler: Got job 794 (countByKey at BaseSparkCommitActionExecutor.java:191) with 1 output partitions | |
21/12/01 01:21:06 INFO DAGScheduler: Final stage: ResultStage 1169 (countByKey at BaseSparkCommitActionExecutor.java:191) | |
21/12/01 01:21:06 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1168) | |
21/12/01 01:21:06 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1168) | |
21/12/01 01:21:06 INFO DAGScheduler: Submitting ShuffleMapStage 1168 (MapPartitionsRDD[2658] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents | |
21/12/01 01:21:06 INFO MemoryStore: Block broadcast_1083 stored as values in memory (estimated size 10.3 KiB, free 3.1 MiB) | |
21/12/01 01:21:06 INFO MemoryStore: Block broadcast_1083_piece0 stored as bytes in memory (estimated size 5.2 KiB, free 3.1 MiB) | |
21/12/01 01:21:06 INFO BlockManagerInfo: Added broadcast_1083_piece0 in memory on 192.168.1.48:56496 (size: 5.2 KiB, free: 365.9 MiB) | |
21/12/01 01:21:06 INFO SparkContext: Created broadcast 1083 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:06 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1168 (MapPartitionsRDD[2658] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:06 INFO TaskSchedulerImpl: Adding task set 1168.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:06 INFO TaskSetManager: Starting task 0.0 in stage 1168.0 (TID 2210) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:06 INFO Executor: Running task 0.0 in stage 1168.0 (TID 2210) | |
21/12/01 01:21:06 INFO MemoryStore: Block rdd_2656_0 stored as values in memory (estimated size 1337.0 B, free 3.1 MiB) | |
21/12/01 01:21:06 INFO BlockManagerInfo: Added rdd_2656_0 in memory on 192.168.1.48:56496 (size: 1337.0 B, free: 365.9 MiB) | |
21/12/01 01:21:06 INFO Executor: Finished task 0.0 in stage 1168.0 (TID 2210). 1043 bytes result sent to driver | |
21/12/01 01:21:06 INFO TaskSetManager: Finished task 0.0 in stage 1168.0 (TID 2210) in 5 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:06 INFO TaskSchedulerImpl: Removed TaskSet 1168.0, whose tasks have all completed, from pool | |
21/12/01 01:21:06 INFO DAGScheduler: ShuffleMapStage 1168 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.006 s | |
21/12/01 01:21:06 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:21:06 INFO DAGScheduler: running: Set(ResultStage 1167) | |
21/12/01 01:21:06 INFO DAGScheduler: waiting: Set(ResultStage 1169) | |
21/12/01 01:21:06 INFO DAGScheduler: failed: Set() | |
21/12/01 01:21:06 INFO DAGScheduler: Submitting ResultStage 1169 (ShuffledRDD[2659] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents | |
21/12/01 01:21:06 INFO MemoryStore: Block broadcast_1084 stored as values in memory (estimated size 5.6 KiB, free 3.0 MiB) | |
21/12/01 01:21:06 INFO MemoryStore: Block broadcast_1084_piece0 stored as bytes in memory (estimated size 3.2 KiB, free 3.0 MiB) | |
21/12/01 01:21:06 INFO BlockManagerInfo: Added broadcast_1084_piece0 in memory on 192.168.1.48:56496 (size: 3.2 KiB, free: 365.9 MiB) | |
21/12/01 01:21:06 INFO SparkContext: Created broadcast 1084 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:06 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1169 (ShuffledRDD[2659] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:06 INFO TaskSchedulerImpl: Adding task set 1169.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:06 INFO TaskSetManager: Starting task 0.0 in stage 1169.0 (TID 2211) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:06 INFO Executor: Running task 0.0 in stage 1169.0 (TID 2211) | |
21/12/01 01:21:06 INFO ShuffleBlockFetcherIterator: Getting 1 (156.0 B) non-empty blocks including 1 (156.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:21:06 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:21:06 INFO Executor: Finished task 0.0 in stage 1169.0 (TID 2211). 1318 bytes result sent to driver | |
21/12/01 01:21:06 INFO TaskSetManager: Finished task 0.0 in stage 1169.0 (TID 2211) in 4 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:06 INFO TaskSchedulerImpl: Removed TaskSet 1169.0, whose tasks have all completed, from pool | |
21/12/01 01:21:06 INFO DAGScheduler: ResultStage 1169 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.005 s
21/12/01 01:21:06 INFO DAGScheduler: Job 794 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:06 INFO TaskSchedulerImpl: Killing all running tasks in stage 1169: Stage finished
21/12/01 01:21:06 INFO DAGScheduler: Job 794 finished: countByKey at BaseSparkCommitActionExecutor.java:191, took 0.012101 s
21/12/01 01:21:06 INFO BaseSparkCommitActionExecutor: Workload profile :WorkloadProfile {globalStat=WorkloadStat {numInserts=0, numUpdates=4}, partitionStat={files=WorkloadStat {numInserts=0, numUpdates=4}}, operationType=UPSERT_PREPPED}
21/12/01 01:21:06 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201010227744.deltacommit.requested
21/12/01 01:21:08 INFO HoodieActiveTimeline: Created a new file in meta path: s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201010227744.deltacommit.inflight
21/12/01 01:21:09 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201010227744.deltacommit.inflight
21/12/01 01:21:09 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:21:09 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:21:09 INFO SparkContext: Starting job: collect at SparkRejectUpdateStrategy.java:52
21/12/01 01:21:09 INFO DAGScheduler: Registering RDD 2662 (distinct at SparkRejectUpdateStrategy.java:52) as input to shuffle 261
21/12/01 01:21:09 INFO DAGScheduler: Got job 795 (collect at SparkRejectUpdateStrategy.java:52) with 1 output partitions
21/12/01 01:21:09 INFO DAGScheduler: Final stage: ResultStage 1171 (collect at SparkRejectUpdateStrategy.java:52)
21/12/01 01:21:09 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1170)
21/12/01 01:21:09 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1170)
21/12/01 01:21:09 INFO DAGScheduler: Submitting ShuffleMapStage 1170 (MapPartitionsRDD[2662] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents
21/12/01 01:21:09 INFO MemoryStore: Block broadcast_1085 stored as values in memory (estimated size 10.3 KiB, free 3.0 MiB)
21/12/01 01:21:09 INFO MemoryStore: Block broadcast_1085_piece0 stored as bytes in memory (estimated size 5.1 KiB, free 3.0 MiB)
21/12/01 01:21:09 INFO BlockManagerInfo: Added broadcast_1085_piece0 in memory on 192.168.1.48:56496 (size: 5.1 KiB, free: 365.9 MiB)
21/12/01 01:21:09 INFO SparkContext: Created broadcast 1085 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1170 (MapPartitionsRDD[2662] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:09 INFO TaskSchedulerImpl: Adding task set 1170.0 with 1 tasks resource profile 0
21/12/01 01:21:09 INFO TaskSetManager: Starting task 0.0 in stage 1170.0 (TID 2212) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map()
21/12/01 01:21:09 INFO Executor: Running task 0.0 in stage 1170.0 (TID 2212)
21/12/01 01:21:09 INFO BlockManager: Found block rdd_2656_0 locally
21/12/01 01:21:09 INFO Executor: Finished task 0.0 in stage 1170.0 (TID 2212). 1129 bytes result sent to driver
21/12/01 01:21:09 INFO TaskSetManager: Finished task 0.0 in stage 1170.0 (TID 2212) in 4 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:09 INFO TaskSchedulerImpl: Removed TaskSet 1170.0, whose tasks have all completed, from pool
21/12/01 01:21:09 INFO DAGScheduler: ShuffleMapStage 1170 (distinct at SparkRejectUpdateStrategy.java:52) finished in 0.006 s
21/12/01 01:21:09 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:21:09 INFO DAGScheduler: running: Set(ResultStage 1167)
21/12/01 01:21:09 INFO DAGScheduler: waiting: Set(ResultStage 1171)
21/12/01 01:21:09 INFO DAGScheduler: failed: Set()
21/12/01 01:21:09 INFO DAGScheduler: Submitting ResultStage 1171 (MapPartitionsRDD[2664] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents
21/12/01 01:21:09 INFO MemoryStore: Block broadcast_1086 stored as values in memory (estimated size 6.4 KiB, free 3.0 MiB)
21/12/01 01:21:09 INFO MemoryStore: Block broadcast_1086_piece0 stored as bytes in memory (estimated size 3.5 KiB, free 3.0 MiB)
21/12/01 01:21:09 INFO BlockManagerInfo: Added broadcast_1086_piece0 in memory on 192.168.1.48:56496 (size: 3.5 KiB, free: 365.9 MiB)
21/12/01 01:21:09 INFO SparkContext: Created broadcast 1086 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1171 (MapPartitionsRDD[2664] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:09 INFO TaskSchedulerImpl: Adding task set 1171.0 with 1 tasks resource profile 0
21/12/01 01:21:09 INFO TaskSetManager: Starting task 0.0 in stage 1171.0 (TID 2213) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:21:09 INFO Executor: Running task 0.0 in stage 1171.0 (TID 2213)
21/12/01 01:21:09 INFO ShuffleBlockFetcherIterator: Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:21:09 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:21:10 INFO Executor: Finished task 0.0 in stage 1171.0 (TID 2213). 1249 bytes result sent to driver
21/12/01 01:21:10 INFO TaskSetManager: Finished task 0.0 in stage 1171.0 (TID 2213) in 4 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:10 INFO TaskSchedulerImpl: Removed TaskSet 1171.0, whose tasks have all completed, from pool
21/12/01 01:21:10 INFO DAGScheduler: ResultStage 1171 (collect at SparkRejectUpdateStrategy.java:52) finished in 0.005 s
21/12/01 01:21:10 INFO DAGScheduler: Job 795 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:10 INFO TaskSchedulerImpl: Killing all running tasks in stage 1171: Stage finished
21/12/01 01:21:10 INFO DAGScheduler: Job 795 finished: collect at SparkRejectUpdateStrategy.java:52, took 0.011769 s
21/12/01 01:21:10 INFO UpsertPartitioner: AvgRecordSize => 1024
21/12/01 01:21:10 INFO SparkContext: Starting job: collectAsMap at UpsertPartitioner.java:256
21/12/01 01:21:10 INFO DAGScheduler: Got job 796 (collectAsMap at UpsertPartitioner.java:256) with 1 output partitions
21/12/01 01:21:10 INFO DAGScheduler: Final stage: ResultStage 1172 (collectAsMap at UpsertPartitioner.java:256)
21/12/01 01:21:10 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:21:10 INFO DAGScheduler: Missing parents: List()
21/12/01 01:21:10 INFO DAGScheduler: Submitting ResultStage 1172 (MapPartitionsRDD[2666] at mapToPair at UpsertPartitioner.java:255), which has no missing parents
21/12/01 01:21:10 INFO MemoryStore: Block broadcast_1087 stored as values in memory (estimated size 316.4 KiB, free 2.7 MiB)
21/12/01 01:21:10 INFO MemoryStore: Block broadcast_1087_piece0 stored as bytes in memory (estimated size 110.4 KiB, free 2.6 MiB)
21/12/01 01:21:10 INFO BlockManagerInfo: Added broadcast_1087_piece0 in memory on 192.168.1.48:56496 (size: 110.4 KiB, free: 365.8 MiB)
21/12/01 01:21:10 INFO SparkContext: Created broadcast 1087 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:10 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1172 (MapPartitionsRDD[2666] at mapToPair at UpsertPartitioner.java:255) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:10 INFO TaskSchedulerImpl: Adding task set 1172.0 with 1 tasks resource profile 0
21/12/01 01:21:10 INFO TaskSetManager: Starting task 0.0 in stage 1172.0 (TID 2214) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4338 bytes) taskResourceAssignments Map()
21/12/01 01:21:10 INFO Executor: Running task 0.0 in stage 1172.0 (TID 2214)
21/12/01 01:21:10 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:21:10 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:21:10 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata
21/12/01 01:21:10 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:21:10 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:21:10 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:21:11 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=12, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0
21/12/01 01:21:11 INFO Executor: Finished task 0.0 in stage 1172.0 (TID 2214). 829 bytes result sent to driver
21/12/01 01:21:11 INFO TaskSetManager: Finished task 0.0 in stage 1172.0 (TID 2214) in 337 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:11 INFO TaskSchedulerImpl: Removed TaskSet 1172.0, whose tasks have all completed, from pool
21/12/01 01:21:11 INFO DAGScheduler: ResultStage 1172 (collectAsMap at UpsertPartitioner.java:256) finished in 0.376 s
21/12/01 01:21:11 INFO DAGScheduler: Job 796 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:11 INFO TaskSchedulerImpl: Killing all running tasks in stage 1172: Stage finished
21/12/01 01:21:11 INFO DAGScheduler: Job 796 finished: collectAsMap at UpsertPartitioner.java:256, took 0.375851 s
21/12/01 01:21:11 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:21:11 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:21:11 INFO UpsertPartitioner: Total Buckets :1, buckets info => {0=BucketInfo {bucketType=UPDATE, fileIdPrefix=files-0000, partitionPath=files}},
Partition to insert buckets => {},
UpdateLocations mapped to buckets =>{files-0000=0}
21/12/01 01:21:11 INFO BaseSparkCommitActionExecutor: no validators configured.
21/12/01 01:21:11 INFO BaseCommitActionExecutor: Auto commit enabled: Committing 20211201010227744
21/12/01 01:21:11 INFO SparkContext: Starting job: collect at BaseSparkCommitActionExecutor.java:274
21/12/01 01:21:11 INFO DAGScheduler: Registering RDD 2667 (mapToPair at BaseSparkCommitActionExecutor.java:225) as input to shuffle 262
21/12/01 01:21:11 INFO DAGScheduler: Got job 797 (collect at BaseSparkCommitActionExecutor.java:274) with 1 output partitions
21/12/01 01:21:11 INFO DAGScheduler: Final stage: ResultStage 1174 (collect at BaseSparkCommitActionExecutor.java:274)
21/12/01 01:21:11 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1173)
21/12/01 01:21:11 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1173)
21/12/01 01:21:11 INFO DAGScheduler: Submitting ShuffleMapStage 1173 (MapPartitionsRDD[2667] at mapToPair at BaseSparkCommitActionExecutor.java:225), which has no missing parents
21/12/01 01:21:11 INFO MemoryStore: Block broadcast_1088 stored as values in memory (estimated size 321.5 KiB, free 2.3 MiB)
21/12/01 01:21:11 INFO MemoryStore: Block broadcast_1088_piece0 stored as bytes in memory (estimated size 113.0 KiB, free 2.2 MiB)
21/12/01 01:21:11 INFO BlockManagerInfo: Added broadcast_1088_piece0 in memory on 192.168.1.48:56496 (size: 113.0 KiB, free: 365.7 MiB)
21/12/01 01:21:11 INFO SparkContext: Created broadcast 1088 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:11 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1173 (MapPartitionsRDD[2667] at mapToPair at BaseSparkCommitActionExecutor.java:225) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:11 INFO TaskSchedulerImpl: Adding task set 1173.0 with 1 tasks resource profile 0
21/12/01 01:21:11 INFO TaskSetManager: Starting task 0.0 in stage 1173.0 (TID 2215) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map()
21/12/01 01:21:11 INFO Executor: Running task 0.0 in stage 1173.0 (TID 2215)
21/12/01 01:21:11 INFO BlockManager: Found block rdd_2656_0 locally
21/12/01 01:21:11 INFO Executor: Finished task 0.0 in stage 1173.0 (TID 2215). 1043 bytes result sent to driver
21/12/01 01:21:11 INFO TaskSetManager: Finished task 0.0 in stage 1173.0 (TID 2215) in 16 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:11 INFO TaskSchedulerImpl: Removed TaskSet 1173.0, whose tasks have all completed, from pool
21/12/01 01:21:11 INFO DAGScheduler: ShuffleMapStage 1173 (mapToPair at BaseSparkCommitActionExecutor.java:225) finished in 0.056 s
21/12/01 01:21:11 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:21:11 INFO DAGScheduler: running: Set(ResultStage 1167)
21/12/01 01:21:11 INFO DAGScheduler: waiting: Set(ResultStage 1174)
21/12/01 01:21:11 INFO DAGScheduler: failed: Set()
21/12/01 01:21:11 INFO DAGScheduler: Submitting ResultStage 1174 (MapPartitionsRDD[2672] at map at BaseSparkCommitActionExecutor.java:274), which has no missing parents
21/12/01 01:21:11 INFO BlockManagerInfo: Removed broadcast_1083_piece0 on 192.168.1.48:56496 in memory (size: 5.2 KiB, free: 365.7 MiB)
21/12/01 01:21:11 INFO BlockManagerInfo: Removed broadcast_1087_piece0 on 192.168.1.48:56496 in memory (size: 110.4 KiB, free: 365.8 MiB)
21/12/01 01:21:11 INFO BlockManagerInfo: Removed broadcast_1085_piece0 on 192.168.1.48:56496 in memory (size: 5.1 KiB, free: 365.8 MiB)
21/12/01 01:21:11 INFO BlockManagerInfo: Removed broadcast_1086_piece0 on 192.168.1.48:56496 in memory (size: 3.5 KiB, free: 365.8 MiB)
21/12/01 01:21:11 INFO BlockManagerInfo: Removed broadcast_1084_piece0 on 192.168.1.48:56496 in memory (size: 3.2 KiB, free: 365.8 MiB)
21/12/01 01:21:11 INFO MemoryStore: Block broadcast_1089 stored as values in memory (estimated size 424.7 KiB, free 2.2 MiB)
21/12/01 01:21:11 INFO MemoryStore: Block broadcast_1089_piece0 stored as bytes in memory (estimated size 150.2 KiB, free 2.1 MiB)
21/12/01 01:21:11 INFO BlockManagerInfo: Added broadcast_1089_piece0 in memory on 192.168.1.48:56496 (size: 150.2 KiB, free: 365.6 MiB)
21/12/01 01:21:11 INFO SparkContext: Created broadcast 1089 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:11 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1174 (MapPartitionsRDD[2672] at map at BaseSparkCommitActionExecutor.java:274) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:11 INFO TaskSchedulerImpl: Adding task set 1174.0 with 1 tasks resource profile 0
21/12/01 01:21:11 INFO TaskSetManager: Starting task 0.0 in stage 1174.0 (TID 2216) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:21:11 INFO Executor: Running task 0.0 in stage 1174.0 (TID 2216)
21/12/01 01:21:11 INFO ShuffleBlockFetcherIterator: Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:21:11 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:21:11 INFO AbstractSparkDeltaCommitActionExecutor: Merging updates for commit 20211201010227744 for file files-0000
21/12/01 01:21:11 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:21:11 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:21:11 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata
21/12/01 01:21:11 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:21:11 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:21:11 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:21:11 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=12, NumFileGroups=1, FileGroupsCreationTime=1, StoreTimeTaken=0
21/12/01 01:21:13 INFO DirectWriteMarkers: Creating Marker Path=s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201010227744/files/files-0000_0-1174-2216_20211201004828250001.hfile.marker.APPEND
21/12/01 01:21:14 INFO DirectWriteMarkers: [direct] Created marker file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201010227744/files/files-0000_0-1174-2216_20211201004828250001.hfile.marker.APPEND in 2148 ms
21/12/01 01:21:14 INFO HoodieLogFormat$WriterBuilder: Building HoodieLogFormat Writer
21/12/01 01:21:14 INFO HoodieLogFormat$WriterBuilder: HoodieLogFile on path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.10_0-1152-2179
21/12/01 01:21:14 INFO HoodieLogFormatWriter: Append not supported.. Rolling over to HoodieLogFile{pathStr='s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.11_0-1174-2216', fileLen=0}
21/12/01 01:21:14 INFO CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=406512, freeSize=394696944, maxSize=395103456, heapSize=406512, minSize=375348288, minFactor=0.95, multiSize=187674144, multiFactor=0.5, singleSize=93837072, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
21/12/01 01:21:14 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:21:14 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:21:14 INFO HoodieAppendHandle: AppendHandle for partitionPath files filePath files/.files-0000_20211201004828250001.log.11_0-1174-2216, took 3358 ms.
21/12/01 01:21:15 INFO MemoryStore: Block rdd_2671_0 stored as values in memory (estimated size 957.0 B, free 2.1 MiB)
21/12/01 01:21:15 INFO BlockManagerInfo: Added rdd_2671_0 in memory on 192.168.1.48:56496 (size: 957.0 B, free: 365.6 MiB)
21/12/01 01:21:15 INFO Executor: Finished task 0.0 in stage 1174.0 (TID 2216). 2106 bytes result sent to driver
21/12/01 01:21:15 INFO TaskSetManager: Finished task 0.0 in stage 1174.0 (TID 2216) in 4365 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:15 INFO TaskSchedulerImpl: Removed TaskSet 1174.0, whose tasks have all completed, from pool
21/12/01 01:21:15 INFO DAGScheduler: ResultStage 1174 (collect at BaseSparkCommitActionExecutor.java:274) finished in 4.462 s
21/12/01 01:21:15 INFO DAGScheduler: Job 797 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:15 INFO TaskSchedulerImpl: Killing all running tasks in stage 1174: Stage finished
21/12/01 01:21:15 INFO DAGScheduler: Job 797 finished: collect at BaseSparkCommitActionExecutor.java:274, took 4.519841 s
21/12/01 01:21:15 INFO BaseSparkCommitActionExecutor: Committing 20211201010227744, action Type deltacommit
21/12/01 01:21:16 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:134
21/12/01 01:21:16 INFO DAGScheduler: Got job 798 (collect at HoodieSparkEngineContext.java:134) with 1 output partitions
21/12/01 01:21:16 INFO DAGScheduler: Final stage: ResultStage 1175 (collect at HoodieSparkEngineContext.java:134)
21/12/01 01:21:16 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:21:16 INFO DAGScheduler: Missing parents: List()
21/12/01 01:21:16 INFO DAGScheduler: Submitting ResultStage 1175 (MapPartitionsRDD[2674] at flatMap at HoodieSparkEngineContext.java:134), which has no missing parents
21/12/01 01:21:16 INFO MemoryStore: Block broadcast_1090 stored as values in memory (estimated size 99.4 KiB, free 2033.0 KiB)
21/12/01 01:21:16 INFO MemoryStore: Block broadcast_1090_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 1997.8 KiB)
21/12/01 01:21:16 INFO BlockManagerInfo: Added broadcast_1090_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.6 MiB)
21/12/01 01:21:16 INFO SparkContext: Created broadcast 1090 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:16 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1175 (MapPartitionsRDD[2674] at flatMap at HoodieSparkEngineContext.java:134) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:16 INFO TaskSchedulerImpl: Adding task set 1175.0 with 1 tasks resource profile 0
21/12/01 01:21:16 INFO TaskSetManager: Starting task 0.0 in stage 1175.0 (TID 2217) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:21:16 INFO Executor: Running task 0.0 in stage 1175.0 (TID 2217)
21/12/01 01:21:16 INFO Executor: Finished task 0.0 in stage 1175.0 (TID 2217). 796 bytes result sent to driver
21/12/01 01:21:16 INFO TaskSetManager: Finished task 0.0 in stage 1175.0 (TID 2217) in 133 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:16 INFO TaskSchedulerImpl: Removed TaskSet 1175.0, whose tasks have all completed, from pool
21/12/01 01:21:16 INFO DAGScheduler: ResultStage 1175 (collect at HoodieSparkEngineContext.java:134) finished in 0.147 s
21/12/01 01:21:16 INFO DAGScheduler: Job 798 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:16 INFO TaskSchedulerImpl: Killing all running tasks in stage 1175: Stage finished
21/12/01 01:21:16 INFO DAGScheduler: Job 798 finished: collect at HoodieSparkEngineContext.java:134, took 0.147941 s
21/12/01 01:21:16 INFO CommitUtils: Creating metadata for UPSERT_PREPPED numWriteStats:1numReplaceFileIds:0
21/12/01 01:21:16 INFO HoodieActiveTimeline: Marking instant complete [==>20211201010227744__deltacommit__INFLIGHT]
21/12/01 01:21:16 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201010227744.deltacommit.inflight
21/12/01 01:21:17 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201010227744.deltacommit
21/12/01 01:21:17 INFO HoodieActiveTimeline: Completed [==>20211201010227744__deltacommit__INFLIGHT]
21/12/01 01:21:17 INFO BaseSparkCommitActionExecutor: Committed 20211201010227744
21/12/01 01:21:18 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:21:18 INFO DAGScheduler: Got job 799 (collectAsMap at HoodieSparkEngineContext.java:148) with 1 output partitions
21/12/01 01:21:18 INFO DAGScheduler: Final stage: ResultStage 1176 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:21:18 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:21:18 INFO DAGScheduler: Missing parents: List()
21/12/01 01:21:18 INFO DAGScheduler: Submitting ResultStage 1176 (MapPartitionsRDD[2676] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:21:18 INFO MemoryStore: Block broadcast_1091 stored as values in memory (estimated size 99.6 KiB, free 1898.2 KiB)
21/12/01 01:21:18 INFO MemoryStore: Block broadcast_1091_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 1862.9 KiB)
21/12/01 01:21:18 INFO BlockManagerInfo: Added broadcast_1091_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.6 MiB)
21/12/01 01:21:18 INFO SparkContext: Created broadcast 1091 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:18 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1176 (MapPartitionsRDD[2676] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:18 INFO TaskSchedulerImpl: Adding task set 1176.0 with 1 tasks resource profile 0
21/12/01 01:21:18 INFO TaskSetManager: Starting task 0.0 in stage 1176.0 (TID 2218) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:21:18 INFO Executor: Running task 0.0 in stage 1176.0 (TID 2218)
21/12/01 01:21:18 INFO HoodieCreateHandle: CreateHandle for partitionPath americas/united_states/san_francisco fileID c6d17cfb-140b-485b-b9ef-0cec97daa7e8-1, took 19768 ms.
21/12/01 01:21:18 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=asia%2Findia%2Fchennai%2Fc6d17cfb-140b-485b-b9ef-0cec97daa7e8-2_0-1167-2209_20211201011944112.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011944112)
21/12/01 01:21:18 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011944112 asia/india/chennai/c6d17cfb-140b-485b-b9ef-0cec97daa7e8-2_0-1167-2209_20211201011944112.parquet.marker.CREATE
21/12/01 01:21:19 INFO Executor: Finished task 0.0 in stage 1176.0 (TID 2218). 898 bytes result sent to driver
21/12/01 01:21:19 INFO TaskSetManager: Finished task 0.0 in stage 1176.0 (TID 2218) in 1153 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:19 INFO TaskSchedulerImpl: Removed TaskSet 1176.0, whose tasks have all completed, from pool
21/12/01 01:21:19 INFO DAGScheduler: ResultStage 1176 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 1.166 s
21/12/01 01:21:19 INFO DAGScheduler: Job 799 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:19 INFO TaskSchedulerImpl: Killing all running tasks in stage 1176: Stage finished
21/12/01 01:21:19 INFO DAGScheduler: Job 799 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 1.166884 s
21/12/01 01:21:19 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file asia/india/chennai/c6d17cfb-140b-485b-b9ef-0cec97daa7e8-2_0-1167-2209_20211201011944112.parquet.marker.CREATE in 529 ms
21/12/01 01:21:19 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:21:19 INFO HoodieCreateHandle: New CreateHandle for partition :asia/india/chennai with fileId c6d17cfb-140b-485b-b9ef-0cec97daa7e8-2
21/12/01 01:21:19 INFO BlockManagerInfo: Removed broadcast_1088_piece0 on 192.168.1.48:56496 in memory (size: 113.0 KiB, free: 365.7 MiB)
21/12/01 01:21:19 INFO BlockManagerInfo: Removed broadcast_1089_piece0 on 192.168.1.48:56496 in memory (size: 150.2 KiB, free: 365.8 MiB)
21/12/01 01:21:19 INFO BlockManagerInfo: Removed broadcast_1090_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.9 MiB)
21/12/01 01:21:19 INFO BlockManagerInfo: Removed broadcast_1091_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.9 MiB)
21/12/01 01:21:20 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201010227744
21/12/01 01:21:20 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]}
21/12/01 01:21:20 INFO HoodieTimelineArchiveLog: No Instants to archive
21/12/01 01:21:20 INFO HoodieHeartbeatClient: Stopping heartbeat for instant 20211201010227744
21/12/01 01:21:20 INFO HoodieHeartbeatClient: Stopped heartbeat for instant 20211201010227744
21/12/01 01:21:20 INFO HeartbeatUtils: Deleted the heartbeat for instant 20211201010227744
21/12/01 01:21:20 INFO HoodieHeartbeatClient: Deleted heartbeat file for instant 20211201010227744
21/12/01 01:21:21 INFO SparkContext: Starting job: collect at SparkHoodieBackedTableMetadataWriter.java:146
21/12/01 01:21:21 INFO DAGScheduler: Got job 800 (collect at SparkHoodieBackedTableMetadataWriter.java:146) with 1 output partitions
21/12/01 01:21:21 INFO DAGScheduler: Final stage: ResultStage 1178 (collect at SparkHoodieBackedTableMetadataWriter.java:146)
21/12/01 01:21:21 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1177)
21/12/01 01:21:21 INFO DAGScheduler: Missing parents: List()
21/12/01 01:21:21 INFO DAGScheduler: Submitting ResultStage 1178 (MapPartitionsRDD[2671] at flatMap at BaseSparkCommitActionExecutor.java:176), which has no missing parents
21/12/01 01:21:21 INFO MemoryStore: Block broadcast_1092 stored as values in memory (estimated size 424.3 KiB, free 2.7 MiB)
21/12/01 01:21:21 INFO MemoryStore: Block broadcast_1092_piece0 stored as bytes in memory (estimated size 150.1 KiB, free 2.5 MiB)
21/12/01 01:21:21 INFO BlockManagerInfo: Added broadcast_1092_piece0 in memory on 192.168.1.48:56496 (size: 150.1 KiB, free: 365.8 MiB)
21/12/01 01:21:21 INFO SparkContext: Created broadcast 1092 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:21 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1178 (MapPartitionsRDD[2671] at flatMap at BaseSparkCommitActionExecutor.java:176) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:21 INFO TaskSchedulerImpl: Adding task set 1178.0 with 1 tasks resource profile 0
21/12/01 01:21:21 INFO TaskSetManager: Starting task 0.0 in stage 1178.0 (TID 2219) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:21:21 INFO Executor: Running task 0.0 in stage 1178.0 (TID 2219)
21/12/01 01:21:21 INFO BlockManager: Found block rdd_2671_0 locally
21/12/01 01:21:21 INFO Executor: Finished task 0.0 in stage 1178.0 (TID 2219). 1799 bytes result sent to driver
21/12/01 01:21:21 INFO TaskSetManager: Finished task 0.0 in stage 1178.0 (TID 2219) in 17 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:21 INFO TaskSchedulerImpl: Removed TaskSet 1178.0, whose tasks have all completed, from pool
21/12/01 01:21:21 INFO DAGScheduler: ResultStage 1178 (collect at SparkHoodieBackedTableMetadataWriter.java:146) finished in 0.068 s
21/12/01 01:21:21 INFO DAGScheduler: Job 800 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:21 INFO TaskSchedulerImpl: Killing all running tasks in stage 1178: Stage finished
21/12/01 01:21:21 INFO DAGScheduler: Job 800 finished: collect at SparkHoodieBackedTableMetadataWriter.java:146, took 0.068098 s
21/12/01 01:21:21 INFO BlockManager: Removing RDD 2671
21/12/01 01:21:21 INFO BlockManagerInfo: Removed broadcast_1092_piece0 on 192.168.1.48:56496 in memory (size: 150.1 KiB, free: 365.9 MiB)
21/12/01 01:21:21 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]}
21/12/01 01:21:21 INFO SparkRDDWriteClient: Committing Clustering 20211201010227744. Finished with result HoodieReplaceMetadata{partitionToWriteStats={americas/brazil/sao_paulo=[HoodieWriteStat{fileId='354cffd7-15cd-4805-91bd-751cd2f50027-0', path='americas/brazil/sao_paulo/354cffd7-15cd-4805-91bd-751cd2f50027-0_0-1162-2194_20211201010227744.parquet', prevCommit='null', numWrites=200177, numDeletes=0, numUpdateWrites=0, totalWriteBytes=18625669, totalWriteErrors=0, tempPath='null', partitionPath='americas/brazil/sao_paulo', totalLogRecords=0, totalLogFilesCompacted=0, totalLogSizeCompacted=0, totalUpdatedRecordsCompacted=0, totalLogBlocks=0, totalCorruptLogBlock=0, totalRollbackBlocks=0}], americas/united_states/san_francisco=[HoodieWriteStat{fileId='f67453ae-b123-452c-85f7-1c49289786be-0', path='americas/united_states/san_francisco/f67453ae-b123-452c-85f7-1c49289786be-0_1-1162-2195_20211201010227744.parquet', prevCommit='null', numWrites=200194, numDeletes=0, numUpdateWrites=0, totalWriteBytes=18661050, totalWriteErrors=0, tempPath='null', partitionPath='americas/united_states/san_francisco', totalLogRecords=0, totalLogFilesCompacted=0, totalLogSizeCompacted=0, totalUpdatedRecordsCompacted=0, totalLogBlocks=0, totalCorruptLogBlock=0, totalRollbackBlocks=0}], asia/india/chennai=[HoodieWriteStat{fileId='1f337654-ab8a-46eb-92d5-87c4b70a7864-0', path='asia/india/chennai/1f337654-ab8a-46eb-92d5-87c4b70a7864-0_2-1162-2196_20211201010227744.parquet', prevCommit='null', numWrites=199629, numDeletes=0, numUpdateWrites=0, totalWriteBytes=18554632, totalWriteErrors=0, tempPath='null', partitionPath='asia/india/chennai', totalLogRecords=0, totalLogFilesCompacted=0, totalLogSizeCompacted=0, totalUpdatedRecordsCompacted=0, totalLogBlocks=0, totalCorruptLogBlock=0, totalRollbackBlocks=0}]}, partitionToReplaceFileIds={americas/brazil/sao_paulo=[b38c1920-eead-4d80-94ff-f54a8b97f14d-0], americas/united_states/san_francisco=[b38c1920-eead-4d80-94ff-f54a8b97f14d-1], asia/india/chennai=[b38c1920-eead-4d80-94ff-f54a8b97f14d-2]}, compacted=false, extraMetadata={schema={"type":"record","name":"triprec","fields":[{"name":"begin_lat","type":"double"},{"name":"begin_lon","type":"double"},{"name":"driver","type":"string"},{"name":"end_lat","type":"double"},{"name":"end_lon","type":"double"},{"name":"fare","type":"double"},{"name":"partitionpath","type":"string"},{"name":"rider","type":"string"},{"name":"ts","type":"long"},{"name":"uuid","type":"string"}]}}, operationType=CLUSTER}
21/12/01 01:21:21 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201010227744.replacecommit.inflight
21/12/01 01:21:21 INFO BlockManager: Removing RDD 2656
21/12/01 01:21:22 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201010227744.replacecommit
21/12/01 01:21:22 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/dir/delete?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201010227744)
21/12/01 01:21:22 INFO IteratorBasedQueueProducer: finished buffering records
21/12/01 01:21:22 INFO HoodieCreateHandle: Closing the file c6d17cfb-140b-485b-b9ef-0cec97daa7e8-2 as we are done with all the records 199842
21/12/01 01:21:23 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:21:23 INFO DAGScheduler: Got job 801 (collectAsMap at HoodieSparkEngineContext.java:148) with 2 output partitions
21/12/01 01:21:23 INFO DAGScheduler: Final stage: ResultStage 1179 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:21:23 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:21:23 INFO DAGScheduler: Missing parents: List()
21/12/01 01:21:23 INFO DAGScheduler: Submitting ResultStage 1179 (MapPartitionsRDD[2678] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:21:23 INFO MemoryStore: Block broadcast_1093 stored as values in memory (estimated size 99.6 KiB, free 364.4 MiB)
21/12/01 01:21:23 INFO MemoryStore: Block broadcast_1093_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 364.4 MiB)
21/12/01 01:21:23 INFO BlockManagerInfo: Added broadcast_1093_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.9 MiB)
21/12/01 01:21:23 INFO SparkContext: Created broadcast 1093 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:23 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1179 (MapPartitionsRDD[2678] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0, 1))
21/12/01 01:21:23 INFO TaskSchedulerImpl: Adding task set 1179.0 with 2 tasks resource profile 0
21/12/01 01:21:23 INFO TaskSetManager: Starting task 0.0 in stage 1179.0 (TID 2220) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4418 bytes) taskResourceAssignments Map()
21/12/01 01:21:23 INFO TaskSetManager: Starting task 1.0 in stage 1179.0 (TID 2221) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4414 bytes) taskResourceAssignments Map()
21/12/01 01:21:23 INFO Executor: Running task 0.0 in stage 1179.0 (TID 2220) | |
21/12/01 01:21:23 INFO Executor: Running task 1.0 in stage 1179.0 (TID 2221) | |
21/12/01 01:21:23 INFO Executor: Finished task 1.0 in stage 1179.0 (TID 2221). 884 bytes result sent to driver | |
21/12/01 01:21:23 INFO TaskSetManager: Finished task 1.0 in stage 1179.0 (TID 2221) in 329 ms on 192.168.1.48 (executor driver) (1/2) | |
21/12/01 01:21:24 INFO Executor: Finished task 0.0 in stage 1179.0 (TID 2220). 888 bytes result sent to driver | |
21/12/01 01:21:24 INFO TaskSetManager: Finished task 0.0 in stage 1179.0 (TID 2220) in 883 ms on 192.168.1.48 (executor driver) (2/2) | |
21/12/01 01:21:24 INFO TaskSchedulerImpl: Removed TaskSet 1179.0, whose tasks have all completed, from pool | |
21/12/01 01:21:24 INFO DAGScheduler: ResultStage 1179 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 0.899 s | |
21/12/01 01:21:24 INFO DAGScheduler: Job 801 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:21:24 INFO TaskSchedulerImpl: Killing all running tasks in stage 1179: Stage finished | |
21/12/01 01:21:24 INFO DAGScheduler: Job 801 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 0.899571 s | |
21/12/01 01:21:24 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201010227744 | |
21/12/01 01:21:24 INFO SparkRDDWriteClient: Clustering successfully on commit 20211201010227744 | |
21/12/01 01:21:24 INFO AsyncClusteringService: Finished clustering for instant [==>20211201010227744__replacecommit__REQUESTED] | |
21/12/01 01:21:24 INFO HoodieAsyncService: Waiting for next instant upto 10 seconds | |
21/12/01 01:21:24 INFO AsyncClusteringService: Starting clustering for instant [==>20211201011347895__replacecommit__REQUESTED] | |
21/12/01 01:21:24 INFO HoodieSparkClusteringClient: Executing clustering instance [==>20211201011347895__replacecommit__REQUESTED] | |
21/12/01 01:21:24 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:24 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:24 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:24 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:25 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]} | |
21/12/01 01:21:25 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:25 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:25 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:25 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:25 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:25 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:25 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST | |
21/12/01 01:21:25 INFO FileSystemViewManager: Creating remote first table view | |
21/12/01 01:21:25 INFO FileSystemViewManager: Creating remote view for basePath s3a://hudi-testing/test_hoodie_table_2. Server=192.168.1.48:56507, Timeout=300 | |
21/12/01 01:21:25 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2 | |
21/12/01 01:21:27 INFO AbstractTableFileSystemView: Took 1957 ms to read 9 instants, 66 replaced file groups | |
21/12/01 01:21:28 INFO ClusteringUtils: Found 9 files in pending clustering operations | |
21/12/01 01:21:28 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=2c6fcddbbde6555c67569b39a63937bbe026f05ac6b0cf3d76179991f9893481) | |
21/12/01 01:21:30 INFO AbstractTableFileSystemView: Took 2155 ms to read 9 instants, 66 replaced file groups | |
21/12/01 01:21:31 INFO ClusteringUtils: Found 9 files in pending clustering operations | |
21/12/01 01:21:32 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now | |
21/12/01 01:21:32 INFO SparkRDDWriteClient: Starting clustering at 20211201011347895 | |
21/12/01 01:21:32 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201011347895.replacecommit.requested | |
21/12/01 01:21:33 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201011347895.replacecommit.inflight | |
21/12/01 01:21:33 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]} | |
21/12/01 01:21:33 INFO SparkSortAndSizeExecutionStrategy: Starting clustering for a group, parallelism:1 commit:20211201011347895 | |
21/12/01 01:21:33 INFO SparkSortAndSizeExecutionStrategy: Starting clustering for a group, parallelism:1 commit:20211201011347895 | |
21/12/01 01:21:33 INFO BlockManagerInfo: Removed broadcast_1093_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.9 MiB) | |
21/12/01 01:21:34 INFO SparkSortAndSizeExecutionStrategy: Starting clustering for a group, parallelism:1 commit:20211201011347895 | |
21/12/01 01:21:34 INFO SparkContext: Starting job: collect at SparkExecuteClusteringCommitActionExecutor.java:85 | |
21/12/01 01:21:34 INFO DAGScheduler: Registering RDD 2698 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 263 | |
21/12/01 01:21:34 INFO DAGScheduler: Registering RDD 2690 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 264 | |
21/12/01 01:21:34 INFO DAGScheduler: Registering RDD 2682 (sortBy at GlobalSortPartitioner.java:41) as input to shuffle 265 | |
21/12/01 01:21:34 INFO DAGScheduler: Got job 802 (collect at SparkExecuteClusteringCommitActionExecutor.java:85) with 3 output partitions | |
21/12/01 01:21:34 INFO DAGScheduler: Final stage: ResultStage 1183 (collect at SparkExecuteClusteringCommitActionExecutor.java:85) | |
21/12/01 01:21:34 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1181, ShuffleMapStage 1182, ShuffleMapStage 1180) | |
21/12/01 01:21:34 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1181, ShuffleMapStage 1182, ShuffleMapStage 1180) | |
21/12/01 01:21:34 INFO DAGScheduler: Submitting ShuffleMapStage 1180 (MapPartitionsRDD[2698] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents | |
21/12/01 01:21:34 INFO MemoryStore: Block broadcast_1094 stored as values in memory (estimated size 512.3 KiB, free 364.0 MiB) | |
21/12/01 01:21:34 INFO MemoryStore: Block broadcast_1094_piece0 stored as bytes in memory (estimated size 179.3 KiB, free 363.8 MiB) | |
21/12/01 01:21:34 INFO BlockManagerInfo: Added broadcast_1094_piece0 in memory on 192.168.1.48:56496 (size: 179.3 KiB, free: 365.7 MiB) | |
21/12/01 01:21:34 INFO SparkContext: Created broadcast 1094 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:34 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1180 (MapPartitionsRDD[2698] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:34 INFO TaskSchedulerImpl: Adding task set 1180.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:34 INFO DAGScheduler: Submitting ShuffleMapStage 1181 (MapPartitionsRDD[2690] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents | |
21/12/01 01:21:34 INFO TaskSetManager: Starting task 0.0 in stage 1180.0 (TID 2222) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4595 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:34 INFO Executor: Running task 0.0 in stage 1180.0 (TID 2222) | |
21/12/01 01:21:34 INFO MemoryStore: Block broadcast_1095 stored as values in memory (estimated size 512.3 KiB, free 363.3 MiB) | |
21/12/01 01:21:34 INFO MemoryStore: Block broadcast_1095_piece0 stored as bytes in memory (estimated size 179.3 KiB, free 363.2 MiB) | |
21/12/01 01:21:34 INFO BlockManagerInfo: Added broadcast_1095_piece0 in memory on 192.168.1.48:56496 (size: 179.3 KiB, free: 365.6 MiB) | |
21/12/01 01:21:34 INFO SparkContext: Created broadcast 1095 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:34 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1181 (MapPartitionsRDD[2690] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:34 INFO TaskSchedulerImpl: Adding task set 1181.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:34 INFO DAGScheduler: Submitting ShuffleMapStage 1182 (MapPartitionsRDD[2682] at sortBy at GlobalSortPartitioner.java:41), which has no missing parents | |
21/12/01 01:21:34 INFO TaskSetManager: Starting task 0.0 in stage 1181.0 (TID 2223) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4631 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:34 INFO Executor: Running task 0.0 in stage 1181.0 (TID 2223) | |
21/12/01 01:21:34 INFO MemoryStore: Block broadcast_1096 stored as values in memory (estimated size 512.3 KiB, free 362.7 MiB) | |
21/12/01 01:21:34 INFO MemoryStore: Block broadcast_1096_piece0 stored as bytes in memory (estimated size 179.3 KiB, free 362.5 MiB) | |
21/12/01 01:21:34 INFO BlockManagerInfo: Added broadcast_1096_piece0 in memory on 192.168.1.48:56496 (size: 179.3 KiB, free: 365.4 MiB) | |
21/12/01 01:21:34 INFO SparkContext: Created broadcast 1096 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:34 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1182 (MapPartitionsRDD[2682] at sortBy at GlobalSortPartitioner.java:41) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:34 INFO TaskSchedulerImpl: Adding task set 1182.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:34 INFO TaskSetManager: Starting task 0.0 in stage 1182.0 (TID 2224) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4609 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:34 INFO Executor: Running task 0.0 in stage 1182.0 (TID 2224) | |
21/12/01 01:21:34 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:21:35 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:21:35 INFO S3AInputStream: Switching to Random IO seek policy | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 199893 records. | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: at row 0. reading next block | |
21/12/01 01:21:35 INFO CodecPool: Got brand-new decompressor [.gz] | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: block read in memory in 143 ms. row count = 3157 | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: Assembled and processed 3157 records from 15 columns in 20 ms: 157.85 rec/ms, 2367.75 cell/ms | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: time spent so far 87% reading (143 ms) and 12% processing (20 ms) | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: at row 3157. reading next block | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: block read in memory in 140 ms. row count = 3157 | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: Assembled and processed 6314 records from 15 columns in 40 ms: 157.85 rec/ms, 2367.75 cell/ms | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: time spent so far 87% reading (283 ms) and 12% processing (40 ms) | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: at row 6314. reading next block | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 200342 records. | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: at row 0. reading next block | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3157 | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: Assembled and processed 9471 records from 15 columns in 58 ms: 163.2931 rec/ms, 2449.3965 cell/ms | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: time spent so far 87% reading (413 ms) and 12% processing (58 ms) | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: at row 9471. reading next block | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 199765 records. | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: at row 0. reading next block | |
21/12/01 01:21:35 INFO CodecPool: Got brand-new decompressor [.gz] | |
21/12/01 01:21:35 INFO InternalParquetRecordReader: block read in memory in 139 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 3029 records from 15 columns in 20 ms: 151.45 rec/ms, 2271.75 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (139 ms) and 12% processing (20 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 3029. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 3157 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 12628 records from 15 columns in 77 ms: 164.0 rec/ms, 2460.0 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (542 ms) and 12% processing (77 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 12628. reading next block | |
21/12/01 01:21:36 INFO CodecPool: Got brand-new decompressor [.gz] | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 151 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 2848 records from 15 columns in 19 ms: 149.89473 rec/ms, 2248.4211 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (151 ms) and 11% processing (19 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 2848. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 124 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 6058 records from 15 columns in 38 ms: 159.42105 rec/ms, 2391.3157 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (263 ms) and 12% processing (38 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 6058. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 3157 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 135 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 15785 records from 15 columns in 97 ms: 162.73196 rec/ms, 2440.9795 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (664 ms) and 12% processing (97 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 15785. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 5696 records from 15 columns in 36 ms: 158.22223 rec/ms, 2373.3333 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (286 ms) and 11% processing (36 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 5696. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 134 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 9087 records from 15 columns in 55 ms: 165.21819 rec/ms, 2478.2727 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (397 ms) and 12% processing (55 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 9087. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3157 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 8544 records from 15 columns in 54 ms: 158.22223 rec/ms, 2373.3333 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (412 ms) and 11% processing (54 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 8544. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 18942 records from 15 columns in 117 ms: 161.89743 rec/ms, 2428.4614 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (794 ms) and 12% processing (117 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 18942. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 12116 records from 15 columns in 73 ms: 165.9726 rec/ms, 2489.589 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (523 ms) and 12% processing (73 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 12116. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3157 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 22099 records from 15 columns in 136 ms: 162.49265 rec/ms, 2437.3896 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (915 ms) and 12% processing (136 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 22099. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 11392 records from 15 columns in 72 ms: 158.22223 rec/ms, 2373.3333 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (539 ms) and 11% processing (72 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 11392. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 15145 records from 15 columns in 91 ms: 166.42857 rec/ms, 2496.4285 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (650 ms) and 12% processing (91 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 15145. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 14240 records from 15 columns in 89 ms: 160.0 rec/ms, 2400.0 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (666 ms) and 11% processing (89 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 14240. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 167 ms. row count = 3157 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 25256 records from 15 columns in 154 ms: 164.0 rec/ms, 2460.0 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (1082 ms) and 12% processing (154 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 25256. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 18174 records from 15 columns in 109 ms: 166.73395 rec/ms, 2501.0093 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (778 ms) and 12% processing (109 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 18174. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 17088 records from 15 columns in 106 ms: 161.20755 rec/ms, 2418.1133 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (787 ms) and 11% processing (106 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 17088. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 134 ms. row count = 3157 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 133 ms. row count = 3029 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 28413 records from 15 columns in 172 ms: 165.19186 rec/ms, 2477.878 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (1216 ms) and 12% processing (172 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 28413. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 21203 records from 15 columns in 127 ms: 166.95276 rec/ms, 2504.2913 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 87% reading (911 ms) and 12% processing (127 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 21203. reading next block | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 2848 | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: Assembled and processed 19936 records from 15 columns in 123 ms: 162.0813 rec/ms, 2431.2195 cell/ms | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: time spent so far 88% reading (910 ms) and 11% processing (123 ms) | |
21/12/01 01:21:36 INFO InternalParquetRecordReader: at row 19936. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 139 ms. row count = 3157 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 131 ms. row count = 3029 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 31570 records from 15 columns in 190 ms: 166.1579 rec/ms, 2492.3684 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 87% reading (1355 ms) and 12% processing (190 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 31570. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 24232 records from 15 columns in 144 ms: 168.27777 rec/ms, 2524.1667 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 87% reading (1042 ms) and 12% processing (144 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 24232. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 137 ms. row count = 2848 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 22784 records from 15 columns in 139 ms: 163.91367 rec/ms, 2458.705 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1047 ms) and 11% processing (139 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 22784. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 138 ms. row count = 3157 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 34727 records from 15 columns in 207 ms: 167.76329 rec/ms, 2516.4492 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 87% reading (1493 ms) and 12% processing (207 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 34727. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 180 ms. row count = 3029 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 27261 records from 15 columns in 162 ms: 168.27777 rec/ms, 2524.1667 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1222 ms) and 11% processing (162 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 27261. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 187 ms. row count = 2848 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 25632 records from 15 columns in 174 ms: 147.31035 rec/ms, 2209.6553 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 87% reading (1234 ms) and 12% processing (174 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 25632. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 157 ms. row count = 3157 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 37884 records from 15 columns in 225 ms: 168.37334 rec/ms, 2525.6 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1650 ms) and 12% processing (225 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 37884. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 152 ms. row count = 3029 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 30290 records from 15 columns in 179 ms: 169.21788 rec/ms, 2538.268 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1374 ms) and 11% processing (179 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 30290. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 142 ms. row count = 2848 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 28480 records from 15 columns in 190 ms: 149.89473 rec/ms, 2248.4211 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 87% reading (1376 ms) and 12% processing (190 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 28480. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 138 ms. row count = 3157 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 41041 records from 15 columns in 243 ms: 168.893 rec/ms, 2533.395 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1788 ms) and 11% processing (243 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 41041. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 142 ms. row count = 3029 | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 33319 records from 15 columns in 196 ms: 169.9949 rec/ms, 2549.9236 cell/ms | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1516 ms) and 11% processing (196 ms) | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 33319. reading next block | |
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 134 ms. row count = 2848
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 31328 records from 15 columns in 206 ms: 152.07767 rec/ms, 2281.165 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 87% reading (1510 ms) and 12% processing (206 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 31328. reading next block
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 144 ms. row count = 3029
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 36348 records from 15 columns in 213 ms: 170.64789 rec/ms, 2559.7183 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1660 ms) and 11% processing (213 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 36348. reading next block
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 232 ms. row count = 3157
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 44198 records from 15 columns in 260 ms: 169.99231 rec/ms, 2549.8845 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (2020 ms) and 11% processing (260 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 44198. reading next block
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 147 ms. row count = 2848
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 34176 records from 15 columns in 222 ms: 153.94595 rec/ms, 2309.1892 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1657 ms) and 11% processing (222 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 34176. reading next block
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 151 ms. row count = 3029
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 39377 records from 15 columns in 230 ms: 171.20435 rec/ms, 2568.0652 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1811 ms) and 11% processing (230 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 39377. reading next block
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 149 ms. row count = 3157
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 47355 records from 15 columns in 278 ms: 170.34172 rec/ms, 2555.126 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (2169 ms) and 11% processing (278 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 47355. reading next block
21/12/01 01:21:37 INFO InternalParquetRecordReader: block read in memory in 141 ms. row count = 2848
21/12/01 01:21:37 INFO InternalParquetRecordReader: Assembled and processed 37024 records from 15 columns in 239 ms: 154.91214 rec/ms, 2323.6821 cell/ms
21/12/01 01:21:37 INFO InternalParquetRecordReader: time spent so far 88% reading (1798 ms) and 11% processing (239 ms)
21/12/01 01:21:37 INFO InternalParquetRecordReader: at row 37024. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 138 ms. row count = 3029
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 42406 records from 15 columns in 247 ms: 171.6842 rec/ms, 2575.2632 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (1949 ms) and 11% processing (247 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 42406. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 138 ms. row count = 3157
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 50512 records from 15 columns in 295 ms: 171.22711 rec/ms, 2568.4067 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2307 ms) and 11% processing (295 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 50512. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 140 ms. row count = 2848
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 39872 records from 15 columns in 255 ms: 156.36078 rec/ms, 2345.4119 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (1938 ms) and 11% processing (255 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 39872. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 155 ms. row count = 3029
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 45435 records from 15 columns in 264 ms: 172.10228 rec/ms, 2581.5342 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2104 ms) and 11% processing (264 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 45435. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 173 ms. row count = 3157
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 144 ms. row count = 2848
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 53669 records from 15 columns in 313 ms: 171.46646 rec/ms, 2571.9968 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2480 ms) and 11% processing (313 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 53669. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 42720 records from 15 columns in 271 ms: 157.63838 rec/ms, 2364.5757 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2082 ms) and 11% processing (271 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 42720. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 161 ms. row count = 3029
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 48464 records from 15 columns in 281 ms: 172.46976 rec/ms, 2587.0461 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2265 ms) and 11% processing (281 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 48464. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 146 ms. row count = 3157
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 146 ms. row count = 2848
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 56826 records from 15 columns in 330 ms: 172.2 rec/ms, 2583.0 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2626 ms) and 11% processing (330 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 56826. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 45568 records from 15 columns in 305 ms: 149.40327 rec/ms, 2241.049 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 87% reading (2228 ms) and 12% processing (305 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 45568. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 150 ms. row count = 3157
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 197 ms. row count = 3029
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 148 ms. row count = 2848
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 59983 records from 15 columns in 349 ms: 171.87106 rec/ms, 2578.066 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2776 ms) and 11% processing (349 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 59983. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 51493 records from 15 columns in 298 ms: 172.7953 rec/ms, 2591.9294 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 89% reading (2462 ms) and 10% processing (298 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 51493. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 48416 records from 15 columns in 321 ms: 150.82866 rec/ms, 2262.43 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2376 ms) and 11% processing (321 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 48416. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 164 ms. row count = 3157
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 162 ms. row count = 3029
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 63140 records from 15 columns in 367 ms: 172.0436 rec/ms, 2580.654 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2940 ms) and 11% processing (367 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 63140. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 54522 records from 15 columns in 316 ms: 172.53798 rec/ms, 2588.0696 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 89% reading (2624 ms) and 10% processing (316 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 54522. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 262 ms. row count = 2848
21/12/01 01:21:38 INFO InternalParquetRecordReader: Assembled and processed 51264 records from 15 columns in 338 ms: 151.66864 rec/ms, 2275.0295 cell/ms
21/12/01 01:21:38 INFO InternalParquetRecordReader: time spent so far 88% reading (2638 ms) and 11% processing (338 ms)
21/12/01 01:21:38 INFO InternalParquetRecordReader: at row 51264. reading next block
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 162 ms. row count = 3029
21/12/01 01:21:38 INFO InternalParquetRecordReader: block read in memory in 169 ms. row count = 3157
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 57551 records from 15 columns in 333 ms: 172.82582 rec/ms, 2592.3875 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (2786 ms) and 10% processing (333 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 57551. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 66297 records from 15 columns in 385 ms: 172.2 rec/ms, 2583.0 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (3109 ms) and 11% processing (385 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 66297. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 178 ms. row count = 2848
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 54112 records from 15 columns in 355 ms: 152.42816 rec/ms, 2286.4226 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (2816 ms) and 11% processing (355 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 54112. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 155 ms. row count = 3029
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 154 ms. row count = 3157
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 60580 records from 15 columns in 350 ms: 173.08571 rec/ms, 2596.2856 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (2941 ms) and 10% processing (350 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 60580. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 69454 records from 15 columns in 403 ms: 172.34244 rec/ms, 2585.1365 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3263 ms) and 10% processing (403 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 69454. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 158 ms. row count = 2848
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 56960 records from 15 columns in 372 ms: 153.11829 rec/ms, 2296.7742 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (2974 ms) and 11% processing (372 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 56960. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 163 ms. row count = 3029
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 167 ms. row count = 3157
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 63609 records from 15 columns in 367 ms: 173.32153 rec/ms, 2599.823 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3104 ms) and 10% processing (367 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 63609. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 72611 records from 15 columns in 421 ms: 172.47269 rec/ms, 2587.0903 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3430 ms) and 10% processing (421 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 72611. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 172 ms. row count = 2848
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 59808 records from 15 columns in 389 ms: 153.74808 rec/ms, 2306.2212 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (3146 ms) and 11% processing (389 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 59808. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 176 ms. row count = 3029
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 176 ms. row count = 3157
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 66638 records from 15 columns in 391 ms: 170.42967 rec/ms, 2556.445 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3280 ms) and 10% processing (391 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 66638. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 75768 records from 15 columns in 446 ms: 169.8834 rec/ms, 2548.2512 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (3606 ms) and 11% processing (446 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 75768. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 155 ms. row count = 2848
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 62656 records from 15 columns in 408 ms: 153.56863 rec/ms, 2303.5293 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (3301 ms) and 11% processing (408 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 62656. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 160 ms. row count = 3029
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 69667 records from 15 columns in 408 ms: 170.75246 rec/ms, 2561.2869 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3440 ms) and 10% processing (408 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 69667. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 257 ms. row count = 3157
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 162 ms. row count = 2848
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 78925 records from 15 columns in 465 ms: 169.73119 rec/ms, 2545.9678 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3863 ms) and 10% processing (465 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 78925. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 65504 records from 15 columns in 443 ms: 147.86456 rec/ms, 2217.9685 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 88% reading (3463 ms) and 11% processing (443 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 65504. reading next block
21/12/01 01:21:39 INFO InternalParquetRecordReader: block read in memory in 152 ms. row count = 3029
21/12/01 01:21:39 INFO InternalParquetRecordReader: Assembled and processed 72696 records from 15 columns in 425 ms: 171.04941 rec/ms, 2565.7412 cell/ms
21/12/01 01:21:39 INFO InternalParquetRecordReader: time spent so far 89% reading (3592 ms) and 10% processing (425 ms)
21/12/01 01:21:39 INFO InternalParquetRecordReader: at row 72696. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 164 ms. row count = 2848
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 189 ms. row count = 3157
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 68352 records from 15 columns in 460 ms: 148.59131 rec/ms, 2228.8696 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 88% reading (3627 ms) and 11% processing (460 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 68352. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 82082 records from 15 columns in 482 ms: 170.2946 rec/ms, 2554.4192 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4052 ms) and 10% processing (482 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 82082. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 159 ms. row count = 3029
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 75725 records from 15 columns in 442 ms: 171.32353 rec/ms, 2569.853 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (3751 ms) and 10% processing (442 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 75725. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 163 ms. row count = 3157
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 85239 records from 15 columns in 500 ms: 170.478 rec/ms, 2557.17 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4215 ms) and 10% processing (500 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 85239. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 195 ms. row count = 2848
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 71200 records from 15 columns in 477 ms: 149.26625 rec/ms, 2238.9937 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 88% reading (3822 ms) and 11% processing (477 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 71200. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 202 ms. row count = 3029
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 78754 records from 15 columns in 459 ms: 171.57735 rec/ms, 2573.6602 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (3953 ms) and 10% processing (459 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 78754. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 167 ms. row count = 3157
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 88396 records from 15 columns in 518 ms: 170.64865 rec/ms, 2559.7297 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4382 ms) and 10% processing (518 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 88396. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 172 ms. row count = 2848
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 74048 records from 15 columns in 493 ms: 150.19878 rec/ms, 2252.9817 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (3994 ms) and 10% processing (493 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 74048. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 153 ms. row count = 3029
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 81783 records from 15 columns in 476 ms: 171.81302 rec/ms, 2577.1953 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4106 ms) and 10% processing (476 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 81783. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 174 ms. row count = 3157
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 91553 records from 15 columns in 536 ms: 170.80783 rec/ms, 2562.1174 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4556 ms) and 10% processing (536 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 91553. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 166 ms. row count = 3029
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 194 ms. row count = 2848
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 84812 records from 15 columns in 493 ms: 172.03246 rec/ms, 2580.4868 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4272 ms) and 10% processing (493 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 84812. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 76896 records from 15 columns in 509 ms: 151.0727 rec/ms, 2266.0903 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4188 ms) and 10% processing (509 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 76896. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 134 ms. row count = 3157
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 94710 records from 15 columns in 553 ms: 171.26582 rec/ms, 2568.9873 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4690 ms) and 10% processing (553 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 94710. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 3029
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 131 ms. row count = 2848
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 87841 records from 15 columns in 510 ms: 172.23726 rec/ms, 2583.5588 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4391 ms) and 10% processing (510 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 87841. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 79744 records from 15 columns in 525 ms: 151.89333 rec/ms, 2278.4 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4319 ms) and 10% processing (525 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 79744. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 3157
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 97867 records from 15 columns in 571 ms: 171.3958 rec/ms, 2570.937 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4819 ms) and 10% processing (571 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 97867. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 124 ms. row count = 3029
21/12/01 01:21:40 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 90870 records from 15 columns in 527 ms: 172.42885 rec/ms, 2586.4326 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4515 ms) and 10% processing (527 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 90870. reading next block
21/12/01 01:21:40 INFO InternalParquetRecordReader: Assembled and processed 82592 records from 15 columns in 542 ms: 152.38376 rec/ms, 2285.7563 cell/ms
21/12/01 01:21:40 INFO InternalParquetRecordReader: time spent so far 89% reading (4440 ms) and 10% processing (542 ms)
21/12/01 01:21:40 INFO InternalParquetRecordReader: at row 82592. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 3157
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 3029
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 133 ms. row count = 2848
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 101024 records from 15 columns in 607 ms: 166.43163 rec/ms, 2496.4744 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4946 ms) and 10% processing (607 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 101024. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 93899 records from 15 columns in 563 ms: 166.78331 rec/ms, 2501.7495 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4635 ms) and 10% processing (563 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 93899. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 85440 records from 15 columns in 559 ms: 152.84436 rec/ms, 2292.6655 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4573 ms) and 10% processing (559 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 85440. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3029
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 2848
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 154 ms. row count = 3157
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 96928 records from 15 columns in 581 ms: 166.8296 rec/ms, 2502.444 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4758 ms) and 10% processing (581 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 96928. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 88288 records from 15 columns in 575 ms: 153.54434 rec/ms, 2303.1653 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4699 ms) and 10% processing (575 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 88288. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 104181 records from 15 columns in 625 ms: 166.6896 rec/ms, 2500.344 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5100 ms) and 10% processing (625 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 104181. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 3029
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 2848
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 99957 records from 15 columns in 598 ms: 167.15218 rec/ms, 2507.2827 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4878 ms) and 10% processing (598 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 99957. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 91136 records from 15 columns in 590 ms: 154.4678 rec/ms, 2317.0168 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4821 ms) and 10% processing (590 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 91136. reading next block
21/12/01 01:21:41 INFO HoodieCreateHandle: CreateHandle for partitionPath asia/india/chennai fileID c6d17cfb-140b-485b-b9ef-0cec97daa7e8-2, took 23129 ms.
21/12/01 01:21:41 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads
21/12/01 01:21:41 WARN MemoryStore: Not enough space to cache rdd_2652_0 in memory! (computed 0.0 B so far)
21/12/01 01:21:41 INFO MemoryStore: Memory use = 3.8 MiB (blocks) + 0.0 B (scratch space shared across 0 tasks(s)) = 3.8 MiB. Storage limit = 366.3 MiB.
21/12/01 01:21:41 WARN BlockManager: Persisting block rdd_2652_0 to disk instead.
21/12/01 01:21:41 INFO BlockManagerInfo: Added rdd_2652_0 on disk on 192.168.1.48:56496 (size: 1253.0 B)
21/12/01 01:21:41 INFO MemoryStore: Block rdd_2652_0 stored as bytes in memory (estimated size 1253.0 B, free 362.5 MiB)
21/12/01 01:21:41 INFO Executor: Finished task 0.0 in stage 1167.0 (TID 2209). 1275 bytes result sent to driver
21/12/01 01:21:41 INFO TaskSetManager: Finished task 0.0 in stage 1167.0 (TID 2209) in 98857 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:41 INFO TaskSchedulerImpl: Removed TaskSet 1167.0, whose tasks have all completed, from pool
21/12/01 01:21:41 INFO DAGScheduler: ResultStage 1167 (sum at DeltaSync.java:519) finished in 98.934 s
21/12/01 01:21:41 INFO DAGScheduler: Job 793 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:41 INFO TaskSchedulerImpl: Killing all running tasks in stage 1167: Stage finished
21/12/01 01:21:41 INFO DAGScheduler: Job 793 finished: sum at DeltaSync.java:519, took 103.210433 s
21/12/01 01:21:41 INFO SparkContext: Starting job: sum at DeltaSync.java:520
21/12/01 01:21:41 INFO DAGScheduler: Got job 803 (sum at DeltaSync.java:520) with 1 output partitions
21/12/01 01:21:41 INFO DAGScheduler: Final stage: ResultStage 1185 (sum at DeltaSync.java:520)
21/12/01 01:21:41 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1184)
21/12/01 01:21:41 INFO DAGScheduler: Missing parents: List()
21/12/01 01:21:41 INFO DAGScheduler: Submitting ResultStage 1185 (MapPartitionsRDD[2705] at mapToDouble at DeltaSync.java:520), which has no missing parents
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 2848
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 143 ms. row count = 3029
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 93984 records from 15 columns in 603 ms: 155.8607 rec/ms, 2337.9104 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (4947 ms) and 10% processing (603 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 93984. reading next block
21/12/01 01:21:41 INFO MemoryStore: Block broadcast_1097 stored as values in memory (estimated size 513.8 KiB, free 362.0 MiB)
21/12/01 01:21:41 INFO MemoryStore: Block broadcast_1097_piece0 stored as bytes in memory (estimated size 179.8 KiB, free 361.8 MiB)
21/12/01 01:21:41 INFO BlockManagerInfo: Added broadcast_1097_piece0 in memory on 192.168.1.48:56496 (size: 179.8 KiB, free: 365.2 MiB)
21/12/01 01:21:41 INFO SparkContext: Created broadcast 1097 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:41 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1185 (MapPartitionsRDD[2705] at mapToDouble at DeltaSync.java:520) (first 15 tasks are for partitions Vector(0))
21/12/01 01:21:41 INFO TaskSchedulerImpl: Adding task set 1185.0 with 1 tasks resource profile 0
21/12/01 01:21:41 INFO TaskSetManager: Starting task 0.0 in stage 1185.0 (TID 2225) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:21:41 INFO Executor: Running task 0.0 in stage 1185.0 (TID 2225)
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 102986 records from 15 columns in 611 ms: 168.55319 rec/ms, 2528.2979 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5021 ms) and 10% processing (611 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 102986. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 284 ms. row count = 3157
21/12/01 01:21:41 INFO BlockManager: Found block rdd_2652_0 locally
21/12/01 01:21:41 INFO Executor: Finished task 0.0 in stage 1185.0 (TID 2225). 845 bytes result sent to driver
21/12/01 01:21:41 INFO TaskSetManager: Finished task 0.0 in stage 1185.0 (TID 2225) in 21 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:41 INFO TaskSchedulerImpl: Removed TaskSet 1185.0, whose tasks have all completed, from pool
21/12/01 01:21:41 INFO DAGScheduler: ResultStage 1185 (sum at DeltaSync.java:520) finished in 0.100 s
21/12/01 01:21:41 INFO DAGScheduler: Job 803 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:21:41 INFO TaskSchedulerImpl: Killing all running tasks in stage 1185: Stage finished
21/12/01 01:21:41 INFO DAGScheduler: Job 803 finished: sum at DeltaSync.java:520, took 0.100587 s
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 107338 records from 15 columns in 640 ms: 167.71562 rec/ms, 2515.7344 cell/ms
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5384 ms) and 10% processing (640 ms)
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 107338. reading next block
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 2848
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 124 ms. row count = 3029 | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 96832 records from 15 columns in 615 ms: 157.45041 rec/ms, 2361.756 cell/ms | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5073 ms) and 10% processing (615 ms) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 96832. reading next block | |
21/12/01 01:21:41 INFO SparkContext: Starting job: collect at SparkRDDWriteClient.java:123 | |
21/12/01 01:21:41 INFO DAGScheduler: Got job 804 (collect at SparkRDDWriteClient.java:123) with 1 output partitions | |
21/12/01 01:21:41 INFO DAGScheduler: Final stage: ResultStage 1187 (collect at SparkRDDWriteClient.java:123) | |
21/12/01 01:21:41 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1186) | |
21/12/01 01:21:41 INFO DAGScheduler: Missing parents: List() | |
21/12/01 01:21:41 INFO DAGScheduler: Submitting ResultStage 1187 (MapPartitionsRDD[2707] at map at SparkRDDWriteClient.java:123), which has no missing parents | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 106015 records from 15 columns in 624 ms: 169.89583 rec/ms, 2548.4375 cell/ms | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5145 ms) and 10% processing (624 ms) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 106015. reading next block | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3157 | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 110495 records from 15 columns in 653 ms: 169.21133 rec/ms, 2538.17 cell/ms | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5512 ms) and 10% processing (653 ms) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 110495. reading next block | |
21/12/01 01:21:41 INFO MemoryStore: Block broadcast_1098 stored as values in memory (estimated size 513.6 KiB, free 361.3 MiB) | |
21/12/01 01:21:41 INFO MemoryStore: Block broadcast_1098_piece0 stored as bytes in memory (estimated size 179.7 KiB, free 361.1 MiB) | |
21/12/01 01:21:41 INFO BlockManagerInfo: Added broadcast_1098_piece0 in memory on 192.168.1.48:56496 (size: 179.7 KiB, free: 365.0 MiB) | |
21/12/01 01:21:41 INFO SparkContext: Created broadcast 1098 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:41 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1187 (MapPartitionsRDD[2707] at map at SparkRDDWriteClient.java:123) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:41 INFO TaskSchedulerImpl: Adding task set 1187.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:41 INFO TaskSetManager: Starting task 0.0 in stage 1187.0 (TID 2226) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:41 INFO Executor: Running task 0.0 in stage 1187.0 (TID 2226) | |
21/12/01 01:21:41 INFO BlockManager: Found block rdd_2652_0 locally | |
21/12/01 01:21:41 INFO Executor: Finished task 0.0 in stage 1187.0 (TID 2226). 1590 bytes result sent to driver | |
21/12/01 01:21:41 INFO TaskSetManager: Finished task 0.0 in stage 1187.0 (TID 2226) in 18 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:41 INFO TaskSchedulerImpl: Removed TaskSet 1187.0, whose tasks have all completed, from pool | |
21/12/01 01:21:41 INFO DAGScheduler: ResultStage 1187 (collect at SparkRDDWriteClient.java:123) finished in 0.079 s | |
21/12/01 01:21:41 INFO DAGScheduler: Job 804 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:21:41 INFO TaskSchedulerImpl: Killing all running tasks in stage 1187: Stage finished | |
21/12/01 01:21:41 INFO DAGScheduler: Job 804 finished: collect at SparkRDDWriteClient.java:123, took 0.079476 s | |
21/12/01 01:21:41 INFO AbstractHoodieWriteClient: Committing 20211201011944112 action commit | |
21/12/01 01:21:41 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 131 ms. row count = 2848 | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3029 | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 141 ms. row count = 3157 | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 99680 records from 15 columns in 656 ms: 151.95122 rec/ms, 2279.2683 cell/ms | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 88% reading (5204 ms) and 11% processing (656 ms) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 99680. reading next block | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 109044 records from 15 columns in 666 ms: 163.72974 rec/ms, 2455.946 cell/ms | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 88% reading (5268 ms) and 11% processing (666 ms) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 109044. reading next block | |
21/12/01 01:21:41 INFO BlockManagerInfo: Removed broadcast_1098_piece0 on 192.168.1.48:56496 in memory (size: 179.7 KiB, free: 365.2 MiB) | |
21/12/01 01:21:41 INFO BlockManagerInfo: Removed broadcast_1097_piece0 on 192.168.1.48:56496 in memory (size: 179.8 KiB, free: 365.4 MiB) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: Assembled and processed 113652 records from 15 columns in 672 ms: 169.125 rec/ms, 2536.875 cell/ms | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: time spent so far 89% reading (5653 ms) and 10% processing (672 ms) | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: at row 113652. reading next block | |
21/12/01 01:21:41 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3029 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 112073 records from 15 columns in 684 ms: 163.84941 rec/ms, 2457.7412 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5389 ms) and 11% processing (684 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 112073. reading next block | |
21/12/01 01:21:42 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 133 ms. row count = 3157 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 116809 records from 15 columns in 690 ms: 169.2884 rec/ms, 2539.3262 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 89% reading (5786 ms) and 10% processing (690 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 116809. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 223 ms. row count = 2848 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 102528 records from 15 columns in 672 ms: 152.57143 rec/ms, 2288.5715 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5427 ms) and 11% processing (672 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 102528. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 3029 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 115102 records from 15 columns in 701 ms: 164.19687 rec/ms, 2462.953 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5508 ms) and 11% processing (701 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 115102. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 3157 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 119966 records from 15 columns in 708 ms: 169.4435 rec/ms, 2541.6526 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 89% reading (5908 ms) and 10% processing (708 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 119966. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 134 ms. row count = 2848 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 105376 records from 15 columns in 688 ms: 153.1628 rec/ms, 2297.442 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5561 ms) and 11% processing (688 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 105376. reading next block | |
21/12/01 01:21:42 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:42 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3029 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 118131 records from 15 columns in 719 ms: 164.29903 rec/ms, 2464.4854 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5638 ms) and 11% processing (719 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 118131. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 3157 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 123123 records from 15 columns in 726 ms: 169.59091 rec/ms, 2543.8635 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 89% reading (6034 ms) and 10% processing (726 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 123123. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 2848 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 108224 records from 15 columns in 705 ms: 153.50922 rec/ms, 2302.6382 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5691 ms) and 11% processing (705 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 108224. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 270 ms. row count = 3029 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 240 ms. row count = 3157 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 121160 records from 15 columns in 737 ms: 164.3962 rec/ms, 2465.943 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (5908 ms) and 11% processing (737 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 121160. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 126280 records from 15 columns in 744 ms: 169.73119 rec/ms, 2545.9678 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 89% reading (6274 ms) and 10% processing (744 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 126280. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 374 ms. row count = 2848 | |
21/12/01 01:21:42 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]} | |
21/12/01 01:21:42 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 198 ms. row count = 3029 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 198 ms. row count = 3157 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 111072 records from 15 columns in 722 ms: 153.83934 rec/ms, 2307.59 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 89% reading (6065 ms) and 10% processing (722 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 111072. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 124189 records from 15 columns in 755 ms: 164.48874 rec/ms, 2467.331 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 88% reading (6106 ms) and 11% processing (755 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 124189. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: Assembled and processed 129437 records from 15 columns in 763 ms: 169.6422 rec/ms, 2544.633 cell/ms | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: time spent so far 89% reading (6472 ms) and 10% processing (763 ms) | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: at row 129437. reading next block | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 175 ms. row count = 2848 | |
21/12/01 01:21:42 INFO InternalParquetRecordReader: block read in memory in 170 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 189 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 113920 records from 15 columns in 738 ms: 154.36314 rec/ms, 2315.4473 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6240 ms) and 10% processing (738 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 113920. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 132594 records from 15 columns in 781 ms: 169.77464 rec/ms, 2546.6196 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6642 ms) and 10% processing (781 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 132594. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 127218 records from 15 columns in 774 ms: 164.36433 rec/ms, 2465.465 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6295 ms) and 10% processing (774 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 127218. reading next block | |
21/12/01 01:21:43 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 2848 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 125 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 116768 records from 15 columns in 757 ms: 154.25099 rec/ms, 2313.765 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6360 ms) and 10% processing (757 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 116768. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 150 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 135751 records from 15 columns in 822 ms: 165.1472 rec/ms, 2477.208 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6767 ms) and 10% processing (822 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 135751. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 130247 records from 15 columns in 791 ms: 164.6612 rec/ms, 2469.9177 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6445 ms) and 10% processing (791 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 130247. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 2848 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 119616 records from 15 columns in 774 ms: 154.54263 rec/ms, 2318.1396 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6487 ms) and 10% processing (774 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 119616. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 138908 records from 15 columns in 840 ms: 165.36667 rec/ms, 2480.5 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6887 ms) and 10% processing (840 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 138908. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 133276 records from 15 columns in 808 ms: 164.94554 rec/ms, 2474.183 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6566 ms) and 10% processing (808 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 133276. reading next block | |
21/12/01 01:21:43 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:43 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 141 ms. row count = 2848 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 131 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 122464 records from 15 columns in 798 ms: 153.46365 rec/ms, 2301.9548 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6628 ms) and 10% processing (798 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 122464. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 142065 records from 15 columns in 858 ms: 165.57692 rec/ms, 2483.6538 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (7018 ms) and 10% processing (858 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 142065. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 136305 records from 15 columns in 826 ms: 165.01816 rec/ms, 2475.2725 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6692 ms) and 10% processing (826 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 136305. reading next block | |
21/12/01 01:21:43 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 124 ms. row count = 2848 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 125312 records from 15 columns in 815 ms: 153.75705 rec/ms, 2306.3557 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6752 ms) and 10% processing (815 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 125312. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 145222 records from 15 columns in 876 ms: 165.77853 rec/ms, 2486.678 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (7148 ms) and 10% processing (876 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 145222. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 156 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 139334 records from 15 columns in 843 ms: 165.28351 rec/ms, 2479.2527 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6848 ms) and 10% processing (843 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 139334. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 148379 records from 15 columns in 894 ms: 165.97203 rec/ms, 2489.5806 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (7269 ms) and 10% processing (894 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 148379. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 142363 records from 15 columns in 860 ms: 165.53838 rec/ms, 2483.0757 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6968 ms) and 10% processing (860 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 142363. reading next block | |
21/12/01 01:21:43 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:43 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST | |
21/12/01 01:21:43 INFO FileSystemViewManager: Creating remote first table view | |
21/12/01 01:21:43 INFO CommitUtils: Creating metadata for CLUSTER numWriteStats:3numReplaceFileIds:0 | |
21/12/01 01:21:43 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 237 ms. row count = 2848 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 128160 records from 15 columns in 831 ms: 154.22383 rec/ms, 2313.3574 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (6989 ms) and 10% processing (831 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 128160. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 124 ms. row count = 3157 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 151536 records from 15 columns in 911 ms: 166.34029 rec/ms, 2495.1042 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (7393 ms) and 10% processing (911 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 151536. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 3029 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 145392 records from 15 columns in 877 ms: 165.78336 rec/ms, 2486.7502 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 88% reading (7094 ms) and 11% processing (877 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 145392. reading next block | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848 | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: Assembled and processed 131008 records from 15 columns in 847 ms: 154.67296 rec/ms, 2320.0945 cell/ms | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: time spent so far 89% reading (7110 ms) and 10% processing (847 ms) | |
21/12/01 01:21:43 INFO InternalParquetRecordReader: at row 131008. reading next block | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 3029 | |
21/12/01 01:21:44 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 148421 records from 15 columns in 895 ms: 165.83353 rec/ms, 2487.5027 cell/ms | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7213 ms) and 11% processing (895 ms) | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 148421. reading next block | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 199 ms. row count = 3157 | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 2848 | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 154693 records from 15 columns in 929 ms: 166.51561 rec/ms, 2497.7341 cell/ms | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7592 ms) and 10% processing (929 ms) | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 154693. reading next block | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 133856 records from 15 columns in 863 ms: 155.10545 rec/ms, 2326.5818 cell/ms | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7232 ms) and 10% processing (863 ms) | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 133856. reading next block | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3029 | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 151450 records from 15 columns in 912 ms: 166.0636 rec/ms, 2490.9539 cell/ms | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7336 ms) and 11% processing (912 ms) | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 151450. reading next block | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3157 | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 133 ms. row count = 2848 | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 157850 records from 15 columns in 963 ms: 163.91486 rec/ms, 2458.7227 cell/ms | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7715 ms) and 11% processing (963 ms) | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 157850. reading next block | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 136704 records from 15 columns in 879 ms: 155.52219 rec/ms, 2332.8328 cell/ms | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7365 ms) and 10% processing (879 ms) | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 136704. reading next block | |
21/12/01 01:21:44 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:44 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 3157 | |
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 188 ms. row count = 3029
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 2848
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 161007 records from 15 columns in 981 ms: 164.12538 rec/ms, 2461.8806 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7837 ms) and 11% processing (981 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 161007. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 154479 records from 15 columns in 930 ms: 166.10645 rec/ms, 2491.5967 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7524 ms) and 11% processing (930 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 154479. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 139552 records from 15 columns in 895 ms: 155.92403 rec/ms, 2338.8604 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7495 ms) and 10% processing (895 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 139552. reading next block
21/12/01 01:21:44 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]}
21/12/01 01:21:44 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 3029
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 132 ms. row count = 3157
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 2848
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 157508 records from 15 columns in 947 ms: 166.32312 rec/ms, 2494.847 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7646 ms) and 11% processing (947 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 157508. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 164164 records from 15 columns in 999 ms: 164.32832 rec/ms, 2464.9248 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7969 ms) and 11% processing (999 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 164164. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 142400 records from 15 columns in 911 ms: 156.31175 rec/ms, 2344.6763 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7625 ms) and 10% processing (911 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 142400. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 3029
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 3157
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 125 ms. row count = 2848
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 160537 records from 15 columns in 965 ms: 166.35959 rec/ms, 2495.3938 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7772 ms) and 11% processing (965 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 160537. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 167321 records from 15 columns in 1017 ms: 164.5241 rec/ms, 2467.8613 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (8098 ms) and 11% processing (1017 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 167321. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 145248 records from 15 columns in 928 ms: 156.51724 rec/ms, 2347.7585 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7750 ms) and 10% processing (928 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 145248. reading next block
21/12/01 01:21:44 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3029
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3157
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 2848
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 163566 records from 15 columns in 982 ms: 166.56415 rec/ms, 2498.4624 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (7893 ms) and 11% processing (982 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 163566. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 170478 records from 15 columns in 1035 ms: 164.71304 rec/ms, 2470.6956 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 88% reading (8221 ms) and 11% processing (1035 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 170478. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: Assembled and processed 148096 records from 15 columns in 944 ms: 156.88136 rec/ms, 2353.2205 cell/ms
21/12/01 01:21:44 INFO InternalParquetRecordReader: time spent so far 89% reading (7878 ms) and 10% processing (944 ms)
21/12/01 01:21:44 INFO InternalParquetRecordReader: at row 148096. reading next block
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3029
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3157
21/12/01 01:21:44 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 166595 records from 15 columns in 1000 ms: 166.595 rec/ms, 2498.925 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8021 ms) and 11% processing (1000 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 166595. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 173635 records from 15 columns in 1054 ms: 164.73909 rec/ms, 2471.0864 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8344 ms) and 11% processing (1054 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 173635. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 150944 records from 15 columns in 960 ms: 157.23334 rec/ms, 2358.5 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (7999 ms) and 10% processing (960 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 150944. reading next block
21/12/01 01:21:45 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:21:45 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3157
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 139 ms. row count = 3029
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 176792 records from 15 columns in 1072 ms: 164.91791 rec/ms, 2473.7686 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8467 ms) and 11% processing (1072 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 176792. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 169624 records from 15 columns in 1017 ms: 166.78859 rec/ms, 2501.8289 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8160 ms) and 11% processing (1017 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 169624. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 222 ms. row count = 2848
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 153792 records from 15 columns in 977 ms: 157.41249 rec/ms, 2361.1873 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (8221 ms) and 10% processing (977 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 153792. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3157
21/12/01 01:21:45 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 129 ms. row count = 3029
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 179949 records from 15 columns in 1105 ms: 162.84978 rec/ms, 2442.7466 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8597 ms) and 11% processing (1105 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 179949. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 172653 records from 15 columns in 1034 ms: 166.97581 rec/ms, 2504.6375 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8289 ms) and 11% processing (1034 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 172653. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 2848
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 156640 records from 15 columns in 993 ms: 157.74422 rec/ms, 2366.163 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (8343 ms) and 10% processing (993 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 156640. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 123 ms. row count = 3157
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3029
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 183106 records from 15 columns in 1123 ms: 163.05075 rec/ms, 2445.7615 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8720 ms) and 11% processing (1123 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 183106. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 175682 records from 15 columns in 1050 ms: 167.3162 rec/ms, 2509.743 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8410 ms) and 11% processing (1050 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 175682. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 2848
21/12/01 01:21:45 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:45 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST
21/12/01 01:21:45 INFO FileSystemViewManager: Creating remote first table view
21/12/01 01:21:45 INFO AbstractHoodieWriteClient: Committing 20211201011944112 action commit
21/12/01 01:21:45 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/dir/exists?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011944112)
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 159488 records from 15 columns in 1009 ms: 158.06541 rec/ms, 2370.9812 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (8460 ms) and 10% processing (1009 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 159488. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 126 ms. row count = 3157
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 186263 records from 15 columns in 1141 ms: 163.24539 rec/ms, 2448.681 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8846 ms) and 11% processing (1141 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 186263. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 149 ms. row count = 3029
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 178711 records from 15 columns in 1067 ms: 167.48923 rec/ms, 2512.3384 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8559 ms) and 11% processing (1067 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 178711. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 125 ms. row count = 2848
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 162336 records from 15 columns in 1025 ms: 158.37659 rec/ms, 2375.6487 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (8585 ms) and 10% processing (1025 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 162336. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3157
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 3029
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 189420 records from 15 columns in 1159 ms: 163.43399 rec/ms, 2451.51 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8976 ms) and 11% processing (1159 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 189420. reading next block
21/12/01 01:21:45 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create-and-merge?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011944112)
21/12/01 01:21:45 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 181740 records from 15 columns in 1084 ms: 167.65683 rec/ms, 2514.8523 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8679 ms) and 11% processing (1084 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 181740. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 2848
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 165184 records from 15 columns in 1041 ms: 158.67819 rec/ms, 2380.1729 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (8704 ms) and 10% processing (1041 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 165184. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3157
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 3029
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 192577 records from 15 columns in 1177 ms: 163.61682 rec/ms, 2454.2524 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (9104 ms) and 11% processing (1177 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 192577. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 184769 records from 15 columns in 1101 ms: 167.81926 rec/ms, 2517.2888 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 88% reading (8807 ms) and 11% processing (1101 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 184769. reading next block
21/12/01 01:21:45 INFO InternalParquetRecordReader: block read in memory in 118 ms. row count = 2848
21/12/01 01:21:45 INFO InternalParquetRecordReader: Assembled and processed 168032 records from 15 columns in 1057 ms: 158.97067 rec/ms, 2384.56 cell/ms
21/12/01 01:21:45 INFO InternalParquetRecordReader: time spent so far 89% reading (8822 ms) and 10% processing (1057 ms)
21/12/01 01:21:45 INFO InternalParquetRecordReader: at row 168032. reading next block
21/12/01 01:21:46 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 3157
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 130 ms. row count = 3029
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 195734 records from 15 columns in 1197 ms: 163.52046 rec/ms, 2452.8071 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9231 ms) and 11% processing (1197 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 195734. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 117 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 187798 records from 15 columns in 1125 ms: 166.93155 rec/ms, 2503.9734 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (8937 ms) and 11% processing (1125 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 187798. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 170880 records from 15 columns in 1076 ms: 158.81041 rec/ms, 2382.1562 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 89% reading (8939 ms) and 10% processing (1076 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 170880. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 127 ms. row count = 3157
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 3029
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 119 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 198891 records from 15 columns in 1215 ms: 163.69629 rec/ms, 2455.4443 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9358 ms) and 11% processing (1215 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 198891. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 190827 records from 15 columns in 1142 ms: 167.09895 rec/ms, 2506.4841 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9058 ms) and 11% processing (1142 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 190827. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 173728 records from 15 columns in 1092 ms: 159.09157 rec/ms, 2386.3735 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 89% reading (9058 ms) and 10% processing (1092 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 173728. reading next block
21/12/01 01:21:46 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 114 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 1002
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 3029
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 176576 records from 15 columns in 1122 ms: 157.37611 rec/ms, 2360.6416 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 89% reading (9172 ms) and 10% processing (1122 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 176576. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 193856 records from 15 columns in 1159 ms: 167.26143 rec/ms, 2508.9214 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9180 ms) and 11% processing (1159 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 193856. reading next block
21/12/01 01:21:46 INFO Executor: Finished task 0.0 in stage 1180.0 (TID 2222). 1000 bytes result sent to driver
21/12/01 01:21:46 INFO TaskSetManager: Finished task 0.0 in stage 1180.0 (TID 2222) in 12029 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:46 INFO TaskSchedulerImpl: Removed TaskSet 1180.0, whose tasks have all completed, from pool
21/12/01 01:21:46 INFO DAGScheduler: ShuffleMapStage 1180 (sortBy at GlobalSortPartitioner.java:41) finished in 12.091 s
21/12/01 01:21:46 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:21:46 INFO DAGScheduler: running: Set(ShuffleMapStage 1181, ShuffleMapStage 1182)
21/12/01 01:21:46 INFO DAGScheduler: waiting: Set(ResultStage 1183)
21/12/01 01:21:46 INFO DAGScheduler: failed: Set()
21/12/01 01:21:46 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 131 ms. row count = 3029
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 179424 records from 15 columns in 1139 ms: 157.52765 rec/ms, 2362.9148 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 89% reading (9294 ms) and 10% processing (1139 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 179424. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 196885 records from 15 columns in 1176 ms: 167.41922 rec/ms, 2511.2883 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9311 ms) and 11% processing (1176 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 196885. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 122 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 3029
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 182272 records from 15 columns in 1160 ms: 157.13103 rec/ms, 2356.9656 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 89% reading (9416 ms) and 10% processing (1160 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 182272. reading next block
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 199914 records from 15 columns in 1193 ms: 167.57251 rec/ms, 2513.5876 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9431 ms) and 11% processing (1193 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 199914. reading next block
21/12/01 01:21:46 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 116 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 128 ms. row count = 428
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 185120 records from 15 columns in 1177 ms: 157.28122 rec/ms, 2359.2183 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 89% reading (9532 ms) and 10% processing (1177 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 185120. reading next block
21/12/01 01:21:46 INFO Executor: Finished task 0.0 in stage 1182.0 (TID 2224). 1000 bytes result sent to driver
21/12/01 01:21:46 INFO TaskSetManager: Finished task 0.0 in stage 1182.0 (TID 2224) in 12339 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:46 INFO TaskSchedulerImpl: Removed TaskSet 1182.0, whose tasks have all completed, from pool
21/12/01 01:21:46 INFO DAGScheduler: ShuffleMapStage 1182 (sortBy at GlobalSortPartitioner.java:41) finished in 12.400 s
21/12/01 01:21:46 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:21:46 INFO DAGScheduler: running: Set(ShuffleMapStage 1181)
21/12/01 01:21:46 INFO DAGScheduler: waiting: Set(ResultStage 1183)
21/12/01 01:21:46 INFO DAGScheduler: failed: Set()
21/12/01 01:21:46 INFO InternalParquetRecordReader: block read in memory in 124 ms. row count = 2848
21/12/01 01:21:46 INFO InternalParquetRecordReader: Assembled and processed 187968 records from 15 columns in 1194 ms: 157.42714 rec/ms, 2361.407 cell/ms
21/12/01 01:21:46 INFO InternalParquetRecordReader: time spent so far 88% reading (9656 ms) and 11% processing (1194 ms)
21/12/01 01:21:46 INFO InternalParquetRecordReader: at row 187968. reading next block
21/12/01 01:21:46 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:47 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 2848
21/12/01 01:21:47 INFO InternalParquetRecordReader: Assembled and processed 190816 records from 15 columns in 1210 ms: 157.69917 rec/ms, 2365.4875 cell/ms
21/12/01 01:21:47 INFO InternalParquetRecordReader: time spent so far 88% reading (9777 ms) and 11% processing (1210 ms)
21/12/01 01:21:47 INFO InternalParquetRecordReader: at row 190816. reading next block
21/12/01 01:21:47 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]}
21/12/01 01:21:47 INFO InternalParquetRecordReader: block read in memory in 118 ms. row count = 2848
21/12/01 01:21:47 INFO InternalParquetRecordReader: Assembled and processed 193664 records from 15 columns in 1226 ms: 157.96411 rec/ms, 2369.4617 cell/ms
21/12/01 01:21:47 INFO InternalParquetRecordReader: time spent so far 88% reading (9895 ms) and 11% processing (1226 ms)
21/12/01 01:21:47 INFO InternalParquetRecordReader: at row 193664. reading next block
21/12/01 01:21:47 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]}
21/12/01 01:21:47 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:21:47 INFO InternalParquetRecordReader: block read in memory in 125 ms. row count = 2848
21/12/01 01:21:47 INFO InternalParquetRecordReader: Assembled and processed 196512 records from 15 columns in 1243 ms: 158.09492 rec/ms, 2371.424 cell/ms
21/12/01 01:21:47 INFO InternalParquetRecordReader: time spent so far 88% reading (10020 ms) and 11% processing (1243 ms)
21/12/01 01:21:47 INFO InternalParquetRecordReader: at row 196512. reading next block
21/12/01 01:21:47 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:21:47 INFO InternalParquetRecordReader: block read in memory in 120 ms. row count = 2848
21/12/01 01:21:47 INFO InternalParquetRecordReader: Assembled and processed 199360 records from 15 columns in 1260 ms: 158.22223 rec/ms, 2373.3333 cell/ms
21/12/01 01:21:47 INFO InternalParquetRecordReader: time spent so far 88% reading (10140 ms) and 11% processing (1260 ms)
21/12/01 01:21:47 INFO InternalParquetRecordReader: at row 199360. reading next block
21/12/01 01:21:47 INFO InternalParquetRecordReader: block read in memory in 121 ms. row count = 405
21/12/01 01:21:47 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:21:47 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:47 INFO Executor: Finished task 0.0 in stage 1181.0 (TID 2223). 1000 bytes result sent to driver
21/12/01 01:21:47 INFO TaskSetManager: Finished task 0.0 in stage 1181.0 (TID 2223) in 13239 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:21:47 INFO TaskSchedulerImpl: Removed TaskSet 1181.0, whose tasks have all completed, from pool
21/12/01 01:21:47 INFO DAGScheduler: ShuffleMapStage 1181 (sortBy at GlobalSortPartitioner.java:41) finished in 13.301 s
21/12/01 01:21:47 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:21:47 INFO DAGScheduler: running: Set()
21/12/01 01:21:47 INFO DAGScheduler: waiting: Set(ResultStage 1183)
21/12/01 01:21:47 INFO DAGScheduler: failed: Set()
21/12/01 01:21:47 INFO DAGScheduler: Submitting ResultStage 1183 (MapPartitionsRDD[2704] at map at SparkExecuteClusteringCommitActionExecutor.java:85), which has no missing parents
21/12/01 01:21:47 INFO MemoryStore: Block broadcast_1099 stored as values in memory (estimated size 554.0 KiB, free 362.0 MiB)
21/12/01 01:21:47 INFO MemoryStore: Block broadcast_1099_piece0 stored as bytes in memory (estimated size 189.5 KiB, free 361.8 MiB)
21/12/01 01:21:47 INFO BlockManagerInfo: Added broadcast_1099_piece0 in memory on 192.168.1.48:56496 (size: 189.5 KiB, free: 365.2 MiB)
21/12/01 01:21:47 INFO SparkContext: Created broadcast 1099 from broadcast at DAGScheduler.scala:1427
21/12/01 01:21:47 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 1183 (MapPartitionsRDD[2704] at map at SparkExecuteClusteringCommitActionExecutor.java:85) (first 15 tasks are for partitions Vector(0, 1, 2))
21/12/01 01:21:47 INFO TaskSchedulerImpl: Adding task set 1183.0 with 3 tasks resource profile 0
21/12/01 01:21:47 INFO TaskSetManager: Starting task 0.0 in stage 1183.0 (TID 2227) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4380 bytes) taskResourceAssignments Map()
21/12/01 01:21:47 INFO TaskSetManager: Starting task 1.0 in stage 1183.0 (TID 2228) (192.168.1.48, executor driver, partition 1, NODE_LOCAL, 4380 bytes) taskResourceAssignments Map()
21/12/01 01:21:47 INFO TaskSetManager: Starting task 2.0 in stage 1183.0 (TID 2229) (192.168.1.48, executor driver, partition 2, NODE_LOCAL, 4380 bytes) taskResourceAssignments Map()
21/12/01 01:21:47 INFO Executor: Running task 1.0 in stage 1183.0 (TID 2228)
21/12/01 01:21:47 INFO Executor: Running task 2.0 in stage 1183.0 (TID 2229)
21/12/01 01:21:47 INFO Executor: Running task 0.0 in stage 1183.0 (TID 2227)
21/12/01 01:21:47 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:21:47 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:21:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:21:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:21:47 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:21:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:21:47 INFO BlockManagerInfo: Removed broadcast_1094_piece0 on 192.168.1.48:56496 in memory (size: 179.3 KiB, free: 365.4 MiB)
21/12/01 01:21:47 INFO BlockManagerInfo: Removed broadcast_1096_piece0 on 192.168.1.48:56496 in memory (size: 179.3 KiB, free: 365.5 MiB)
21/12/01 01:21:47 INFO BlockManagerInfo: Removed broadcast_1095_piece0 on 192.168.1.48:56496 in memory (size: 179.3 KiB, free: 365.7 MiB)
21/12/01 01:21:47 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:21:48 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:49 INFO BlockManagerInfo: Removed broadcast_1080_piece0 on 192.168.1.48:56496 in memory (size: 189.5 KiB, free: 365.9 MiB)
21/12/01 01:21:49 INFO BlockManagerInfo: Removed broadcast_1082_piece0 on 192.168.1.48:56496 in memory (size: 179.8 KiB, free: 366.1 MiB)
21/12/01 01:21:49 INFO BlockManager: Removing RDD 2637
21/12/01 01:21:49 INFO ExternalSorter: Thread 12438 spilling in-memory map of 126.8 MiB to disk (1 time so far)
21/12/01 01:21:49 INFO ExternalSorter: Thread 14195 spilling in-memory map of 126.7 MiB to disk (1 time so far)
21/12/01 01:21:49 INFO ExternalSorter: Thread 12437 spilling in-memory map of 129.0 MiB to disk (1 time so far)
21/12/01 01:21:49 INFO HoodieTableMetadataUtil: Updating at 20211201011944112 from Commit/CLUSTER. #partitions_updated=4
21/12/01 01:21:49 INFO HoodieTableMetadataUtil: Loading file groups for metadata table partition files
21/12/01 01:21:49 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]}
21/12/01 01:21:49 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:21:49 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:21:49 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:21:49 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=13, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0
21/12/01 01:21:49 INFO AbstractHoodieClient: Embedded Timeline Server is disabled. Not starting timeline service
21/12/01 01:21:49 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:49 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:21:50 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:50 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:21:50 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011818630__deltacommit__COMPLETED]} | |
21/12/01 01:21:50 INFO AbstractHoodieWriteClient: Generate a new instant time: 20211201011944112 action: deltacommit | |
21/12/01 01:21:50 INFO HoodieHeartbeatClient: Received request to start heartbeat for instant time 20211201011944112 | |
21/12/01 01:21:50 INFO ExternalSorter: Thread 12438 spilling in-memory map of 126.8 MiB to disk (2 times so far) | |
21/12/01 01:21:50 INFO ExternalSorter: Thread 14195 spilling in-memory map of 126.7 MiB to disk (2 times so far) | |
21/12/01 01:21:50 INFO ExternalSorter: Thread 12437 spilling in-memory map of 129.0 MiB to disk (2 times so far) | |
21/12/01 01:21:51 INFO HoodieActiveTimeline: Creating a new instant [==>20211201011944112__deltacommit__REQUESTED] | |
21/12/01 01:21:51 INFO IteratorBasedQueueProducer: starting to buffer records | |
21/12/01 01:21:51 INFO BoundedInMemoryExecutor: starting consumer thread | |
21/12/01 01:21:51 INFO IteratorBasedQueueProducer: starting to buffer records | |
21/12/01 01:21:51 INFO BoundedInMemoryExecutor: starting consumer thread | |
21/12/01 01:21:51 INFO IteratorBasedQueueProducer: starting to buffer records | |
21/12/01 01:21:51 INFO BoundedInMemoryExecutor: starting consumer thread | |
21/12/01 01:21:51 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:51 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=americas%2Funited_states%2Fsan_francisco%2F96ab0122-d7c8-4038-b5d1-9592dfd9e29f-0_1-1183-2228_20211201011347895.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011347895) | |
21/12/01 01:21:51 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011347895 americas/united_states/san_francisco/96ab0122-d7c8-4038-b5d1-9592dfd9e29f-0_1-1183-2228_20211201011347895.parquet.marker.CREATE | |
21/12/01 01:21:51 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=asia%2Findia%2Fchennai%2Fd2e90ef3-db5d-4488-a112-512bd5889d86-0_2-1183-2229_20211201011347895.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011347895) | |
21/12/01 01:21:51 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011347895 asia/india/chennai/d2e90ef3-db5d-4488-a112-512bd5889d86-0_2-1183-2229_20211201011347895.parquet.marker.CREATE | |
21/12/01 01:21:52 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:21:52 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create?markername=americas%2Fbrazil%2Fsao_paulo%2Fc0bf7539-9317-4e0f-b82f-875fc3a17625-0_0-1183-2227_20211201011347895.parquet.marker.CREATE&markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011347895) | |
21/12/01 01:21:52 INFO MarkerHandler: Request: create marker s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011347895 americas/brazil/sao_paulo/c0bf7539-9317-4e0f-b82f-875fc3a17625-0_0-1183-2227_20211201011347895.parquet.marker.CREATE | |
21/12/01 01:21:52 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:52 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:21:52 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__deltacommit__REQUESTED]} | |
21/12/01 01:21:52 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:21:52 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:21:52 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:21:52 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:52 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:52 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__deltacommit__REQUESTED]} | |
21/12/01 01:21:52 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:53 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:53 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now | |
21/12/01 01:21:53 INFO SparkContext: Starting job: countByKey at BaseSparkCommitActionExecutor.java:191 | |
21/12/01 01:21:53 INFO DAGScheduler: Registering RDD 2711 (countByKey at BaseSparkCommitActionExecutor.java:191) as input to shuffle 266 | |
21/12/01 01:21:53 INFO DAGScheduler: Got job 805 (countByKey at BaseSparkCommitActionExecutor.java:191) with 1 output partitions | |
21/12/01 01:21:53 INFO DAGScheduler: Final stage: ResultStage 1189 (countByKey at BaseSparkCommitActionExecutor.java:191) | |
21/12/01 01:21:53 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1188) | |
21/12/01 01:21:53 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1188) | |
21/12/01 01:21:53 INFO DAGScheduler: Submitting ShuffleMapStage 1188 (MapPartitionsRDD[2711] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents | |
21/12/01 01:21:53 INFO MemoryStore: Block broadcast_1100 stored as values in memory (estimated size 10.4 KiB, free 277.2 MiB) | |
21/12/01 01:21:53 INFO MemoryStore: Block broadcast_1100_piece0 stored as bytes in memory (estimated size 5.2 KiB, free 277.2 MiB) | |
21/12/01 01:21:53 INFO BlockManagerInfo: Added broadcast_1100_piece0 in memory on 192.168.1.48:56496 (size: 5.2 KiB, free: 366.1 MiB) | |
21/12/01 01:21:53 INFO SparkContext: Created broadcast 1100 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:53 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1188 (MapPartitionsRDD[2711] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:53 INFO TaskSchedulerImpl: Adding task set 1188.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:53 INFO TaskSetManager: Starting task 0.0 in stage 1188.0 (TID 2230) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:53 INFO Executor: Running task 0.0 in stage 1188.0 (TID 2230) | |
21/12/01 01:21:53 INFO MemoryStore: Block rdd_2709_0 stored as values in memory (estimated size 1337.0 B, free 277.2 MiB) | |
21/12/01 01:21:53 INFO BlockManagerInfo: Added rdd_2709_0 in memory on 192.168.1.48:56496 (size: 1337.0 B, free: 366.1 MiB) | |
21/12/01 01:21:53 INFO Executor: Finished task 0.0 in stage 1188.0 (TID 2230). 1043 bytes result sent to driver | |
21/12/01 01:21:53 INFO TaskSetManager: Finished task 0.0 in stage 1188.0 (TID 2230) in 5 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:53 INFO TaskSchedulerImpl: Removed TaskSet 1188.0, whose tasks have all completed, from pool | |
21/12/01 01:21:53 INFO DAGScheduler: ShuffleMapStage 1188 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.006 s | |
21/12/01 01:21:53 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:21:53 INFO DAGScheduler: running: Set(ResultStage 1183) | |
21/12/01 01:21:53 INFO DAGScheduler: waiting: Set(ResultStage 1189) | |
21/12/01 01:21:53 INFO DAGScheduler: failed: Set() | |
21/12/01 01:21:53 INFO DAGScheduler: Submitting ResultStage 1189 (ShuffledRDD[2712] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents | |
21/12/01 01:21:53 INFO MemoryStore: Block broadcast_1101 stored as values in memory (estimated size 5.6 KiB, free 277.2 MiB) | |
21/12/01 01:21:53 INFO MemoryStore: Block broadcast_1101_piece0 stored as bytes in memory (estimated size 3.2 KiB, free 277.2 MiB) | |
21/12/01 01:21:53 INFO BlockManagerInfo: Added broadcast_1101_piece0 in memory on 192.168.1.48:56496 (size: 3.2 KiB, free: 366.1 MiB) | |
21/12/01 01:21:53 INFO SparkContext: Created broadcast 1101 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:53 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1189 (ShuffledRDD[2712] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:53 INFO TaskSchedulerImpl: Adding task set 1189.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:53 INFO TaskSetManager: Starting task 0.0 in stage 1189.0 (TID 2231) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:53 INFO Executor: Running task 0.0 in stage 1189.0 (TID 2231) | |
21/12/01 01:21:53 INFO ShuffleBlockFetcherIterator: Getting 1 (156.0 B) non-empty blocks including 1 (156.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:21:53 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:21:53 INFO Executor: Finished task 0.0 in stage 1189.0 (TID 2231). 1318 bytes result sent to driver | |
21/12/01 01:21:53 INFO TaskSetManager: Finished task 0.0 in stage 1189.0 (TID 2231) in 3 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:53 INFO TaskSchedulerImpl: Removed TaskSet 1189.0, whose tasks have all completed, from pool | |
21/12/01 01:21:53 INFO DAGScheduler: ResultStage 1189 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.005 s | |
21/12/01 01:21:53 INFO DAGScheduler: Job 805 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:21:53 INFO TaskSchedulerImpl: Killing all running tasks in stage 1189: Stage finished | |
21/12/01 01:21:53 INFO DAGScheduler: Job 805 finished: countByKey at BaseSparkCommitActionExecutor.java:191, took 0.012893 s | |
21/12/01 01:21:53 INFO BaseSparkCommitActionExecutor: Workload profile :WorkloadProfile {globalStat=WorkloadStat {numInserts=0, numUpdates=4}, partitionStat={files=WorkloadStat {numInserts=0, numUpdates=4}}, operationType=UPSERT_PREPPED} | |
21/12/01 01:21:53 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011944112.deltacommit.requested | |
21/12/01 01:21:54 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file americas/united_states/san_francisco/96ab0122-d7c8-4038-b5d1-9592dfd9e29f-0_1-1183-2228_20211201011347895.parquet.marker.CREATE in 2549 ms | |
21/12/01 01:21:54 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file americas/brazil/sao_paulo/c0bf7539-9317-4e0f-b82f-875fc3a17625-0_0-1183-2227_20211201011347895.parquet.marker.CREATE in 2209 ms | |
21/12/01 01:21:54 INFO TimelineServerBasedWriteMarkers: [timeline-server-based] Created marker file asia/india/chennai/d2e90ef3-db5d-4488-a112-512bd5889d86-0_2-1183-2229_20211201011347895.parquet.marker.CREATE in 2499 ms | |
21/12/01 01:21:54 INFO CodecPool: Got brand-new compressor [.gz] | |
21/12/01 01:21:54 INFO CodecPool: Got brand-new compressor [.gz] | |
21/12/01 01:21:54 INFO CodecPool: Got brand-new compressor [.gz] | |
21/12/01 01:21:54 INFO HoodieCreateHandle: New CreateHandle for partition :americas/brazil/sao_paulo with fileId c0bf7539-9317-4e0f-b82f-875fc3a17625-0 | |
21/12/01 01:21:54 INFO HoodieCreateHandle: New CreateHandle for partition :asia/india/chennai with fileId d2e90ef3-db5d-4488-a112-512bd5889d86-0 | |
21/12/01 01:21:54 INFO HoodieCreateHandle: New CreateHandle for partition :americas/united_states/san_francisco with fileId 96ab0122-d7c8-4038-b5d1-9592dfd9e29f-0 | |
21/12/01 01:21:55 INFO BlockManagerInfo: Removed broadcast_1101_piece0 on 192.168.1.48:56496 in memory (size: 3.2 KiB, free: 366.1 MiB) | |
21/12/01 01:21:55 INFO BlockManagerInfo: Removed broadcast_1100_piece0 on 192.168.1.48:56496 in memory (size: 5.2 KiB, free: 366.1 MiB) | |
21/12/01 01:21:55 INFO HoodieActiveTimeline: Created a new file in meta path: s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011944112.deltacommit.inflight | |
21/12/01 01:21:55 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011944112.deltacommit.inflight | |
21/12/01 01:21:55 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:55 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:55 INFO SparkContext: Starting job: collect at SparkRejectUpdateStrategy.java:52 | |
21/12/01 01:21:55 INFO DAGScheduler: Registering RDD 2715 (distinct at SparkRejectUpdateStrategy.java:52) as input to shuffle 267 | |
21/12/01 01:21:55 INFO DAGScheduler: Got job 806 (collect at SparkRejectUpdateStrategy.java:52) with 1 output partitions | |
21/12/01 01:21:55 INFO DAGScheduler: Final stage: ResultStage 1191 (collect at SparkRejectUpdateStrategy.java:52) | |
21/12/01 01:21:55 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1190) | |
21/12/01 01:21:55 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1190) | |
21/12/01 01:21:55 INFO DAGScheduler: Submitting ShuffleMapStage 1190 (MapPartitionsRDD[2715] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents | |
21/12/01 01:21:55 INFO MemoryStore: Block broadcast_1102 stored as values in memory (estimated size 10.5 KiB, free 277.2 MiB) | |
21/12/01 01:21:55 INFO MemoryStore: Block broadcast_1102_piece0 stored as bytes in memory (estimated size 5.1 KiB, free 277.2 MiB) | |
21/12/01 01:21:55 INFO BlockManagerInfo: Added broadcast_1102_piece0 in memory on 192.168.1.48:56496 (size: 5.1 KiB, free: 366.1 MiB) | |
21/12/01 01:21:55 INFO SparkContext: Created broadcast 1102 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:55 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1190 (MapPartitionsRDD[2715] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:55 INFO TaskSchedulerImpl: Adding task set 1190.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:55 INFO TaskSetManager: Starting task 0.0 in stage 1190.0 (TID 2232) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:55 INFO Executor: Running task 0.0 in stage 1190.0 (TID 2232) | |
21/12/01 01:21:55 INFO BlockManager: Found block rdd_2709_0 locally | |
21/12/01 01:21:55 INFO Executor: Finished task 0.0 in stage 1190.0 (TID 2232). 1129 bytes result sent to driver | |
21/12/01 01:21:55 INFO TaskSetManager: Finished task 0.0 in stage 1190.0 (TID 2232) in 4 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:55 INFO TaskSchedulerImpl: Removed TaskSet 1190.0, whose tasks have all completed, from pool | |
21/12/01 01:21:55 INFO DAGScheduler: ShuffleMapStage 1190 (distinct at SparkRejectUpdateStrategy.java:52) finished in 0.006 s | |
21/12/01 01:21:55 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:21:55 INFO DAGScheduler: running: Set(ResultStage 1183) | |
21/12/01 01:21:55 INFO DAGScheduler: waiting: Set(ResultStage 1191) | |
21/12/01 01:21:55 INFO DAGScheduler: failed: Set() | |
21/12/01 01:21:55 INFO DAGScheduler: Submitting ResultStage 1191 (MapPartitionsRDD[2717] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents | |
21/12/01 01:21:55 INFO MemoryStore: Block broadcast_1103 stored as values in memory (estimated size 6.4 KiB, free 277.2 MiB) | |
21/12/01 01:21:55 INFO MemoryStore: Block broadcast_1103_piece0 stored as bytes in memory (estimated size 3.5 KiB, free 277.2 MiB) | |
21/12/01 01:21:55 INFO BlockManagerInfo: Added broadcast_1103_piece0 in memory on 192.168.1.48:56496 (size: 3.5 KiB, free: 366.1 MiB) | |
21/12/01 01:21:55 INFO SparkContext: Created broadcast 1103 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:55 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1191 (MapPartitionsRDD[2717] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:55 INFO TaskSchedulerImpl: Adding task set 1191.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:55 INFO TaskSetManager: Starting task 0.0 in stage 1191.0 (TID 2233) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:55 INFO Executor: Running task 0.0 in stage 1191.0 (TID 2233) | |
21/12/01 01:21:55 INFO ShuffleBlockFetcherIterator: Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:21:55 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:21:55 INFO BlockManagerInfo: Removed broadcast_1102_piece0 on 192.168.1.48:56496 in memory (size: 5.1 KiB, free: 366.1 MiB) | |
21/12/01 01:21:55 INFO Executor: Finished task 0.0 in stage 1191.0 (TID 2233). 1335 bytes result sent to driver | |
21/12/01 01:21:55 INFO TaskSetManager: Finished task 0.0 in stage 1191.0 (TID 2233) in 14 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:55 INFO TaskSchedulerImpl: Removed TaskSet 1191.0, whose tasks have all completed, from pool | |
21/12/01 01:21:55 INFO DAGScheduler: ResultStage 1191 (collect at SparkRejectUpdateStrategy.java:52) finished in 0.015 s | |
21/12/01 01:21:55 INFO DAGScheduler: Job 806 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:21:55 INFO TaskSchedulerImpl: Killing all running tasks in stage 1191: Stage finished | |
21/12/01 01:21:55 INFO DAGScheduler: Job 806 finished: collect at SparkRejectUpdateStrategy.java:52, took 0.022942 s | |
21/12/01 01:21:55 INFO BlockManagerInfo: Removed broadcast_1103_piece0 on 192.168.1.48:56496 in memory (size: 3.5 KiB, free: 366.1 MiB) | |
21/12/01 01:21:56 INFO UpsertPartitioner: AvgRecordSize => 1024 | |
21/12/01 01:21:56 INFO SparkContext: Starting job: collectAsMap at UpsertPartitioner.java:256 | |
21/12/01 01:21:56 INFO DAGScheduler: Got job 807 (collectAsMap at UpsertPartitioner.java:256) with 1 output partitions | |
21/12/01 01:21:56 INFO DAGScheduler: Final stage: ResultStage 1192 (collectAsMap at UpsertPartitioner.java:256) | |
21/12/01 01:21:56 INFO DAGScheduler: Parents of final stage: List() | |
21/12/01 01:21:56 INFO DAGScheduler: Missing parents: List() | |
21/12/01 01:21:56 INFO DAGScheduler: Submitting ResultStage 1192 (MapPartitionsRDD[2719] at mapToPair at UpsertPartitioner.java:255), which has no missing parents | |
21/12/01 01:21:56 INFO MemoryStore: Block broadcast_1104 stored as values in memory (estimated size 316.5 KiB, free 276.9 MiB) | |
21/12/01 01:21:56 INFO MemoryStore: Block broadcast_1104_piece0 stored as bytes in memory (estimated size 110.4 KiB, free 276.8 MiB) | |
21/12/01 01:21:56 INFO BlockManagerInfo: Added broadcast_1104_piece0 in memory on 192.168.1.48:56496 (size: 110.4 KiB, free: 366.0 MiB) | |
21/12/01 01:21:56 INFO SparkContext: Created broadcast 1104 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1192 (MapPartitionsRDD[2719] at mapToPair at UpsertPartitioner.java:255) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:56 INFO TaskSchedulerImpl: Adding task set 1192.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:56 INFO TaskSetManager: Starting task 0.0 in stage 1192.0 (TID 2234) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4338 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:56 INFO Executor: Running task 0.0 in stage 1192.0 (TID 2234) | |
21/12/01 01:21:56 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:21:56 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:21:56 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:21:56 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:56 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:56 INFO AbstractTableFileSystemView: Building file system view for partition (files) | |
21/12/01 01:21:56 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=13, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0 | |
21/12/01 01:21:56 INFO Executor: Finished task 0.0 in stage 1192.0 (TID 2234). 872 bytes result sent to driver | |
21/12/01 01:21:56 INFO TaskSetManager: Finished task 0.0 in stage 1192.0 (TID 2234) in 373 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:56 INFO TaskSchedulerImpl: Removed TaskSet 1192.0, whose tasks have all completed, from pool | |
21/12/01 01:21:56 INFO DAGScheduler: ResultStage 1192 (collectAsMap at UpsertPartitioner.java:256) finished in 0.417 s | |
21/12/01 01:21:56 INFO DAGScheduler: Job 807 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:21:56 INFO TaskSchedulerImpl: Killing all running tasks in stage 1192: Stage finished | |
21/12/01 01:21:56 INFO DAGScheduler: Job 807 finished: collectAsMap at UpsertPartitioner.java:256, took 0.417795 s | |
21/12/01 01:21:56 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:57 INFO BlockManagerInfo: Removed broadcast_1104_piece0 on 192.168.1.48:56496 in memory (size: 110.4 KiB, free: 366.1 MiB) | |
21/12/01 01:21:57 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:57 INFO UpsertPartitioner: Total Buckets :1, buckets info => {0=BucketInfo {bucketType=UPDATE, fileIdPrefix=files-0000, partitionPath=files}}, | |
Partition to insert buckets => {}, | |
UpdateLocations mapped to buckets =>{files-0000=0} | |
21/12/01 01:21:57 INFO BaseSparkCommitActionExecutor: no validators configured. | |
21/12/01 01:21:57 INFO BaseCommitActionExecutor: Auto commit enabled: Committing 20211201011944112 | |
21/12/01 01:21:57 INFO SparkContext: Starting job: collect at BaseSparkCommitActionExecutor.java:274 | |
21/12/01 01:21:57 INFO DAGScheduler: Registering RDD 2720 (mapToPair at BaseSparkCommitActionExecutor.java:225) as input to shuffle 268 | |
21/12/01 01:21:57 INFO DAGScheduler: Got job 808 (collect at BaseSparkCommitActionExecutor.java:274) with 1 output partitions | |
21/12/01 01:21:57 INFO DAGScheduler: Final stage: ResultStage 1194 (collect at BaseSparkCommitActionExecutor.java:274) | |
21/12/01 01:21:57 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1193) | |
21/12/01 01:21:57 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1193) | |
21/12/01 01:21:57 INFO DAGScheduler: Submitting ShuffleMapStage 1193 (MapPartitionsRDD[2720] at mapToPair at BaseSparkCommitActionExecutor.java:225), which has no missing parents | |
21/12/01 01:21:57 INFO MemoryStore: Block broadcast_1105 stored as values in memory (estimated size 321.6 KiB, free 276.9 MiB) | |
21/12/01 01:21:57 INFO MemoryStore: Block broadcast_1105_piece0 stored as bytes in memory (estimated size 113.3 KiB, free 276.8 MiB) | |
21/12/01 01:21:57 INFO BlockManagerInfo: Added broadcast_1105_piece0 in memory on 192.168.1.48:56496 (size: 113.3 KiB, free: 366.0 MiB) | |
21/12/01 01:21:57 INFO SparkContext: Created broadcast 1105 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:57 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1193 (MapPartitionsRDD[2720] at mapToPair at BaseSparkCommitActionExecutor.java:225) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:57 INFO TaskSchedulerImpl: Adding task set 1193.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:57 INFO TaskSetManager: Starting task 0.0 in stage 1193.0 (TID 2235) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:57 INFO Executor: Running task 0.0 in stage 1193.0 (TID 2235) | |
21/12/01 01:21:57 INFO BlockManager: Found block rdd_2709_0 locally | |
21/12/01 01:21:57 INFO Executor: Finished task 0.0 in stage 1193.0 (TID 2235). 1043 bytes result sent to driver | |
21/12/01 01:21:57 INFO TaskSetManager: Finished task 0.0 in stage 1193.0 (TID 2235) in 18 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:21:57 INFO TaskSchedulerImpl: Removed TaskSet 1193.0, whose tasks have all completed, from pool | |
21/12/01 01:21:57 INFO DAGScheduler: ShuffleMapStage 1193 (mapToPair at BaseSparkCommitActionExecutor.java:225) finished in 0.077 s | |
21/12/01 01:21:57 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:21:57 INFO DAGScheduler: running: Set(ResultStage 1183) | |
21/12/01 01:21:57 INFO DAGScheduler: waiting: Set(ResultStage 1194) | |
21/12/01 01:21:57 INFO DAGScheduler: failed: Set() | |
21/12/01 01:21:57 INFO DAGScheduler: Submitting ResultStage 1194 (MapPartitionsRDD[2725] at map at BaseSparkCommitActionExecutor.java:274), which has no missing parents | |
21/12/01 01:21:57 INFO MemoryStore: Block broadcast_1106 stored as values in memory (estimated size 424.9 KiB, free 276.4 MiB) | |
21/12/01 01:21:57 INFO MemoryStore: Block broadcast_1106_piece0 stored as bytes in memory (estimated size 150.2 KiB, free 276.2 MiB) | |
21/12/01 01:21:57 INFO BlockManagerInfo: Added broadcast_1106_piece0 in memory on 192.168.1.48:56496 (size: 150.2 KiB, free: 365.8 MiB) | |
21/12/01 01:21:57 INFO SparkContext: Created broadcast 1106 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:21:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1194 (MapPartitionsRDD[2725] at map at BaseSparkCommitActionExecutor.java:274) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:21:57 INFO TaskSchedulerImpl: Adding task set 1194.0 with 1 tasks resource profile 0 | |
21/12/01 01:21:57 INFO TaskSetManager: Starting task 0.0 in stage 1194.0 (TID 2236) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:21:57 INFO Executor: Running task 0.0 in stage 1194.0 (TID 2236) | |
21/12/01 01:21:57 INFO ShuffleBlockFetcherIterator: Getting 1 (539.0 B) non-empty blocks including 1 (539.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:21:57 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:21:57 INFO AbstractSparkDeltaCommitActionExecutor: Merging updates for commit 20211201011944112 for file files-0000 | |
21/12/01 01:21:57 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:21:57 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:21:57 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:21:57 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:21:57 INFO BlockManagerInfo: Removed broadcast_1105_piece0 on 192.168.1.48:56496 in memory (size: 113.3 KiB, free: 365.9 MiB) | |
21/12/01 01:21:57 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:21:57 INFO AbstractTableFileSystemView: Building file system view for partition (files) | |
21/12/01 01:21:57 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=13, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0 | |
21/12/01 01:21:58 INFO IteratorBasedQueueProducer: finished buffering records | |
21/12/01 01:21:58 INFO IteratorBasedQueueProducer: finished buffering records
21/12/01 01:21:58 INFO IteratorBasedQueueProducer: finished buffering records
21/12/01 01:21:59 INFO DirectWriteMarkers: Creating Marker Path=s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201011944112/files/files-0000_0-1194-2236_20211201004828250001.hfile.marker.APPEND
21/12/01 01:21:59 INFO HoodieCreateHandle: Closing the file c0bf7539-9317-4e0f-b82f-875fc3a17625-0 as we are done with all the records 200342
21/12/01 01:21:59 INFO HoodieCreateHandle: Closing the file 96ab0122-d7c8-4038-b5d1-9592dfd9e29f-0 as we are done with all the records 199765
21/12/01 01:21:59 INFO HoodieCreateHandle: Closing the file d2e90ef3-db5d-4488-a112-512bd5889d86-0 as we are done with all the records 199893
21/12/01 01:21:59 INFO DirectWriteMarkers: [direct] Created marker file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201011944112/files/files-0000_0-1194-2236_20211201004828250001.hfile.marker.APPEND in 1795 ms
21/12/01 01:21:59 INFO HoodieLogFormat$WriterBuilder: Building HoodieLogFormat Writer
21/12/01 01:21:59 INFO HoodieLogFormat$WriterBuilder: HoodieLogFile on path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.11_0-1174-2216
21/12/01 01:22:00 INFO HoodieLogFormatWriter: Append not supported.. Rolling over to HoodieLogFile{pathStr='s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.12_0-1194-2236', fileLen=0}
21/12/01 01:22:00 INFO CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=406512, freeSize=394696944, maxSize=395103456, heapSize=406512, minSize=375348288, minFactor=0.95, multiSize=187674144, multiFactor=0.5, singleSize=93837072, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
21/12/01 01:22:00 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:22:00 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:22:00 INFO HoodieAppendHandle: AppendHandle for partitionPath files filePath files/.files-0000_20211201004828250001.log.12_0-1194-2236, took 3059 ms.
21/12/01 01:22:01 INFO MemoryStore: Block rdd_2724_0 stored as values in memory (estimated size 1010.0 B, free 364.6 MiB)
21/12/01 01:22:01 INFO BlockManagerInfo: Added rdd_2724_0 in memory on 192.168.1.48:56496 (size: 1010.0 B, free: 365.9 MiB)
21/12/01 01:22:01 INFO Executor: Finished task 0.0 in stage 1194.0 (TID 2236). 2202 bytes result sent to driver
21/12/01 01:22:01 INFO TaskSetManager: Finished task 0.0 in stage 1194.0 (TID 2236) in 4086 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:01 INFO TaskSchedulerImpl: Removed TaskSet 1194.0, whose tasks have all completed, from pool
21/12/01 01:22:01 INFO DAGScheduler: ResultStage 1194 (collect at BaseSparkCommitActionExecutor.java:274) finished in 4.170 s
21/12/01 01:22:01 INFO DAGScheduler: Job 808 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:01 INFO TaskSchedulerImpl: Killing all running tasks in stage 1194: Stage finished
21/12/01 01:22:01 INFO DAGScheduler: Job 808 finished: collect at BaseSparkCommitActionExecutor.java:274, took 4.248776 s
21/12/01 01:22:01 INFO BaseSparkCommitActionExecutor: Committing 20211201011944112, action Type deltacommit
21/12/01 01:22:02 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:134
21/12/01 01:22:02 INFO DAGScheduler: Got job 809 (collect at HoodieSparkEngineContext.java:134) with 1 output partitions
21/12/01 01:22:02 INFO DAGScheduler: Final stage: ResultStage 1195 (collect at HoodieSparkEngineContext.java:134)
21/12/01 01:22:02 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:02 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:02 INFO DAGScheduler: Submitting ResultStage 1195 (MapPartitionsRDD[2727] at flatMap at HoodieSparkEngineContext.java:134), which has no missing parents
21/12/01 01:22:02 INFO MemoryStore: Block broadcast_1107 stored as values in memory (estimated size 99.4 KiB, free 364.5 MiB)
21/12/01 01:22:02 INFO MemoryStore: Block broadcast_1107_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 364.5 MiB)
21/12/01 01:22:02 INFO BlockManagerInfo: Added broadcast_1107_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.9 MiB)
21/12/01 01:22:02 INFO SparkContext: Created broadcast 1107 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:02 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1195 (MapPartitionsRDD[2727] at flatMap at HoodieSparkEngineContext.java:134) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:02 INFO TaskSchedulerImpl: Adding task set 1195.0 with 1 tasks resource profile 0
21/12/01 01:22:02 INFO TaskSetManager: Starting task 0.0 in stage 1195.0 (TID 2237) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:22:02 INFO Executor: Running task 0.0 in stage 1195.0 (TID 2237)
21/12/01 01:22:02 INFO Executor: Finished task 0.0 in stage 1195.0 (TID 2237). 796 bytes result sent to driver
21/12/01 01:22:02 INFO TaskSetManager: Finished task 0.0 in stage 1195.0 (TID 2237) in 123 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:02 INFO TaskSchedulerImpl: Removed TaskSet 1195.0, whose tasks have all completed, from pool
21/12/01 01:22:02 INFO DAGScheduler: ResultStage 1195 (collect at HoodieSparkEngineContext.java:134) finished in 0.139 s
21/12/01 01:22:02 INFO DAGScheduler: Job 809 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:02 INFO TaskSchedulerImpl: Killing all running tasks in stage 1195: Stage finished
21/12/01 01:22:02 INFO DAGScheduler: Job 809 finished: collect at HoodieSparkEngineContext.java:134, took 0.139406 s
21/12/01 01:22:02 INFO CommitUtils: Creating metadata for UPSERT_PREPPED numWriteStats:1numReplaceFileIds:0
21/12/01 01:22:02 INFO HoodieActiveTimeline: Marking instant complete [==>20211201011944112__deltacommit__INFLIGHT]
21/12/01 01:22:02 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011944112.deltacommit.inflight
21/12/01 01:22:02 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011944112.deltacommit
21/12/01 01:22:02 INFO HoodieActiveTimeline: Completed [==>20211201011944112__deltacommit__INFLIGHT]
21/12/01 01:22:02 INFO BaseSparkCommitActionExecutor: Committed 20211201011944112
21/12/01 01:22:03 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:22:03 INFO DAGScheduler: Got job 810 (collectAsMap at HoodieSparkEngineContext.java:148) with 1 output partitions
21/12/01 01:22:03 INFO DAGScheduler: Final stage: ResultStage 1196 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:22:03 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:03 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:03 INFO DAGScheduler: Submitting ResultStage 1196 (MapPartitionsRDD[2729] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:22:03 INFO MemoryStore: Block broadcast_1108 stored as values in memory (estimated size 99.6 KiB, free 364.4 MiB)
21/12/01 01:22:03 INFO MemoryStore: Block broadcast_1108_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 364.4 MiB)
21/12/01 01:22:03 INFO BlockManagerInfo: Added broadcast_1108_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.9 MiB)
21/12/01 01:22:03 INFO SparkContext: Created broadcast 1108 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:03 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1196 (MapPartitionsRDD[2729] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:03 INFO TaskSchedulerImpl: Adding task set 1196.0 with 1 tasks resource profile 0
21/12/01 01:22:03 INFO TaskSetManager: Starting task 0.0 in stage 1196.0 (TID 2238) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:22:03 INFO Executor: Running task 0.0 in stage 1196.0 (TID 2238)
21/12/01 01:22:05 INFO Executor: Finished task 0.0 in stage 1196.0 (TID 2238). 898 bytes result sent to driver
21/12/01 01:22:05 INFO TaskSetManager: Finished task 0.0 in stage 1196.0 (TID 2238) in 1491 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:05 INFO TaskSchedulerImpl: Removed TaskSet 1196.0, whose tasks have all completed, from pool
21/12/01 01:22:05 INFO DAGScheduler: ResultStage 1196 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 1.509 s
21/12/01 01:22:05 INFO DAGScheduler: Job 810 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:05 INFO TaskSchedulerImpl: Killing all running tasks in stage 1196: Stage finished
21/12/01 01:22:05 INFO DAGScheduler: Job 810 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 1.509447 s
21/12/01 01:22:06 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201011944112
21/12/01 01:22:07 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]}
21/12/01 01:22:07 INFO HoodieTimelineArchiveLog: No Instants to archive
21/12/01 01:22:07 INFO HoodieHeartbeatClient: Stopping heartbeat for instant 20211201011944112
21/12/01 01:22:07 INFO HoodieHeartbeatClient: Stopped heartbeat for instant 20211201011944112
21/12/01 01:22:07 INFO HeartbeatUtils: Deleted the heartbeat for instant 20211201011944112
21/12/01 01:22:07 INFO HoodieHeartbeatClient: Deleted heartbeat file for instant 20211201011944112
21/12/01 01:22:07 INFO SparkContext: Starting job: collect at SparkHoodieBackedTableMetadataWriter.java:146
21/12/01 01:22:07 INFO DAGScheduler: Got job 811 (collect at SparkHoodieBackedTableMetadataWriter.java:146) with 1 output partitions
21/12/01 01:22:07 INFO DAGScheduler: Final stage: ResultStage 1198 (collect at SparkHoodieBackedTableMetadataWriter.java:146)
21/12/01 01:22:07 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1197)
21/12/01 01:22:07 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:07 INFO DAGScheduler: Submitting ResultStage 1198 (MapPartitionsRDD[2724] at flatMap at BaseSparkCommitActionExecutor.java:176), which has no missing parents
21/12/01 01:22:07 INFO MemoryStore: Block broadcast_1109 stored as values in memory (estimated size 424.5 KiB, free 364.0 MiB)
21/12/01 01:22:07 INFO MemoryStore: Block broadcast_1109_piece0 stored as bytes in memory (estimated size 150.1 KiB, free 363.8 MiB)
21/12/01 01:22:07 INFO BlockManagerInfo: Added broadcast_1109_piece0 in memory on 192.168.1.48:56496 (size: 150.1 KiB, free: 365.7 MiB)
21/12/01 01:22:07 INFO SparkContext: Created broadcast 1109 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:07 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1198 (MapPartitionsRDD[2724] at flatMap at BaseSparkCommitActionExecutor.java:176) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:07 INFO TaskSchedulerImpl: Adding task set 1198.0 with 1 tasks resource profile 0
21/12/01 01:22:07 INFO TaskSetManager: Starting task 0.0 in stage 1198.0 (TID 2239) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:22:07 INFO Executor: Running task 0.0 in stage 1198.0 (TID 2239)
21/12/01 01:22:07 INFO BlockManager: Found block rdd_2724_0 locally
21/12/01 01:22:07 INFO Executor: Finished task 0.0 in stage 1198.0 (TID 2239). 1852 bytes result sent to driver
21/12/01 01:22:07 INFO TaskSetManager: Finished task 0.0 in stage 1198.0 (TID 2239) in 14 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:07 INFO TaskSchedulerImpl: Removed TaskSet 1198.0, whose tasks have all completed, from pool
21/12/01 01:22:07 INFO DAGScheduler: ResultStage 1198 (collect at SparkHoodieBackedTableMetadataWriter.java:146) finished in 0.064 s
21/12/01 01:22:07 INFO DAGScheduler: Job 811 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:07 INFO TaskSchedulerImpl: Killing all running tasks in stage 1198: Stage finished
21/12/01 01:22:07 INFO DAGScheduler: Job 811 finished: collect at SparkHoodieBackedTableMetadataWriter.java:146, took 0.065329 s
21/12/01 01:22:08 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]}
21/12/01 01:22:08 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201011944112__commit__INFLIGHT]}
21/12/01 01:22:08 INFO HoodieBackedTableMetadataWriter: Cannot compact metadata table as there are 2 inflight instants before latest deltacommit 20211201011944112: [[==>20211201011347895__replacecommit__INFLIGHT], [==>20211201011906814__replacecommit__REQUESTED]]
21/12/01 01:22:08 INFO AbstractHoodieWriteClient: Scheduling cleaning at instant time :20211201011944112002
21/12/01 01:22:08 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:08 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:09 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:09 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:09 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]}
21/12/01 01:22:09 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:22:09 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:22:09 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata
21/12/01 01:22:09 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:22:09 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:09 INFO CleanPlanner: Incremental Cleaning mode is enabled. Looking up partition-paths that have since changed since last cleaned at 20211201005144720. New Instant to retain : Option{val=[20211201011429621__deltacommit__COMPLETED]}
21/12/01 01:22:11 INFO CleanPlanner: Total Partitions to clean : 1, with policy KEEP_LATEST_COMMITS
21/12/01 01:22:11 INFO CleanPlanner: Using cleanerParallelism: 1
21/12/01 01:22:12 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:100
21/12/01 01:22:12 INFO DAGScheduler: Got job 812 (collect at HoodieSparkEngineContext.java:100) with 1 output partitions
21/12/01 01:22:12 INFO DAGScheduler: Final stage: ResultStage 1199 (collect at HoodieSparkEngineContext.java:100)
21/12/01 01:22:12 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:12 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:12 INFO DAGScheduler: Submitting ResultStage 1199 (MapPartitionsRDD[2731] at map at HoodieSparkEngineContext.java:100), which has no missing parents
21/12/01 01:22:12 INFO MemoryStore: Block broadcast_1110 stored as values in memory (estimated size 320.0 KiB, free 363.5 MiB)
21/12/01 01:22:12 INFO MemoryStore: Block broadcast_1110_piece0 stored as bytes in memory (estimated size 111.5 KiB, free 363.4 MiB)
21/12/01 01:22:12 INFO BlockManagerInfo: Added broadcast_1110_piece0 in memory on 192.168.1.48:56496 (size: 111.5 KiB, free: 365.6 MiB)
21/12/01 01:22:12 INFO SparkContext: Created broadcast 1110 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:12 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1199 (MapPartitionsRDD[2731] at map at HoodieSparkEngineContext.java:100) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:12 INFO TaskSchedulerImpl: Adding task set 1199.0 with 1 tasks resource profile 0
21/12/01 01:22:12 INFO TaskSetManager: Starting task 0.0 in stage 1199.0 (TID 2240) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4338 bytes) taskResourceAssignments Map()
21/12/01 01:22:12 INFO Executor: Running task 0.0 in stage 1199.0 (TID 2240)
21/12/01 01:22:12 INFO CleanPlanner: Cleaning files, retaining latest 3 commits.
21/12/01 01:22:12 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:22:12 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=14, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0
21/12/01 01:22:12 INFO CleanPlanner: 0 patterns used to delete in partition path:files
21/12/01 01:22:12 INFO Executor: Finished task 0.0 in stage 1199.0 (TID 2240). 881 bytes result sent to driver
21/12/01 01:22:12 INFO TaskSetManager: Finished task 0.0 in stage 1199.0 (TID 2240) in 151 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:12 INFO TaskSchedulerImpl: Removed TaskSet 1199.0, whose tasks have all completed, from pool
21/12/01 01:22:12 INFO DAGScheduler: ResultStage 1199 (collect at HoodieSparkEngineContext.java:100) finished in 0.189 s
21/12/01 01:22:12 INFO DAGScheduler: Job 812 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:12 INFO TaskSchedulerImpl: Killing all running tasks in stage 1199: Stage finished
21/12/01 01:22:12 INFO DAGScheduler: Job 812 finished: collect at HoodieSparkEngineContext.java:100, took 0.189480 s
21/12/01 01:22:12 INFO AbstractHoodieWriteClient: Cleaner started
21/12/01 01:22:12 INFO AbstractHoodieWriteClient: Cleaned failed attempts if any
21/12/01 01:22:12 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:12 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:12 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:12 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:12 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]}
21/12/01 01:22:12 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:22:12 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:22:12 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:13 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:13 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:13 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:13 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]}
21/12/01 01:22:13 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:22:13 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:22:13 INFO HoodieActiveTimeline: Marking instant complete [==>20211201011944112__commit__INFLIGHT]
21/12/01 01:22:13 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201011944112.inflight
21/12/01 01:22:14 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201011944112.commit
21/12/01 01:22:14 INFO HoodieActiveTimeline: Completed [==>20211201011944112__commit__INFLIGHT]
21/12/01 01:22:14 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/dir/delete?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011944112)
21/12/01 01:22:14 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:22:14 INFO DAGScheduler: Got job 813 (collectAsMap at HoodieSparkEngineContext.java:148) with 3 output partitions
21/12/01 01:22:14 INFO DAGScheduler: Final stage: ResultStage 1200 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:22:14 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:14 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:14 INFO DAGScheduler: Submitting ResultStage 1200 (MapPartitionsRDD[2733] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:22:14 INFO MemoryStore: Block broadcast_1111 stored as values in memory (estimated size 99.6 KiB, free 363.3 MiB)
21/12/01 01:22:14 INFO MemoryStore: Block broadcast_1111_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 363.3 MiB)
21/12/01 01:22:14 INFO BlockManagerInfo: Added broadcast_1111_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.6 MiB)
21/12/01 01:22:14 INFO SparkContext: Created broadcast 1111 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:14 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 1200 (MapPartitionsRDD[2733] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0, 1, 2))
21/12/01 01:22:14 INFO TaskSchedulerImpl: Adding task set 1200.0 with 3 tasks resource profile 0
21/12/01 01:22:14 INFO TaskSetManager: Starting task 0.0 in stage 1200.0 (TID 2241) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4418 bytes) taskResourceAssignments Map()
21/12/01 01:22:14 INFO TaskSetManager: Starting task 1.0 in stage 1200.0 (TID 2242) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4414 bytes) taskResourceAssignments Map()
21/12/01 01:22:14 INFO TaskSetManager: Starting task 2.0 in stage 1200.0 (TID 2243) (192.168.1.48, executor driver, partition 2, PROCESS_LOCAL, 4415 bytes) taskResourceAssignments Map()
21/12/01 01:22:14 INFO Executor: Running task 0.0 in stage 1200.0 (TID 2241)
21/12/01 01:22:14 INFO Executor: Running task 1.0 in stage 1200.0 (TID 2242)
21/12/01 01:22:14 INFO Executor: Running task 2.0 in stage 1200.0 (TID 2243)
21/12/01 01:22:15 INFO Executor: Finished task 1.0 in stage 1200.0 (TID 2242). 884 bytes result sent to driver
21/12/01 01:22:15 INFO TaskSetManager: Finished task 1.0 in stage 1200.0 (TID 2242) in 418 ms on 192.168.1.48 (executor driver) (1/3)
21/12/01 01:22:15 INFO Executor: Finished task 0.0 in stage 1200.0 (TID 2241). 888 bytes result sent to driver
21/12/01 01:22:15 INFO TaskSetManager: Finished task 0.0 in stage 1200.0 (TID 2241) in 1055 ms on 192.168.1.48 (executor driver) (2/3)
21/12/01 01:22:15 INFO Executor: Finished task 2.0 in stage 1200.0 (TID 2243). 885 bytes result sent to driver
21/12/01 01:22:15 INFO TaskSetManager: Finished task 2.0 in stage 1200.0 (TID 2243) in 1078 ms on 192.168.1.48 (executor driver) (3/3)
21/12/01 01:22:15 INFO TaskSchedulerImpl: Removed TaskSet 1200.0, whose tasks have all completed, from pool
21/12/01 01:22:15 INFO DAGScheduler: ResultStage 1200 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 1.096 s
21/12/01 01:22:15 INFO DAGScheduler: Job 813 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:15 INFO TaskSchedulerImpl: Killing all running tasks in stage 1200: Stage finished
21/12/01 01:22:15 INFO DAGScheduler: Job 813 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 1.096593 s
21/12/01 01:22:16 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/.temp/20211201011944112
21/12/01 01:22:16 INFO AbstractHoodieWriteClient: Auto cleaning is enabled. Running cleaner now
21/12/01 01:22:16 INFO AbstractHoodieWriteClient: Scheduling cleaning at instant time :20211201012216421
21/12/01 01:22:16 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:16 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:22:16 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:16 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:17 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__commit__COMPLETED]}
21/12/01 01:22:17 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:17 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:22:17 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:17 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:17 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:17 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:17 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST
21/12/01 01:22:17 INFO FileSystemViewManager: Creating remote first table view
21/12/01 01:22:17 INFO FileSystemViewManager: Creating remote view for basePath s3a://hudi-testing/test_hoodie_table_2. Server=192.168.1.48:56507, Timeout=300
21/12/01 01:22:17 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:22:20 INFO AbstractTableFileSystemView: Took 2734 ms to read 9 instants, 66 replaced file groups
21/12/01 01:22:21 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:22:21 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:23 INFO AbstractTableFileSystemView: Took 2049 ms to read 9 instants, 66 replaced file groups
21/12/01 01:22:24 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:22:24 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/compactions/pending/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:24 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:22:24 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:22:24 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:22:25 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__commit__COMPLETED]}
21/12/01 01:22:25 INFO RocksDBDAO: DELETING RocksDB persisted at /tmp/hoodie_timeline_rocksdb/s3a:__hudi-testing_test_hoodie_table_2/c6ca1df8-1188-43fe-b30c-64d686cbf1f5
21/12/01 01:22:25 INFO RocksDBDAO: No column family found. Loading default
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : RocksDB version: 6.20.3
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Git sha 8608d75d85f8e1b3b64b73a4fb6d19baec61ba5c
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Compile date 2021-05-05 13:35:30
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : DB SUMMARY
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : DB Session ID: 2AUDFYNS4Y3XJXI8SPWA
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : SST files in /tmp/hoodie_timeline_rocksdb/s3a:__hudi-testing_test_hoodie_table_2/c6ca1df8-1188-43fe-b30c-64d686cbf1f5 dir, Total Num: 0, files:
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Write Ahead Log file in /tmp/hoodie_timeline_rocksdb/s3a:__hudi-testing_test_hoodie_table_2/c6ca1df8-1188-43fe-b30c-64d686cbf1f5:
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.error_if_exists: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.create_if_missing: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_checks: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.track_and_verify_wals_in_manifest: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.env: 0x125238928
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.fs: Posix File System
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.info_log: 0x6000015a9208
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_file_opening_threads: 16
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.statistics: 0x600000e40360
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.use_fsync: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_log_file_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_manifest_file_size: 1073741824
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.log_file_time_to_roll: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.keep_log_file_num: 1000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.recycle_log_file_num: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.allow_fallocate: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.allow_mmap_reads: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.allow_mmap_writes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.use_direct_reads: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.use_direct_io_for_flush_and_compaction: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.create_missing_column_families: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.db_log_dir:
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.wal_dir: /tmp/hoodie_timeline_rocksdb/s3a:__hudi-testing_test_hoodie_table_2/c6ca1df8-1188-43fe-b30c-64d686cbf1f5
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_cache_numshardbits: 6
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.WAL_ttl_seconds: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.WAL_size_limit_MB: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_batch_group_size_bytes: 1048576
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.manifest_preallocation_size: 4194304
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.is_fd_close_on_exec: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.advise_random_on_open: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.db_write_buffer_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_manager: 0x60000315e790
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.access_hint_on_compaction_start: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.new_table_reader_for_compaction_inputs: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.random_access_max_buffer_size: 1048576
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.use_adaptive_mutex: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limiter: 0x0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_file_manager.rate_bytes_per_sec: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.wal_recovery_mode: 2
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_thread_tracking: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_pipelined_write: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.unordered_write: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.allow_concurrent_memtable_write: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_write_thread_adaptive_yield: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_thread_max_yield_usec: 100
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_thread_slow_yield_usec: 3
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.row_cache: None
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.wal_filter: None
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.avoid_flush_during_recovery: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.allow_ingest_behind: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.preserve_deletes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.two_write_queues: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.manual_wal_flush: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.atomic_flush: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.avoid_unnecessary_blocking_io: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.persist_stats_to_disk: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_dbid_to_manifest: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.log_readahead_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.file_checksum_gen_factory: Unknown
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.best_efforts_recovery: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bgerror_resume_count: 2147483647
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bgerror_resume_retry_interval: 1000000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.allow_data_in_errors: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.db_host_id: __hostname__
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_background_jobs: 2
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_background_compactions: -1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_subcompactions: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.avoid_flush_during_shutdown: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.writable_file_max_buffer_size: 1048576
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.delayed_write_rate : 16777216 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_total_wal_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.delete_obsolete_files_period_micros: 21600000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.stats_dump_period_sec: 300 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.stats_persist_period_sec: 600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.stats_history_buffer_size: 1048576 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_open_files: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bytes_per_sync: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.wal_bytes_per_sync: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.strict_bytes_per_sync: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_readahead_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_background_flushes: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Compression algorithms supported: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kZSTD supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kZlibCompression supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kXpressCompression supported: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kSnappyCompression supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kZSTDNotFinalCompression supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kLZ4HCCompression supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kLZ4Compression supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : kBZip2Compression supported: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Fast CRC32 supported: Not supported on x86 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl_open.cc:285] Creating manifest 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/version_set.cc:4627] Recovering from manifest file: /tmp/hoodie_timeline_rocksdb/s3a:__hudi-testing_test_hoodie_table_2/c6ca1df8-1188-43fe-b30c-64d686cbf1f5/MANIFEST-000001 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [default]: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003fff950) | |
cache_index_and_filter_blocks: 0 | |
cache_index_and_filter_blocks_with_high_priority: 1 | |
pin_l0_filter_and_index_blocks_in_cache: 0 | |
pin_top_level_index_and_filter: 1 | |
index_type: 0 | |
data_block_index_type: 0 | |
index_shortening: 1 | |
data_block_hash_table_util_ratio: 0.750000 | |
hash_index_allow_collision: 1 | |
checksum: 1 | |
no_block_cache: 0 | |
block_cache: 0x600000c85558 | |
block_cache_name: LRUCache | |
block_cache_options: | |
capacity : 8388608 | |
num_shard_bits : 4 | |
strict_capacity_limit : 0 | |
memory_allocator : None | |
high_pri_pool_ratio: 0.000 | |
block_cache_compressed: 0x0 | |
persistent_cache: 0x0 | |
block_size: 4096 | |
block_size_deviation: 10 | |
block_restart_interval: 16 | |
index_block_restart_interval: 1 | |
metadata_block_size: 4096 | |
partition_filters: 0 | |
use_delta_encoding: 1 | |
filter_policy: nullptr | |
whole_key_filtering: 1 | |
verify_compression: 0 | |
read_amp_bytes_per_bit: 0 | |
format_version: 5 | |
enable_index_compression: 1 | |
block_align: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/version_set.cc:4675] Recovered from manifest file:/tmp/hoodie_timeline_rocksdb/s3a:__hudi-testing_test_hoodie_table_2/c6ca1df8-1188-43fe-b30c-64d686cbf1f5/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/version_set.cc:4684] Column family [default] (ID 0), log number is 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/version_set.cc:4119] Creating manifest 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl_open.cc:1757] SstFileManager instance 0x7ff16b326630 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : DB pointer 0x7ff17c5c1600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [hudi_view_s3a:__hudi-testing_test_hoodie_table_2]: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003fc8f90) | |
cache_index_and_filter_blocks: 0 | |
cache_index_and_filter_blocks_with_high_priority: 1 | |
pin_l0_filter_and_index_blocks_in_cache: 0 | |
pin_top_level_index_and_filter: 1 | |
index_type: 0 | |
data_block_index_type: 0 | |
index_shortening: 1 | |
data_block_hash_table_util_ratio: 0.750000 | |
hash_index_allow_collision: 1 | |
checksum: 1 | |
no_block_cache: 0 | |
block_cache: 0x600000b68658 | |
block_cache_name: LRUCache | |
block_cache_options: | |
capacity : 8388608 | |
num_shard_bits : 4 | |
strict_capacity_limit : 0 | |
memory_allocator : None | |
high_pri_pool_ratio: 0.000 | |
block_cache_compressed: 0x0 | |
persistent_cache: 0x0 | |
block_size: 4096 | |
block_size_deviation: 10 | |
block_restart_interval: 16 | |
index_block_restart_interval: 1 | |
metadata_block_size: 4096 | |
partition_filters: 0 | |
use_delta_encoding: 1 | |
filter_policy: nullptr | |
whole_key_filtering: 1 | |
verify_compression: 0 | |
read_amp_bytes_per_bit: 0 | |
format_version: 5 | |
enable_index_compression: 1 | |
block_align: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:2662] Created column family [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] (ID 1) | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2]: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003f33140) | |
cache_index_and_filter_blocks: 0 | |
cache_index_and_filter_blocks_with_high_priority: 1 | |
pin_l0_filter_and_index_blocks_in_cache: 0 | |
pin_top_level_index_and_filter: 1 | |
index_type: 0 | |
data_block_index_type: 0 | |
index_shortening: 1 | |
data_block_hash_table_util_ratio: 0.750000 | |
hash_index_allow_collision: 1 | |
checksum: 1 | |
no_block_cache: 0 | |
block_cache: 0x600000c95238 | |
block_cache_name: LRUCache | |
block_cache_options: | |
capacity : 8388608 | |
num_shard_bits : 4 | |
strict_capacity_limit : 0 | |
memory_allocator : None | |
high_pri_pool_ratio: 0.000 | |
block_cache_compressed: 0x0 | |
persistent_cache: 0x0 | |
block_size: 4096 | |
block_size_deviation: 10 | |
block_restart_interval: 16 | |
index_block_restart_interval: 1 | |
metadata_block_size: 4096 | |
partition_filters: 0 | |
use_delta_encoding: 1 | |
filter_policy: nullptr | |
whole_key_filtering: 1 | |
verify_compression: 0 | |
read_amp_bytes_per_bit: 0 | |
format_version: 5 | |
enable_index_compression: 1 | |
block_align: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:2662] Created column family [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] (ID 2) | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2]: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003f330d0) | |
cache_index_and_filter_blocks: 0 | |
cache_index_and_filter_blocks_with_high_priority: 1 | |
pin_l0_filter_and_index_blocks_in_cache: 0 | |
pin_top_level_index_and_filter: 1 | |
index_type: 0 | |
data_block_index_type: 0 | |
index_shortening: 1 | |
data_block_hash_table_util_ratio: 0.750000 | |
hash_index_allow_collision: 1 | |
checksum: 1 | |
no_block_cache: 0 | |
block_cache: 0x600000c954b8 | |
block_cache_name: LRUCache | |
block_cache_options: | |
capacity : 8388608 | |
num_shard_bits : 4 | |
strict_capacity_limit : 0 | |
memory_allocator : None | |
high_pri_pool_ratio: 0.000 | |
block_cache_compressed: 0x0 | |
persistent_cache: 0x0 | |
block_size: 4096 | |
block_size_deviation: 10 | |
block_restart_interval: 16 | |
index_block_restart_interval: 1 | |
metadata_block_size: 4096 | |
partition_filters: 0 | |
use_delta_encoding: 1 | |
filter_policy: nullptr | |
whole_key_filtering: 1 | |
verify_compression: 0 | |
read_amp_bytes_per_bit: 0 | |
format_version: 5 | |
enable_index_compression: 1 | |
block_align: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:2662] Created column family [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] (ID 3) | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2]: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003fc9e70) | |
cache_index_and_filter_blocks: 0 | |
cache_index_and_filter_blocks_with_high_priority: 1 | |
pin_l0_filter_and_index_blocks_in_cache: 0 | |
pin_top_level_index_and_filter: 1 | |
index_type: 0 | |
data_block_index_type: 0 | |
index_shortening: 1 | |
data_block_hash_table_util_ratio: 0.750000 | |
hash_index_allow_collision: 1 | |
checksum: 1 | |
no_block_cache: 0 | |
block_cache: 0x600000b68c98 | |
block_cache_name: LRUCache | |
block_cache_options: | |
capacity : 8388608 | |
num_shard_bits : 4 | |
strict_capacity_limit : 0 | |
memory_allocator : None | |
high_pri_pool_ratio: 0.000 | |
block_cache_compressed: 0x0 | |
persistent_cache: 0x0 | |
block_size: 4096 | |
block_size_deviation: 10 | |
block_restart_interval: 16 | |
index_block_restart_interval: 1 | |
metadata_block_size: 4096 | |
partition_filters: 0 | |
use_delta_encoding: 1 | |
filter_policy: nullptr | |
whole_key_filtering: 1 | |
verify_compression: 0 | |
read_amp_bytes_per_bit: 0 | |
format_version: 5 | |
enable_index_compression: 1 | |
block_align: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:2662] Created column family [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] (ID 4) | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2]: | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003ffa340) | |
cache_index_and_filter_blocks: 0 | |
cache_index_and_filter_blocks_with_high_priority: 1 | |
pin_l0_filter_and_index_blocks_in_cache: 0 | |
pin_top_level_index_and_filter: 1 | |
index_type: 0 | |
data_block_index_type: 0 | |
index_shortening: 1 | |
data_block_hash_table_util_ratio: 0.750000 | |
hash_index_allow_collision: 1 | |
checksum: 1 | |
no_block_cache: 0 | |
block_cache: 0x600000c82ef8 | |
block_cache_name: LRUCache | |
block_cache_options: | |
capacity : 8388608 | |
num_shard_bits : 4 | |
strict_capacity_limit : 0 | |
memory_allocator : None | |
high_pri_pool_ratio: 0.000 | |
block_cache_compressed: 0x0 | |
persistent_cache: 0x0 | |
block_size: 4096 | |
block_size_deviation: 10 | |
block_restart_interval: 16 | |
index_block_restart_interval: 1 | |
metadata_block_size: 4096 | |
partition_filters: 0 | |
use_delta_encoding: 1 | |
filter_policy: nullptr | |
whole_key_filtering: 1 | |
verify_compression: 0 | |
read_amp_bytes_per_bit: 0 | |
format_version: 5 | |
enable_index_compression: 1 | |
block_align: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1 | |
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors:
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:2662] Created column family [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] (ID 5)
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/column_family.cc:598] --------------- Options for column family [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2]:
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.comparator: leveldb.BytewiseComparator
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.merge_operator: None
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter: None
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_filter_factory: None
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.sst_partitioner_factory: None
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_factory: SkipListFactory
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_factory: BlockBasedTable
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x600003fc8f60)
cache_index_and_filter_blocks: 0
cache_index_and_filter_blocks_with_high_priority: 1
pin_l0_filter_and_index_blocks_in_cache: 0
pin_top_level_index_and_filter: 1
index_type: 0
data_block_index_type: 0
index_shortening: 1
data_block_hash_table_util_ratio: 0.750000
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x600000b6a638
block_cache_name: LRUCache
block_cache_options:
capacity : 8388608
num_shard_bits : 4
strict_capacity_limit : 0
memory_allocator : None
high_pri_pool_ratio: 0.000
block_cache_compressed: 0x0
persistent_cache: 0x0
block_size: 4096
block_size_deviation: 10
block_restart_interval: 16
index_block_restart_interval: 1
metadata_block_size: 4096
partition_filters: 0
use_delta_encoding: 1
filter_policy: nullptr
whole_key_filtering: 1
verify_compression: 0
read_amp_bytes_per_bit: 0
format_version: 5
enable_index_compression: 1
block_align: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.write_buffer_size: 67108864
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number: 2
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression: Snappy
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression: Disabled
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.prefix_extractor: nullptr
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_insert_with_hint_prefix_extractor: nullptr
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.num_levels: 7
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_write_buffer_number_to_merge: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_number_to_maintain: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_write_buffer_size_to_maintain: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.window_bits: -14
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.level: 32767
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.strategy: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.zstd_max_train_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.parallel_threads: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.enabled: false
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.window_bits: -14
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.level: 32767
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.strategy: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.zstd_max_train_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.parallel_threads: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.enabled: false
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compression_opts.max_dict_buffer_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_file_num_compaction_trigger: 4
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_slowdown_writes_trigger: 20
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level0_stop_writes_trigger: 36
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_base: 67108864
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.target_file_size_multiplier: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_base: 268435456
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.level_compaction_dynamic_level_bytes: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier: 10.000000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[0]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[1]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[2]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[3]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[4]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[5]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_bytes_for_level_multiplier_addtl[6]: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_sequential_skip_in_iterations: 8
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_compaction_bytes: 1677721600
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.arena_block_size: 8388608
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.soft_pending_compaction_bytes_limit: 68719476736
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.hard_pending_compaction_bytes_limit: 274877906944
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.rate_limit_delay_max_milliseconds: 100
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.disable_auto_compactions: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_style: kCompactionStyleLevel
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_pri: kMinOverlappingRatio
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.size_ratio: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.min_merge_width: 2
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_merge_width: 4294967295
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.max_size_amplification_percent: 200
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.compression_size_percent: -1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.max_table_files_size: 1073741824
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.compaction_options_fifo.allow_compaction: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.table_properties_collectors:
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_support: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.inplace_update_num_locks: 10000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_prefix_bloom_size_ratio: 0.000000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_whole_key_filtering: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.memtable_huge_page_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.bloom_locality: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.max_successive_merges: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.optimize_filters_for_hits: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.paranoid_file_checks: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.force_consistency_checks: 1
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.report_bg_io_stats: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.ttl: 2592000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.periodic_compaction_seconds: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_files: false
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.min_blob_size: 0
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_file_size: 268435456
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_compression_type: NoCompression
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.enable_blob_garbage_collection: false
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : Options.blob_garbage_collection_age_cutoff: 0.250000
21/12/01 01:22:25 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:2662] Created column family [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] (ID 6)
21/12/01 01:22:26 INFO BlockManagerInfo: Removed broadcast_1109_piece0 on 192.168.1.48:56496 in memory (size: 150.1 KiB, free: 365.7 MiB)
21/12/01 01:22:26 INFO BlockManagerInfo: Removed broadcast_1108_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.8 MiB)
21/12/01 01:22:26 INFO BlockManagerInfo: Removed broadcast_1111_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.8 MiB)
21/12/01 01:22:26 INFO BlockManagerInfo: Removed broadcast_1110_piece0 on 192.168.1.48:56496 in memory (size: 111.5 KiB, free: 365.9 MiB)
21/12/01 01:22:26 INFO BlockManagerInfo: Removed broadcast_1106_piece0 on 192.168.1.48:56496 in memory (size: 150.2 KiB, free: 366.0 MiB)
21/12/01 01:22:26 INFO BlockManagerInfo: Removed broadcast_1107_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 366.1 MiB)
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Resetting replacedFileGroups to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=66
21/12/01 01:22:27 INFO RocksDBDAO: Prefix DELETE (query=part=) on hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Finished adding replaced file groups to partition (americas/brazil/sao_paulo) to ROCKSDB based view at /tmp/hoodie_timeline_rocksdb, Total file-groups=22
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Finished adding replaced file groups to partition (americas/united_states/san_francisco) to ROCKSDB based view at /tmp/hoodie_timeline_rocksdb, Total file-groups=22
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Finished adding replaced file groups to partition (asia/india/chennai) to ROCKSDB based view at /tmp/hoodie_timeline_rocksdb, Total file-groups=22
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Resetting replacedFileGroups to ROCKSDB based file-system view complete
21/12/01 01:22:27 INFO AbstractTableFileSystemView: Took 2038 ms to read 9 instants, 66 replaced file groups
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Initializing pending compaction operations. Count=0
21/12/01 01:22:27 INFO RocksDbBasedFileSystemView: Initializing external data file mapping. Count=0
21/12/01 01:22:28 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:22:28 INFO RocksDbBasedFileSystemView: Resetting file groups in pending clustering to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=9
21/12/01 01:22:28 INFO RocksDBDAO: Prefix DELETE (query=part=) on hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:28 INFO RocksDbBasedFileSystemView: Resetting replacedFileGroups to ROCKSDB based file-system view complete
21/12/01 01:22:28 INFO RocksDbBasedFileSystemView: Created ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb
21/12/01 01:22:28 INFO RocksDBDAO: Prefix Search for (query=) on hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=0, num entries=0
21/12/01 01:22:28 INFO CleanPlanner: Incremental Cleaning mode is enabled. Looking up partition-paths that have since changed since last cleaned at 20211201004028138. New Instant to retain : Option{val=[20211201004828250__commit__COMPLETED]}
21/12/01 01:22:29 INFO CleanPlanner: Total Partitions to clean : 3, with policy KEEP_LATEST_COMMITS
21/12/01 01:22:29 INFO CleanPlanner: Using cleanerParallelism: 3
21/12/01 01:22:30 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:100
21/12/01 01:22:30 INFO DAGScheduler: Got job 814 (collect at HoodieSparkEngineContext.java:100) with 3 output partitions
21/12/01 01:22:30 INFO DAGScheduler: Final stage: ResultStage 1201 (collect at HoodieSparkEngineContext.java:100)
21/12/01 01:22:30 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:30 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:30 INFO DAGScheduler: Submitting ResultStage 1201 (MapPartitionsRDD[2735] at map at HoodieSparkEngineContext.java:100), which has no missing parents
21/12/01 01:22:30 INFO MemoryStore: Block broadcast_1112 stored as values in memory (estimated size 541.5 KiB, free 364.7 MiB)
21/12/01 01:22:30 INFO MemoryStore: Block broadcast_1112_piece0 stored as bytes in memory (estimated size 191.8 KiB, free 364.5 MiB)
21/12/01 01:22:30 INFO BlockManagerInfo: Added broadcast_1112_piece0 in memory on 192.168.1.48:56496 (size: 191.8 KiB, free: 365.9 MiB)
21/12/01 01:22:30 INFO SparkContext: Created broadcast 1112 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:30 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 1201 (MapPartitionsRDD[2735] at map at HoodieSparkEngineContext.java:100) (first 15 tasks are for partitions Vector(0, 1, 2))
21/12/01 01:22:30 INFO TaskSchedulerImpl: Adding task set 1201.0 with 3 tasks resource profile 0
21/12/01 01:22:30 INFO TaskSetManager: Starting task 0.0 in stage 1201.0 (TID 2244) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4358 bytes) taskResourceAssignments Map()
21/12/01 01:22:30 INFO TaskSetManager: Starting task 1.0 in stage 1201.0 (TID 2245) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4369 bytes) taskResourceAssignments Map()
21/12/01 01:22:30 INFO TaskSetManager: Starting task 2.0 in stage 1201.0 (TID 2246) (192.168.1.48, executor driver, partition 2, PROCESS_LOCAL, 4351 bytes) taskResourceAssignments Map()
21/12/01 01:22:30 INFO Executor: Running task 0.0 in stage 1201.0 (TID 2244)
21/12/01 01:22:30 INFO Executor: Running task 2.0 in stage 1201.0 (TID 2246)
21/12/01 01:22:30 INFO Executor: Running task 1.0 in stage 1201.0 (TID 2245)
21/12/01 01:22:30 INFO CleanPlanner: Cleaning asia/india/chennai, retaining latest 10 commits.
21/12/01 01:22:30 INFO CleanPlanner: Cleaning americas/united_states/san_francisco, retaining latest 10 commits.
21/12/01 01:22:30 INFO CleanPlanner: Cleaning americas/brazil/sao_paulo, retaining latest 10 commits.
21/12/01 01:22:30 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/filegroups/replaced/before/?partition=asia%2Findia%2Fchennai&maxinstant=20211201004828250&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:30 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/filegroups/replaced/before/?partition=americas%2Funited_states%2Fsan_francisco&maxinstant=20211201004828250&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:30 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/filegroups/replaced/before/?partition=americas%2Fbrazil%2Fsao_paulo&maxinstant=20211201004828250&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:30 INFO AbstractTableFileSystemView: Building file system view for partition (asia/india/chennai)
21/12/01 01:22:30 INFO AbstractTableFileSystemView: Building file system view for partition (americas/united_states/san_francisco)
21/12/01 01:22:30 INFO AbstractTableFileSystemView: Building file system view for partition (americas/brazil/sao_paulo)
21/12/01 01:22:30 INFO RocksDbBasedFileSystemView: Resetting and adding new partition (americas/united_states/san_francisco) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=17
21/12/01 01:22:30 INFO RocksDBDAO: Prefix DELETE (query=type=slice,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:30 INFO RocksDBDAO: Prefix DELETE (query=type=df,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:30 INFO RocksDbBasedFileSystemView: Finished adding new partition (americas/united_states/san_francisco) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=17
21/12/01 01:22:30 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=18, NumFileGroups=17, FileGroupsCreationTime=0, StoreTimeTaken=1
21/12/01 01:22:30 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=34, num entries=17
21/12/01 01:22:30 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/filegroups/all/partition/?partition=americas%2Funited_states%2Fsan_francisco&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:30 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=americas/united_states/san_francisco,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=43, num entries=17
21/12/01 01:22:30 INFO CleanPlanner: 1 patterns used to delete in partition path:americas/united_states/san_francisco
21/12/01 01:22:30 INFO Executor: Finished task 1.0 in stage 1201.0 (TID 2245). 1115 bytes result sent to driver
21/12/01 01:22:30 INFO TaskSetManager: Finished task 1.0 in stage 1201.0 (TID 2245) in 200 ms on 192.168.1.48 (executor driver) (1/3)
21/12/01 01:22:30 INFO RocksDbBasedFileSystemView: Resetting and adding new partition (americas/brazil/sao_paulo) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=17
21/12/01 01:22:30 INFO RocksDBDAO: Prefix DELETE (query=type=slice,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:30 INFO RocksDBDAO: Prefix DELETE (query=type=df,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:30 INFO RocksDbBasedFileSystemView: Resetting and adding new partition (asia/india/chennai) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=17
21/12/01 01:22:30 INFO RocksDBDAO: Prefix DELETE (query=type=slice,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:30 INFO RocksDbBasedFileSystemView: Finished adding new partition (americas/brazil/sao_paulo) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=17
21/12/01 01:22:30 INFO RocksDBDAO: Prefix DELETE (query=type=df,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2
21/12/01 01:22:30 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=18, NumFileGroups=17, FileGroupsCreationTime=1, StoreTimeTaken=1
21/12/01 01:22:30 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=50, num entries=17
21/12/01 01:22:30 INFO RocksDbBasedFileSystemView: Finished adding new partition (asia/india/chennai) to ROCKSDB based file-system view at /tmp/hoodie_timeline_rocksdb, Total file-groups=17
21/12/01 01:22:30 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=18, NumFileGroups=17, FileGroupsCreationTime=1, StoreTimeTaken=0
21/12/01 01:22:30 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/filegroups/all/partition/?partition=americas%2Fbrazil%2Fsao_paulo&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:30 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=americas/brazil/sao_paulo,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=1. Serialization Time taken(micro)=66, num entries=17
21/12/01 01:22:30 INFO CleanPlanner: 1 patterns used to delete in partition path:americas/brazil/sao_paulo
21/12/01 01:22:30 INFO Executor: Finished task 0.0 in stage 1201.0 (TID 2244). 1093 bytes result sent to driver
21/12/01 01:22:30 INFO TaskSetManager: Finished task 0.0 in stage 1201.0 (TID 2244) in 521 ms on 192.168.1.48 (executor driver) (2/3)
21/12/01 01:22:30 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=3. Serialization Time taken(micro)=2873, num entries=17
21/12/01 01:22:30 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/filegroups/all/partition/?partition=asia%2Findia%2Fchennai&basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011944112&timelinehash=38e7e6cda07b7589a95b1f03fa2afeb8cb80ee04a59b25f0400b5d88b14aa236)
21/12/01 01:22:30 INFO RocksDBDAO: Prefix Search for (query=type=slice,part=asia/india/chennai,id=) on hudi_view_s3a:__hudi-testing_test_hoodie_table_2. Total Time Taken (msec)=0. Serialization Time taken(micro)=63, num entries=17
21/12/01 01:22:30 INFO CleanPlanner: 1 patterns used to delete in partition path:asia/india/chennai
21/12/01 01:22:30 INFO Executor: Finished task 2.0 in stage 1201.0 (TID 2246). 1079 bytes result sent to driver
21/12/01 01:22:30 INFO TaskSetManager: Finished task 2.0 in stage 1201.0 (TID 2246) in 524 ms on 192.168.1.48 (executor driver) (3/3)
21/12/01 01:22:30 INFO TaskSchedulerImpl: Removed TaskSet 1201.0, whose tasks have all completed, from pool
21/12/01 01:22:30 INFO DAGScheduler: ResultStage 1201 (collect at HoodieSparkEngineContext.java:100) finished in 0.589 s
21/12/01 01:22:30 INFO DAGScheduler: Job 814 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:30 INFO TaskSchedulerImpl: Killing all running tasks in stage 1201: Stage finished
21/12/01 01:22:30 INFO DAGScheduler: Job 814 finished: collect at HoodieSparkEngineContext.java:100, took 0.590468 s
21/12/01 01:22:31 INFO CleanPlanner: Requesting Cleaning with instant time [==>20211201012216421__clean__REQUESTED]
21/12/01 01:22:31 INFO AbstractHoodieWriteClient: Cleaner started
21/12/01 01:22:31 INFO AbstractHoodieWriteClient: Cleaned failed attempts if any
21/12/01 01:22:31 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:31 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:22:31 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:31 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:31 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__clean__REQUESTED]}
21/12/01 01:22:31 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:32 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:22:32 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:22:32 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:32 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:32 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:32 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST
21/12/01 01:22:32 INFO FileSystemViewManager: Creating remote first table view
21/12/01 01:22:32 INFO CleanActionExecutor: Finishing previously unfinished cleaner instant=[==>20211201012216421__clean__REQUESTED]
21/12/01 01:22:32 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201012216421.clean.requested
21/12/01 01:22:33 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201012216421.clean.inflight
21/12/01 01:22:33 INFO CleanActionExecutor: Using cleanerParallelism: 3
21/12/01 01:22:33 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:122
21/12/01 01:22:33 INFO DAGScheduler: Registering RDD 2737 (mapPartitionsToPair at HoodieSparkEngineContext.java:116) as input to shuffle 269
21/12/01 01:22:33 INFO DAGScheduler: Got job 815 (collect at HoodieSparkEngineContext.java:122) with 3 output partitions
21/12/01 01:22:33 INFO DAGScheduler: Final stage: ResultStage 1203 (collect at HoodieSparkEngineContext.java:122)
21/12/01 01:22:33 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1202)
21/12/01 01:22:33 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1202)
21/12/01 01:22:33 INFO DAGScheduler: Submitting ShuffleMapStage 1202 (MapPartitionsRDD[2737] at mapPartitionsToPair at HoodieSparkEngineContext.java:116), which has no missing parents
21/12/01 01:22:34 INFO MemoryStore: Block broadcast_1113 stored as values in memory (estimated size 608.8 KiB, free 363.9 MiB)
21/12/01 01:22:34 INFO MemoryStore: Block broadcast_1113_piece0 stored as bytes in memory (estimated size 213.2 KiB, free 363.7 MiB)
21/12/01 01:22:34 INFO BlockManagerInfo: Added broadcast_1113_piece0 in memory on 192.168.1.48:56496 (size: 213.2 KiB, free: 365.7 MiB)
21/12/01 01:22:34 INFO SparkContext: Created broadcast 1113 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:34 INFO DAGScheduler: Submitting 3 missing tasks from ShuffleMapStage 1202 (MapPartitionsRDD[2737] at mapPartitionsToPair at HoodieSparkEngineContext.java:116) (first 15 tasks are for partitions Vector(0, 1, 2))
21/12/01 01:22:34 INFO TaskSchedulerImpl: Adding task set 1202.0 with 3 tasks resource profile 0
21/12/01 01:22:34 INFO TaskSetManager: Starting task 0.0 in stage 1202.0 (TID 2247) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4616 bytes) taskResourceAssignments Map()
21/12/01 01:22:34 INFO TaskSetManager: Starting task 1.0 in stage 1202.0 (TID 2248) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4594 bytes) taskResourceAssignments Map()
21/12/01 01:22:34 INFO TaskSetManager: Starting task 2.0 in stage 1202.0 (TID 2249) (192.168.1.48, executor driver, partition 2, PROCESS_LOCAL, 4580 bytes) taskResourceAssignments Map()
21/12/01 01:22:34 INFO Executor: Running task 1.0 in stage 1202.0 (TID 2248)
21/12/01 01:22:34 INFO Executor: Running task 0.0 in stage 1202.0 (TID 2247)
21/12/01 01:22:34 INFO Executor: Running task 2.0 in stage 1202.0 (TID 2249)
21/12/01 01:22:34 INFO Executor: Finished task 0.0 in stage 1202.0 (TID 2247). 1088 bytes result sent to driver
21/12/01 01:22:34 INFO TaskSetManager: Finished task 0.0 in stage 1202.0 (TID 2247) in 516 ms on 192.168.1.48 (executor driver) (1/3)
21/12/01 01:22:34 INFO Executor: Finished task 2.0 in stage 1202.0 (TID 2249). 1088 bytes result sent to driver
21/12/01 01:22:34 INFO TaskSetManager: Finished task 2.0 in stage 1202.0 (TID 2249) in 936 ms on 192.168.1.48 (executor driver) (2/3)
21/12/01 01:22:34 INFO Executor: Finished task 1.0 in stage 1202.0 (TID 2248). 1088 bytes result sent to driver
21/12/01 01:22:34 INFO TaskSetManager: Finished task 1.0 in stage 1202.0 (TID 2248) in 957 ms on 192.168.1.48 (executor driver) (3/3)
21/12/01 01:22:34 INFO TaskSchedulerImpl: Removed TaskSet 1202.0, whose tasks have all completed, from pool
21/12/01 01:22:34 INFO DAGScheduler: ShuffleMapStage 1202 (mapPartitionsToPair at HoodieSparkEngineContext.java:116) finished in 1.035 s
21/12/01 01:22:34 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:22:34 INFO DAGScheduler: running: Set(ResultStage 1183)
21/12/01 01:22:34 INFO DAGScheduler: waiting: Set(ResultStage 1203)
21/12/01 01:22:34 INFO DAGScheduler: failed: Set()
21/12/01 01:22:34 INFO DAGScheduler: Submitting ResultStage 1203 (MapPartitionsRDD[2739] at map at HoodieSparkEngineContext.java:121), which has no missing parents
21/12/01 01:22:34 INFO MemoryStore: Block broadcast_1114 stored as values in memory (estimated size 7.7 KiB, free 363.7 MiB)
21/12/01 01:22:34 INFO MemoryStore: Block broadcast_1114_piece0 stored as bytes in memory (estimated size 4.0 KiB, free 363.7 MiB)
21/12/01 01:22:34 INFO BlockManagerInfo: Added broadcast_1114_piece0 in memory on 192.168.1.48:56496 (size: 4.0 KiB, free: 365.7 MiB)
21/12/01 01:22:34 INFO SparkContext: Created broadcast 1114 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:34 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 1203 (MapPartitionsRDD[2739] at map at HoodieSparkEngineContext.java:121) (first 15 tasks are for partitions Vector(0, 1, 2))
21/12/01 01:22:34 INFO TaskSchedulerImpl: Adding task set 1203.0 with 3 tasks resource profile 0
21/12/01 01:22:34 INFO TaskSetManager: Starting task 0.0 in stage 1203.0 (TID 2250) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:22:34 INFO TaskSetManager: Starting task 2.0 in stage 1203.0 (TID 2251) (192.168.1.48, executor driver, partition 2, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:22:34 INFO TaskSetManager: Starting task 1.0 in stage 1203.0 (TID 2252) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:22:34 INFO Executor: Running task 2.0 in stage 1203.0 (TID 2251)
21/12/01 01:22:34 INFO Executor: Running task 1.0 in stage 1203.0 (TID 2252)
21/12/01 01:22:34 INFO Executor: Running task 0.0 in stage 1203.0 (TID 2250)
21/12/01 01:22:34 INFO ShuffleBlockFetcherIterator: Getting 0 (0.0 B) non-empty blocks including 0 (0.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:22:34 INFO ShuffleBlockFetcherIterator: Getting 2 (527.0 B) non-empty blocks including 2 (527.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:22:34 INFO ShuffleBlockFetcherIterator: Getting 1 (276.0 B) non-empty blocks including 1 (276.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:22:34 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:34 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:34 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:34 INFO Executor: Finished task 1.0 in stage 1203.0 (TID 2252). 1140 bytes result sent to driver | |
21/12/01 01:22:34 INFO TaskSetManager: Finished task 1.0 in stage 1203.0 (TID 2252) in 2 ms on 192.168.1.48 (executor driver) (1/3) | |
21/12/01 01:22:34 INFO Executor: Finished task 0.0 in stage 1203.0 (TID 2250). 1778 bytes result sent to driver | |
21/12/01 01:22:34 INFO Executor: Finished task 2.0 in stage 1203.0 (TID 2251). 1573 bytes result sent to driver | |
21/12/01 01:22:34 INFO TaskSetManager: Finished task 0.0 in stage 1203.0 (TID 2250) in 4 ms on 192.168.1.48 (executor driver) (2/3) | |
21/12/01 01:22:34 INFO TaskSetManager: Finished task 2.0 in stage 1203.0 (TID 2251) in 4 ms on 192.168.1.48 (executor driver) (3/3) | |
21/12/01 01:22:34 INFO TaskSchedulerImpl: Removed TaskSet 1203.0, whose tasks have all completed, from pool | |
21/12/01 01:22:34 INFO DAGScheduler: ResultStage 1203 (collect at HoodieSparkEngineContext.java:122) finished in 0.005 s | |
21/12/01 01:22:34 INFO DAGScheduler: Job 815 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:34 INFO TaskSchedulerImpl: Killing all running tasks in stage 1203: Stage finished | |
21/12/01 01:22:34 INFO DAGScheduler: Job 815 finished: collect at HoodieSparkEngineContext.java:122, took 1.042042 s | |
21/12/01 01:22:35 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__clean__INFLIGHT]} | |
21/12/01 01:22:35 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:35 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:22:35 INFO HoodieCreateHandle: CreateHandle for partitionPath asia/india/chennai fileID d2e90ef3-db5d-4488-a112-512bd5889d86-0, took 44059 ms. | |
21/12/01 01:22:35 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads | |
21/12/01 01:22:35 INFO MemoryStore: Block rdd_2703_2 stored as values in memory (estimated size 401.0 B, free 363.7 MiB) | |
21/12/01 01:22:35 INFO BlockManagerInfo: Added rdd_2703_2 in memory on 192.168.1.48:56496 (size: 401.0 B, free: 365.7 MiB) | |
21/12/01 01:22:35 INFO Executor: Finished task 2.0 in stage 1183.0 (TID 2229). 1572 bytes result sent to driver | |
21/12/01 01:22:35 INFO TaskSetManager: Finished task 2.0 in stage 1183.0 (TID 2229) in 47854 ms on 192.168.1.48 (executor driver) (1/3) | |
21/12/01 01:22:35 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:36 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:36 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:22:36 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:36 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]} | |
21/12/01 01:22:36 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__clean__INFLIGHT]} | |
21/12/01 01:22:36 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:37 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:22:37 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:37 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:37 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:22:38 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:38 INFO HoodieTableMetadataUtil: Updating at 20211201012216421 from Clean. #partitions_updated=3, #files_deleted=3 | |
21/12/01 01:22:38 INFO HoodieTableMetadataUtil: Loading file groups for metadata table partition files | |
21/12/01 01:22:38 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]} | |
21/12/01 01:22:38 INFO AbstractTableFileSystemView: Took 1 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:39 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:39 INFO AbstractTableFileSystemView: Building file system view for partition (files) | |
21/12/01 01:22:39 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=14, NumFileGroups=1, FileGroupsCreationTime=1, StoreTimeTaken=0 | |
21/12/01 01:22:39 INFO AbstractHoodieClient: Embedded Timeline Server is disabled. Not starting timeline service | |
21/12/01 01:22:39 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:39 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:22:39 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:39 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:39 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201011944112__deltacommit__COMPLETED]} | |
21/12/01 01:22:39 INFO AbstractHoodieWriteClient: Generate a new instant time: 20211201012216421 action: deltacommit | |
21/12/01 01:22:39 INFO HoodieHeartbeatClient: Received request to start heartbeat for instant time 20211201012216421 | |
21/12/01 01:22:40 INFO HoodieActiveTimeline: Creating a new instant [==>20211201012216421__deltacommit__REQUESTED] | |
21/12/01 01:22:40 INFO HoodieCreateHandle: CreateHandle for partitionPath americas/united_states/san_francisco fileID 96ab0122-d7c8-4038-b5d1-9592dfd9e29f-0, took 49073 ms. | |
21/12/01 01:22:40 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads | |
21/12/01 01:22:40 INFO MemoryStore: Block rdd_2703_1 stored as values in memory (estimated size 437.0 B, free 363.7 MiB) | |
21/12/01 01:22:40 INFO BlockManagerInfo: Added rdd_2703_1 in memory on 192.168.1.48:56496 (size: 437.0 B, free: 365.7 MiB) | |
21/12/01 01:22:40 INFO Executor: Finished task 1.0 in stage 1183.0 (TID 2228). 1608 bytes result sent to driver | |
21/12/01 01:22:40 INFO TaskSetManager: Finished task 1.0 in stage 1183.0 (TID 2228) in 52840 ms on 192.168.1.48 (executor driver) (2/3) | |
21/12/01 01:22:40 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:40 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:22:41 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:41 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:41 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__REQUESTED]} | |
21/12/01 01:22:41 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:22:41 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:22:41 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:22:41 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:41 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:41 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__REQUESTED]} | |
21/12/01 01:22:41 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:41 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:41 INFO HoodieCreateHandle: CreateHandle for partitionPath americas/brazil/sao_paulo fileID c0bf7539-9317-4e0f-b82f-875fc3a17625-0, took 50306 ms. | |
21/12/01 01:22:41 INFO BoundedInMemoryExecutor: Queue Consumption is done; notifying producer threads | |
21/12/01 01:22:41 INFO MemoryStore: Block rdd_2703_0 stored as values in memory (estimated size 415.0 B, free 363.7 MiB) | |
21/12/01 01:22:41 INFO BlockManagerInfo: Added rdd_2703_0 in memory on 192.168.1.48:56496 (size: 415.0 B, free: 365.7 MiB) | |
21/12/01 01:22:41 INFO Executor: Finished task 0.0 in stage 1183.0 (TID 2227). 1586 bytes result sent to driver | |
21/12/01 01:22:41 INFO TaskSetManager: Finished task 0.0 in stage 1183.0 (TID 2227) in 54121 ms on 192.168.1.48 (executor driver) (3/3) | |
21/12/01 01:22:41 INFO TaskSchedulerImpl: Removed TaskSet 1183.0, whose tasks have all completed, from pool | |
21/12/01 01:22:41 INFO DAGScheduler: ResultStage 1183 (collect at SparkExecuteClusteringCommitActionExecutor.java:85) finished in 54.198 s | |
21/12/01 01:22:41 INFO DAGScheduler: Job 802 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:41 INFO TaskSchedulerImpl: Killing all running tasks in stage 1183: Stage finished | |
21/12/01 01:22:41 INFO DAGScheduler: Job 802 finished: collect at SparkExecuteClusteringCommitActionExecutor.java:85, took 67.560211 s | |
21/12/01 01:22:41 INFO BaseSparkCommitActionExecutor: no validators configured. | |
21/12/01 01:22:41 INFO BaseCommitActionExecutor: Auto commit disabled for 20211201011347895 | |
21/12/01 01:22:41 INFO CommitUtils: Creating metadata for CLUSTER numWriteStats:3numReplaceFileIds:3 | |
21/12/01 01:22:41 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/dir/exists?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011347895) | |
21/12/01 01:22:42 INFO TimelineServerBasedWriteMarkers: Sending request : (http://192.168.1.48:56507/v1/hoodie/marker/create-and-merge?markerdirpath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2%2F.hoodie%2F.temp%2F20211201011347895) | |
21/12/01 01:22:42 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:42 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now | |
21/12/01 01:22:42 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:22:42 INFO SparkContext: Starting job: countByKey at BaseSparkCommitActionExecutor.java:191 | |
21/12/01 01:22:42 INFO DAGScheduler: Registering RDD 2743 (countByKey at BaseSparkCommitActionExecutor.java:191) as input to shuffle 270 | |
21/12/01 01:22:42 INFO DAGScheduler: Got job 816 (countByKey at BaseSparkCommitActionExecutor.java:191) with 1 output partitions | |
21/12/01 01:22:42 INFO DAGScheduler: Final stage: ResultStage 1205 (countByKey at BaseSparkCommitActionExecutor.java:191) | |
21/12/01 01:22:42 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1204) | |
21/12/01 01:22:42 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1204) | |
21/12/01 01:22:42 INFO DAGScheduler: Submitting ShuffleMapStage 1204 (MapPartitionsRDD[2743] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents | |
21/12/01 01:22:42 INFO MemoryStore: Block broadcast_1115 stored as values in memory (estimated size 10.6 KiB, free 363.7 MiB) | |
21/12/01 01:22:42 INFO MemoryStore: Block broadcast_1115_piece0 stored as bytes in memory (estimated size 5.2 KiB, free 363.6 MiB) | |
21/12/01 01:22:42 INFO BlockManagerInfo: Added broadcast_1115_piece0 in memory on 192.168.1.48:56496 (size: 5.2 KiB, free: 365.7 MiB) | |
21/12/01 01:22:42 INFO SparkContext: Created broadcast 1115 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:42 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1204 (MapPartitionsRDD[2743] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:42 INFO TaskSchedulerImpl: Adding task set 1204.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:42 INFO TaskSetManager: Starting task 0.0 in stage 1204.0 (TID 2253) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4828 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:42 INFO Executor: Running task 0.0 in stage 1204.0 (TID 2253) | |
21/12/01 01:22:42 INFO MemoryStore: Block rdd_2741_0 stored as values in memory (estimated size 988.0 B, free 363.6 MiB) | |
21/12/01 01:22:42 INFO BlockManagerInfo: Added rdd_2741_0 in memory on 192.168.1.48:56496 (size: 988.0 B, free: 365.7 MiB) | |
21/12/01 01:22:42 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:42 INFO Executor: Finished task 0.0 in stage 1204.0 (TID 2253). 1043 bytes result sent to driver | |
21/12/01 01:22:42 INFO TaskSetManager: Finished task 0.0 in stage 1204.0 (TID 2253) in 4 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:42 INFO TaskSchedulerImpl: Removed TaskSet 1204.0, whose tasks have all completed, from pool | |
21/12/01 01:22:42 INFO DAGScheduler: ShuffleMapStage 1204 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.006 s | |
21/12/01 01:22:42 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:22:42 INFO DAGScheduler: running: Set() | |
21/12/01 01:22:42 INFO DAGScheduler: waiting: Set(ResultStage 1205) | |
21/12/01 01:22:42 INFO DAGScheduler: failed: Set() | |
21/12/01 01:22:42 INFO DAGScheduler: Submitting ResultStage 1205 (ShuffledRDD[2744] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents | |
21/12/01 01:22:42 INFO MemoryStore: Block broadcast_1116 stored as values in memory (estimated size 5.6 KiB, free 363.6 MiB) | |
21/12/01 01:22:42 INFO MemoryStore: Block broadcast_1116_piece0 stored as bytes in memory (estimated size 3.2 KiB, free 363.6 MiB) | |
21/12/01 01:22:42 INFO BlockManagerInfo: Added broadcast_1116_piece0 in memory on 192.168.1.48:56496 (size: 3.2 KiB, free: 365.7 MiB) | |
21/12/01 01:22:42 INFO SparkContext: Created broadcast 1116 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:42 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1205 (ShuffledRDD[2744] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:42 INFO TaskSchedulerImpl: Adding task set 1205.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:42 INFO TaskSetManager: Starting task 0.0 in stage 1205.0 (TID 2254) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:42 INFO Executor: Running task 0.0 in stage 1205.0 (TID 2254) | |
21/12/01 01:22:42 INFO ShuffleBlockFetcherIterator: Getting 1 (156.0 B) non-empty blocks including 1 (156.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:22:42 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:42 INFO Executor: Finished task 0.0 in stage 1205.0 (TID 2254). 1318 bytes result sent to driver | |
21/12/01 01:22:42 INFO TaskSetManager: Finished task 0.0 in stage 1205.0 (TID 2254) in 3 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:42 INFO TaskSchedulerImpl: Removed TaskSet 1205.0, whose tasks have all completed, from pool | |
21/12/01 01:22:42 INFO DAGScheduler: ResultStage 1205 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.004 s | |
21/12/01 01:22:42 INFO DAGScheduler: Job 816 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:42 INFO TaskSchedulerImpl: Killing all running tasks in stage 1205: Stage finished | |
21/12/01 01:22:42 INFO DAGScheduler: Job 816 finished: countByKey at BaseSparkCommitActionExecutor.java:191, took 0.011080 s | |
21/12/01 01:22:42 INFO BaseSparkCommitActionExecutor: Workload profile :WorkloadProfile {globalStat=WorkloadStat {numInserts=0, numUpdates=3}, partitionStat={files=WorkloadStat {numInserts=0, numUpdates=3}}, operationType=UPSERT_PREPPED} | |
21/12/01 01:22:42 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201012216421.deltacommit.requested | |
21/12/01 01:22:42 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:42 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:22:43 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:43 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__REQUESTED]} | |
21/12/01 01:22:43 INFO BlockManagerInfo: Removed broadcast_1112_piece0 on 192.168.1.48:56496 in memory (size: 191.8 KiB, free: 365.9 MiB) | |
21/12/01 01:22:43 INFO BlockManagerInfo: Removed broadcast_1115_piece0 on 192.168.1.48:56496 in memory (size: 5.2 KiB, free: 365.9 MiB) | |
21/12/01 01:22:43 INFO BlockManagerInfo: Removed broadcast_1113_piece0 on 192.168.1.48:56496 in memory (size: 213.2 KiB, free: 366.1 MiB) | |
21/12/01 01:22:43 INFO BlockManagerInfo: Removed broadcast_1114_piece0 on 192.168.1.48:56496 in memory (size: 4.0 KiB, free: 366.1 MiB) | |
21/12/01 01:22:43 INFO BlockManagerInfo: Removed broadcast_1116_piece0 on 192.168.1.48:56496 in memory (size: 3.2 KiB, free: 366.1 MiB) | |
21/12/01 01:22:43 INFO HoodieActiveTimeline: Created a new file in meta path: s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201012216421.deltacommit.inflight | |
21/12/01 01:22:43 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__clean__INFLIGHT]} | |
21/12/01 01:22:43 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:44 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201012216421.deltacommit.inflight | |
21/12/01 01:22:44 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:44 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties | |
21/12/01 01:22:44 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/ | |
21/12/01 01:22:44 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:44 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties | |
21/12/01 01:22:44 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:44 INFO SparkContext: Starting job: collect at SparkRejectUpdateStrategy.java:52 | |
21/12/01 01:22:44 INFO DAGScheduler: Registering RDD 2747 (distinct at SparkRejectUpdateStrategy.java:52) as input to shuffle 271 | |
21/12/01 01:22:44 INFO DAGScheduler: Got job 817 (collect at SparkRejectUpdateStrategy.java:52) with 1 output partitions | |
21/12/01 01:22:44 INFO DAGScheduler: Final stage: ResultStage 1207 (collect at SparkRejectUpdateStrategy.java:52) | |
21/12/01 01:22:44 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1206) | |
21/12/01 01:22:44 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1206) | |
21/12/01 01:22:44 INFO DAGScheduler: Submitting ShuffleMapStage 1206 (MapPartitionsRDD[2747] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents | |
21/12/01 01:22:44 INFO MemoryStore: Block broadcast_1117 stored as values in memory (estimated size 10.6 KiB, free 365.2 MiB) | |
21/12/01 01:22:44 INFO MemoryStore: Block broadcast_1117_piece0 stored as bytes in memory (estimated size 5.1 KiB, free 365.2 MiB) | |
21/12/01 01:22:44 INFO BlockManagerInfo: Added broadcast_1117_piece0 in memory on 192.168.1.48:56496 (size: 5.1 KiB, free: 366.1 MiB) | |
21/12/01 01:22:44 INFO SparkContext: Created broadcast 1117 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:44 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1206 (MapPartitionsRDD[2747] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:44 INFO TaskSchedulerImpl: Adding task set 1206.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:44 INFO TaskSetManager: Starting task 0.0 in stage 1206.0 (TID 2255) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4828 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:44 INFO Executor: Running task 0.0 in stage 1206.0 (TID 2255) | |
21/12/01 01:22:44 INFO BlockManager: Found block rdd_2741_0 locally | |
21/12/01 01:22:44 INFO Executor: Finished task 0.0 in stage 1206.0 (TID 2255). 1129 bytes result sent to driver | |
21/12/01 01:22:44 INFO TaskSetManager: Finished task 0.0 in stage 1206.0 (TID 2255) in 3 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:44 INFO TaskSchedulerImpl: Removed TaskSet 1206.0, whose tasks have all completed, from pool | |
21/12/01 01:22:44 INFO DAGScheduler: ShuffleMapStage 1206 (distinct at SparkRejectUpdateStrategy.java:52) finished in 0.004 s | |
21/12/01 01:22:44 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:22:44 INFO DAGScheduler: running: Set() | |
21/12/01 01:22:44 INFO DAGScheduler: waiting: Set(ResultStage 1207) | |
21/12/01 01:22:44 INFO DAGScheduler: failed: Set() | |
21/12/01 01:22:44 INFO DAGScheduler: Submitting ResultStage 1207 (MapPartitionsRDD[2749] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents | |
21/12/01 01:22:44 INFO MemoryStore: Block broadcast_1118 stored as values in memory (estimated size 6.4 KiB, free 365.2 MiB) | |
21/12/01 01:22:44 INFO MemoryStore: Block broadcast_1118_piece0 stored as bytes in memory (estimated size 3.5 KiB, free 365.2 MiB) | |
21/12/01 01:22:44 INFO BlockManagerInfo: Added broadcast_1118_piece0 in memory on 192.168.1.48:56496 (size: 3.5 KiB, free: 366.1 MiB) | |
21/12/01 01:22:44 INFO SparkContext: Created broadcast 1118 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:44 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1207 (MapPartitionsRDD[2749] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:44 INFO TaskSchedulerImpl: Adding task set 1207.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:44 INFO TaskSetManager: Starting task 0.0 in stage 1207.0 (TID 2256) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:44 INFO Executor: Running task 0.0 in stage 1207.0 (TID 2256) | |
21/12/01 01:22:44 INFO ShuffleBlockFetcherIterator: Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:22:44 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:44 INFO Executor: Finished task 0.0 in stage 1207.0 (TID 2256). 1249 bytes result sent to driver | |
21/12/01 01:22:44 INFO TaskSetManager: Finished task 0.0 in stage 1207.0 (TID 2256) in 3 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:44 INFO TaskSchedulerImpl: Removed TaskSet 1207.0, whose tasks have all completed, from pool | |
21/12/01 01:22:44 INFO DAGScheduler: ResultStage 1207 (collect at SparkRejectUpdateStrategy.java:52) finished in 0.004 s | |
21/12/01 01:22:44 INFO DAGScheduler: Job 817 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:44 INFO TaskSchedulerImpl: Killing all running tasks in stage 1207: Stage finished | |
21/12/01 01:22:44 INFO DAGScheduler: Job 817 finished: collect at SparkRejectUpdateStrategy.java:52, took 0.010425 s | |
21/12/01 01:22:44 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata | |
21/12/01 01:22:45 INFO HoodieTableMetadataUtil: Updating at 20211201011347895 from Commit/CLUSTER. #partitions_updated=4 | |
21/12/01 01:22:45 INFO HoodieTableMetadataUtil: Loading file groups for metadata table partition files | |
21/12/01 01:22:45 INFO UpsertPartitioner: AvgRecordSize => 1024 | |
21/12/01 01:22:45 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__INFLIGHT]} | |
21/12/01 01:22:45 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:45 INFO SparkContext: Starting job: collectAsMap at UpsertPartitioner.java:256 | |
21/12/01 01:22:45 INFO DAGScheduler: Got job 818 (collectAsMap at UpsertPartitioner.java:256) with 1 output partitions | |
21/12/01 01:22:45 INFO DAGScheduler: Final stage: ResultStage 1208 (collectAsMap at UpsertPartitioner.java:256) | |
21/12/01 01:22:45 INFO DAGScheduler: Parents of final stage: List() | |
21/12/01 01:22:45 INFO DAGScheduler: Missing parents: List() | |
21/12/01 01:22:45 INFO DAGScheduler: Submitting ResultStage 1208 (MapPartitionsRDD[2752] at mapToPair at UpsertPartitioner.java:255), which has no missing parents | |
21/12/01 01:22:45 INFO MemoryStore: Block broadcast_1119 stored as values in memory (estimated size 316.5 KiB, free 364.9 MiB) | |
21/12/01 01:22:45 INFO MemoryStore: Block broadcast_1119_piece0 stored as bytes in memory (estimated size 110.4 KiB, free 364.8 MiB) | |
21/12/01 01:22:45 INFO BlockManagerInfo: Added broadcast_1119_piece0 in memory on 192.168.1.48:56496 (size: 110.4 KiB, free: 366.0 MiB) | |
21/12/01 01:22:45 INFO SparkContext: Created broadcast 1119 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:45 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1208 (MapPartitionsRDD[2752] at mapToPair at UpsertPartitioner.java:255) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:45 INFO TaskSchedulerImpl: Adding task set 1208.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:45 INFO TaskSetManager: Starting task 0.0 in stage 1208.0 (TID 2257) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4338 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:45 INFO Executor: Running task 0.0 in stage 1208.0 (TID 2257) | |
21/12/01 01:22:45 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:22:45 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:22:45 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata
21/12/01 01:22:45 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups
21/12/01 01:22:45 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:45 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:22:45 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:932] ------- DUMPING STATS -------
21/12/01 01:22:45 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:933]
** DB Stats **
Uptime(secs): 4602.1 total, 300.0 interval
Cumulative writes: 6 writes, 259 keys, 6 commit groups, 0.9 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 6 writes, 0 syncs, 6.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [default] **
** Compaction Stats [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 300.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [default] **
** Compaction Stats [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_view_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_pending_compaction_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_bootstrap_basefile_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_partitions_s3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_replaced_fgs3a:__hudi-testing_test_hoodie_table_2] **
** Compaction Stats [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
** Compaction Stats [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blob file count: 0, total size: 0.0 GB
Uptime(secs): 4602.1 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
** File Read Latency Histogram By Level [hudi_pending_clustering_fgs3a:__hudi-testing_test_hoodie_table_2] **
21/12/01 01:22:45 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:704] STATISTICS:
rocksdb.block.cache.miss COUNT : 0
rocksdb.block.cache.hit COUNT : 0
rocksdb.block.cache.add COUNT : 0
rocksdb.block.cache.add.failures COUNT : 0
rocksdb.block.cache.index.miss COUNT : 0
rocksdb.block.cache.index.hit COUNT : 0
rocksdb.block.cache.index.add COUNT : 0
rocksdb.block.cache.index.bytes.insert COUNT : 0
rocksdb.block.cache.index.bytes.evict COUNT : 0
rocksdb.block.cache.filter.miss COUNT : 0
rocksdb.block.cache.filter.hit COUNT : 0
rocksdb.block.cache.filter.add COUNT : 0
rocksdb.block.cache.filter.bytes.insert COUNT : 0
rocksdb.block.cache.filter.bytes.evict COUNT : 0
rocksdb.block.cache.data.miss COUNT : 0
rocksdb.block.cache.data.hit COUNT : 0
rocksdb.block.cache.data.add COUNT : 0
rocksdb.block.cache.data.bytes.insert COUNT : 0
rocksdb.block.cache.bytes.read COUNT : 0
rocksdb.block.cache.bytes.write COUNT : 0
rocksdb.bloom.filter.useful COUNT : 0
rocksdb.bloom.filter.full.positive COUNT : 0
rocksdb.bloom.filter.full.true.positive COUNT : 0
rocksdb.bloom.filter.micros COUNT : 0
rocksdb.persistent.cache.hit COUNT : 0
rocksdb.persistent.cache.miss COUNT : 0
rocksdb.sim.block.cache.hit COUNT : 0
rocksdb.sim.block.cache.miss COUNT : 0
rocksdb.memtable.hit COUNT : 0
rocksdb.memtable.miss COUNT : 6
rocksdb.l0.hit COUNT : 0
rocksdb.l1.hit COUNT : 0
rocksdb.l2andup.hit COUNT : 0
rocksdb.compaction.key.drop.new COUNT : 0
rocksdb.compaction.key.drop.obsolete COUNT : 0
rocksdb.compaction.key.drop.range_del COUNT : 0
rocksdb.compaction.key.drop.user COUNT : 0
rocksdb.compaction.range_del.drop.obsolete COUNT : 0
rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0
rocksdb.compaction.cancelled COUNT : 0
rocksdb.number.keys.written COUNT : 259
rocksdb.number.keys.read COUNT : 6
rocksdb.number.keys.updated COUNT : 0
rocksdb.bytes.written COUNT : 44155
rocksdb.bytes.read COUNT : 0
rocksdb.number.db.seek COUNT : 2
rocksdb.number.db.next COUNT : 0
rocksdb.number.db.prev COUNT : 0
rocksdb.number.db.seek.found COUNT : 0
rocksdb.number.db.next.found COUNT : 0
rocksdb.number.db.prev.found COUNT : 0
rocksdb.db.iter.bytes.read COUNT : 0
rocksdb.no.file.closes COUNT : 0
rocksdb.no.file.opens COUNT : 0
rocksdb.no.file.errors COUNT : 0
rocksdb.l0.slowdown.micros COUNT : 0
rocksdb.memtable.compaction.micros COUNT : 0
rocksdb.l0.num.files.stall.micros COUNT : 0
rocksdb.stall.micros COUNT : 0
rocksdb.db.mutex.wait.micros COUNT : 0
rocksdb.rate.limit.delay.millis COUNT : 0
rocksdb.num.iterators COUNT : 0
rocksdb.number.multiget.get COUNT : 0
rocksdb.number.multiget.keys.read COUNT : 0
rocksdb.number.multiget.bytes.read COUNT : 0
rocksdb.number.deletes.filtered COUNT : 0
rocksdb.number.merge.failures COUNT : 0
rocksdb.bloom.filter.prefix.checked COUNT : 0
rocksdb.bloom.filter.prefix.useful COUNT : 0
rocksdb.number.reseeks.iteration COUNT : 0
rocksdb.getupdatessince.calls COUNT : 0
rocksdb.block.cachecompressed.miss COUNT : 0
rocksdb.block.cachecompressed.hit COUNT : 0
rocksdb.block.cachecompressed.add COUNT : 0
rocksdb.block.cachecompressed.add.failures COUNT : 0
rocksdb.wal.synced COUNT : 0
rocksdb.wal.bytes COUNT : 44155
rocksdb.write.self COUNT : 6
rocksdb.write.other COUNT : 0
rocksdb.write.timeout COUNT : 0
rocksdb.write.wal COUNT : 12
rocksdb.compact.read.bytes COUNT : 0
rocksdb.compact.write.bytes COUNT : 0
rocksdb.flush.write.bytes COUNT : 0
rocksdb.compact.read.marked.bytes COUNT : 0
rocksdb.compact.read.periodic.bytes COUNT : 0
rocksdb.compact.read.ttl.bytes COUNT : 0
rocksdb.compact.write.marked.bytes COUNT : 0
rocksdb.compact.write.periodic.bytes COUNT : 0
rocksdb.compact.write.ttl.bytes COUNT : 0
rocksdb.number.direct.load.table.properties COUNT : 0
rocksdb.number.superversion_acquires COUNT : 2
rocksdb.number.superversion_releases COUNT : 0
rocksdb.number.superversion_cleanups COUNT : 0
rocksdb.number.block.compressed COUNT : 0
rocksdb.number.block.decompressed COUNT : 0
rocksdb.number.block.not_compressed COUNT : 0
rocksdb.merge.operation.time.nanos COUNT : 0
rocksdb.filter.operation.time.nanos COUNT : 0
rocksdb.row.cache.hit COUNT : 0
rocksdb.row.cache.miss COUNT : 0
rocksdb.read.amp.estimate.useful.bytes COUNT : 0
rocksdb.read.amp.total.read.bytes COUNT : 0
rocksdb.number.rate_limiter.drains COUNT : 0
rocksdb.number.iter.skip COUNT : 0
rocksdb.blobdb.num.put COUNT : 0
rocksdb.blobdb.num.write COUNT : 0
rocksdb.blobdb.num.get COUNT : 0
rocksdb.blobdb.num.multiget COUNT : 0
rocksdb.blobdb.num.seek COUNT : 0
rocksdb.blobdb.num.next COUNT : 0
rocksdb.blobdb.num.prev COUNT : 0
rocksdb.blobdb.num.keys.written COUNT : 0
rocksdb.blobdb.num.keys.read COUNT : 0
rocksdb.blobdb.bytes.written COUNT : 0
rocksdb.blobdb.bytes.read COUNT : 0
rocksdb.blobdb.write.inlined COUNT : 0
rocksdb.blobdb.write.inlined.ttl COUNT : 0
rocksdb.blobdb.write.blob COUNT : 0
rocksdb.blobdb.write.blob.ttl COUNT : 0
rocksdb.blobdb.blob.file.bytes.written COUNT : 0
rocksdb.blobdb.blob.file.bytes.read COUNT : 0
rocksdb.blobdb.blob.file.synced COUNT : 0
rocksdb.blobdb.blob.index.expired.count COUNT : 0
rocksdb.blobdb.blob.index.expired.size COUNT : 0
rocksdb.blobdb.blob.index.evicted.count COUNT : 0
rocksdb.blobdb.blob.index.evicted.size COUNT : 0
rocksdb.blobdb.gc.num.files COUNT : 0
rocksdb.blobdb.gc.num.new.files COUNT : 0
rocksdb.blobdb.gc.failures COUNT : 0
rocksdb.blobdb.gc.num.keys.overwritten COUNT : 0
rocksdb.blobdb.gc.num.keys.expired COUNT : 0
rocksdb.blobdb.gc.num.keys.relocated COUNT : 0
rocksdb.blobdb.gc.bytes.overwritten COUNT : 0
rocksdb.blobdb.gc.bytes.expired COUNT : 0
rocksdb.blobdb.gc.bytes.relocated COUNT : 0
rocksdb.blobdb.fifo.num.files.evicted COUNT : 0
rocksdb.blobdb.fifo.num.keys.evicted COUNT : 0
rocksdb.blobdb.fifo.bytes.evicted COUNT : 0
rocksdb.txn.overhead.mutex.prepare COUNT : 0
rocksdb.txn.overhead.mutex.old.commit.map COUNT : 0
rocksdb.txn.overhead.duplicate.key COUNT : 0
rocksdb.txn.overhead.mutex.snapshot COUNT : 0 | |
rocksdb.txn.get.tryagain COUNT : 0 | |
rocksdb.number.multiget.keys.found COUNT : 0 | |
rocksdb.num.iterator.created COUNT : 2 | |
rocksdb.num.iterator.deleted COUNT : 2 | |
rocksdb.block.cache.compression.dict.miss COUNT : 0 | |
rocksdb.block.cache.compression.dict.hit COUNT : 0 | |
rocksdb.block.cache.compression.dict.add COUNT : 0 | |
rocksdb.block.cache.compression.dict.bytes.insert COUNT : 0 | |
rocksdb.block.cache.compression.dict.bytes.evict COUNT : 0 | |
rocksdb.block.cache.add.redundant COUNT : 0 | |
rocksdb.block.cache.index.add.redundant COUNT : 0 | |
rocksdb.block.cache.filter.add.redundant COUNT : 0 | |
rocksdb.block.cache.data.add.redundant COUNT : 0 | |
rocksdb.block.cache.compression.dict.add.redundant COUNT : 0 | |
rocksdb.files.marked.trash COUNT : 0 | |
rocksdb.files.deleted.immediately COUNT : 0 | |
rocksdb.error.handler.bg.errro.count COUNT : 0 | |
rocksdb.error.handler.bg.io.errro.count COUNT : 0 | |
rocksdb.error.handler.bg.retryable.io.errro.count COUNT : 0 | |
rocksdb.error.handler.autoresume.count COUNT : 0 | |
rocksdb.error.handler.autoresume.retry.total.count COUNT : 0 | |
rocksdb.error.handler.autoresume.success.count COUNT : 0 | |
rocksdb.db.get.micros P50 : 1.000000 P95 : 5.000000 P99 : 5.000000 P100 : 5.000000 COUNT : 6 SUM : 12
rocksdb.db.write.micros P50 : 93.000000 P95 : 150.000000 P99 : 150.000000 P100 : 150.000000 COUNT : 6 SUM : 537
rocksdb.compaction.times.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.compaction.times.cpu_micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.subcompaction.setup.times.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.table.sync.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.compaction.outfile.sync.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.wal.file.sync.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.manifest.file.sync.micros P50 : 150.000000 P95 : 207.000000 P99 : 207.000000 P100 : 207.000000 COUNT : 8 SUM : 1231
rocksdb.table.open.io.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.db.multiget.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.read.block.compaction.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.read.block.get.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.write.raw.block.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.l0.slowdown.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.memtable.compaction.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.files.stall.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.hard.rate.limit.delay.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.soft.rate.limit.delay.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.numfiles.in.singlecompaction P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.db.seek.micros P50 : 2.000000 P95 : 2.900000 P99 : 2.980000 P100 : 3.000000 COUNT : 2 SUM : 5
rocksdb.db.write.stall P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.sst.read.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.subcompactions.scheduled P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.bytes.per.read P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 6 SUM : 0
rocksdb.bytes.per.write P50 : 2900.000000 P95 : 15804.000000 P99 : 15804.000000 P100 : 15804.000000 COUNT : 6 SUM : 44155
rocksdb.bytes.per.multiget P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.bytes.compressed P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.bytes.decompressed P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.compression.times.nanos P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.decompression.times.nanos P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.read.num.merge_operands P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.key.size P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.value.size P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.write.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.get.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.multiget.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.seek.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.next.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.prev.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.blob.file.write.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.blob.file.read.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.blob.file.sync.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.gc.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.compression.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.decompression.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.db.flush.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.sst.batch.size P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.index.and.filter.blocks.read.per.level P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.data.blocks.read.per.level P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.sst.read.per.level P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.error.handler.autoresume.retry.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
21/12/01 01:22:45 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=14, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0
21/12/01 01:22:45 INFO Executor: Finished task 0.0 in stage 1208.0 (TID 2257). 829 bytes result sent to driver
21/12/01 01:22:45 INFO TaskSetManager: Finished task 0.0 in stage 1208.0 (TID 2257) in 353 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:45 INFO TaskSchedulerImpl: Removed TaskSet 1208.0, whose tasks have all completed, from pool
21/12/01 01:22:45 INFO DAGScheduler: ResultStage 1208 (collectAsMap at UpsertPartitioner.java:256) finished in 0.393 s
21/12/01 01:22:45 INFO DAGScheduler: Job 818 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:45 INFO TaskSchedulerImpl: Killing all running tasks in stage 1208: Stage finished
21/12/01 01:22:45 INFO DAGScheduler: Job 818 finished: collectAsMap at UpsertPartitioner.java:256, took 0.393124 s
21/12/01 01:22:45 INFO AbstractTableFileSystemView: Took 0 ms to read  0 instants, 0 replaced file groups
21/12/01 01:22:45 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:45 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:22:45 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=14, NumFileGroups=1, FileGroupsCreationTime=2, StoreTimeTaken=0
21/12/01 01:22:45 INFO AbstractHoodieClient: Embedded Timeline Server is disabled. Not starting timeline service
21/12/01 01:22:45 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:45 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:45 INFO UpsertPartitioner: Total Buckets :1, buckets info => {0=BucketInfo {bucketType=UPDATE, fileIdPrefix=files-0000, partitionPath=files}},
Partition to insert buckets => {},
UpdateLocations mapped to buckets =>{files-0000=0}
21/12/01 01:22:46 INFO BaseSparkCommitActionExecutor: no validators configured.
21/12/01 01:22:46 INFO BaseCommitActionExecutor: Auto commit enabled: Committing 20211201012216421
21/12/01 01:22:46 INFO SparkContext: Starting job: collect at BaseSparkCommitActionExecutor.java:274
21/12/01 01:22:46 INFO DAGScheduler: Registering RDD 2754 (mapToPair at BaseSparkCommitActionExecutor.java:225) as input to shuffle 272
21/12/01 01:22:46 INFO DAGScheduler: Got job 819 (collect at BaseSparkCommitActionExecutor.java:274) with 1 output partitions
21/12/01 01:22:46 INFO DAGScheduler: Final stage: ResultStage 1210 (collect at BaseSparkCommitActionExecutor.java:274)
21/12/01 01:22:46 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1209)
21/12/01 01:22:46 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1209)
21/12/01 01:22:46 INFO DAGScheduler: Submitting ShuffleMapStage 1209 (MapPartitionsRDD[2754] at mapToPair at BaseSparkCommitActionExecutor.java:225), which has no missing parents
21/12/01 01:22:46 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:46 INFO MemoryStore: Block broadcast_1120 stored as values in memory (estimated size 321.8 KiB, free 364.4 MiB)
21/12/01 01:22:46 INFO MemoryStore: Block broadcast_1120_piece0 stored as bytes in memory (estimated size 113.3 KiB, free 364.3 MiB)
21/12/01 01:22:46 INFO BlockManagerInfo: Added broadcast_1120_piece0 in memory on 192.168.1.48:56496 (size: 113.3 KiB, free: 365.9 MiB)
21/12/01 01:22:46 INFO SparkContext: Created broadcast 1120 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:46 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1209 (MapPartitionsRDD[2754] at mapToPair at BaseSparkCommitActionExecutor.java:225) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:46 INFO TaskSchedulerImpl: Adding task set 1209.0 with 1 tasks resource profile 0
21/12/01 01:22:46 INFO TaskSetManager: Starting task 0.0 in stage 1209.0 (TID 2258) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4828 bytes) taskResourceAssignments Map()
21/12/01 01:22:46 INFO Executor: Running task 0.0 in stage 1209.0 (TID 2258)
21/12/01 01:22:46 INFO BlockManager: Found block rdd_2741_0 locally
21/12/01 01:22:46 INFO Executor: Finished task 0.0 in stage 1209.0 (TID 2258). 1043 bytes result sent to driver
21/12/01 01:22:46 INFO TaskSetManager: Finished task 0.0 in stage 1209.0 (TID 2258) in 15 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:46 INFO TaskSchedulerImpl: Removed TaskSet 1209.0, whose tasks have all completed, from pool
21/12/01 01:22:46 INFO DAGScheduler: ShuffleMapStage 1209 (mapToPair at BaseSparkCommitActionExecutor.java:225) finished in 0.057 s
21/12/01 01:22:46 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:22:46 INFO DAGScheduler: running: Set()
21/12/01 01:22:46 INFO DAGScheduler: waiting: Set(ResultStage 1210)
21/12/01 01:22:46 INFO DAGScheduler: failed: Set()
21/12/01 01:22:46 INFO DAGScheduler: Submitting ResultStage 1210 (MapPartitionsRDD[2759] at map at BaseSparkCommitActionExecutor.java:274), which has no missing parents
21/12/01 01:22:46 INFO MemoryStore: Block broadcast_1121 stored as values in memory (estimated size 425.0 KiB, free 363.9 MiB)
21/12/01 01:22:46 INFO MemoryStore: Block broadcast_1121_piece0 stored as bytes in memory (estimated size 150.3 KiB, free 363.8 MiB)
21/12/01 01:22:46 INFO BlockManagerInfo: Added broadcast_1121_piece0 in memory on 192.168.1.48:56496 (size: 150.3 KiB, free: 365.7 MiB)
21/12/01 01:22:46 INFO SparkContext: Created broadcast 1121 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:46 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1210 (MapPartitionsRDD[2759] at map at BaseSparkCommitActionExecutor.java:274) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:46 INFO TaskSchedulerImpl: Adding task set 1210.0 with 1 tasks resource profile 0
21/12/01 01:22:46 INFO TaskSetManager: Starting task 0.0 in stage 1210.0 (TID 2259) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:22:46 INFO Executor: Running task 0.0 in stage 1210.0 (TID 2259)
21/12/01 01:22:46 INFO ShuffleBlockFetcherIterator: Getting 1 (445.0 B) non-empty blocks including 1 (445.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:22:46 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:22:46 INFO AbstractSparkDeltaCommitActionExecutor: Merging updates for commit 20211201012216421 for file files-0000
21/12/01 01:22:46 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:22:46 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:22:46 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata
21/12/01 01:22:46 INFO AbstractTableFileSystemView: Took 0 ms to read  0 instants, 0 replaced file groups
21/12/01 01:22:46 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:46 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:46 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:46 INFO AbstractTableFileSystemView: Building file system view for partition (files)
21/12/01 01:22:46 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__INFLIGHT]}
21/12/01 01:22:46 INFO AbstractHoodieWriteClient: Generate a new instant time: 20211201011347895 action: deltacommit
21/12/01 01:22:46 INFO HoodieHeartbeatClient: Received request to start heartbeat for instant time 20211201011347895
21/12/01 01:22:46 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=14, NumFileGroups=1, FileGroupsCreationTime=1, StoreTimeTaken=0
21/12/01 01:22:46 INFO HoodieActiveTimeline: Creating a new instant [==>20211201011347895__deltacommit__REQUESTED]
21/12/01 01:22:47 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:47 INFO DirectWriteMarkers: Creating Marker Path=s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201012216421/files/files-0000_0-1210-2259_20211201004828250001.hfile.marker.APPEND
21/12/01 01:22:48 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:22:48 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:48 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:22:48 INFO DirectWriteMarkers: [direct] Created marker file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201012216421/files/files-0000_0-1210-2259_20211201004828250001.hfile.marker.APPEND in 2012 ms
21/12/01 01:22:48 INFO HoodieLogFormat$WriterBuilder: Building HoodieLogFormat Writer
21/12/01 01:22:48 INFO HoodieLogFormat$WriterBuilder: HoodieLogFile on path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.12_0-1194-2236
21/12/01 01:22:48 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__INFLIGHT]}
21/12/01 01:22:48 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY
21/12/01 01:22:48 INFO FileSystemViewManager: Creating in-memory based Table View
21/12/01 01:22:48 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata
21/12/01 01:22:48 INFO AbstractTableFileSystemView: Took 0 ms to read  0 instants, 0 replaced file groups
21/12/01 01:22:48 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:49 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[==>20211201012216421__deltacommit__INFLIGHT]}
21/12/01 01:22:49 INFO AbstractTableFileSystemView: Took 1 ms to read  0 instants, 0 replaced file groups
21/12/01 01:22:49 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:49 INFO HoodieLogFormatWriter: Append not supported.. Rolling over to HoodieLogFile{pathStr='s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.13_0-1210-2259', fileLen=0}
21/12/01 01:22:49 INFO CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=406512, freeSize=394696944, maxSize=395103456, heapSize=406512, minSize=375348288, minFactor=0.95, multiSize=187674144, multiFactor=0.5, singleSize=93837072, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
21/12/01 01:22:49 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:22:49 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:22:49 INFO HoodieAppendHandle: AppendHandle for partitionPath files filePath files/.files-0000_20211201004828250001.log.13_0-1210-2259, took 3196 ms.
21/12/01 01:22:49 INFO AsyncCleanerService: Async auto cleaning is not enabled. Not running cleaner now
21/12/01 01:22:50 INFO SparkContext: Starting job: countByKey at BaseSparkCommitActionExecutor.java:191
21/12/01 01:22:50 INFO DAGScheduler: Registering RDD 2761 (countByKey at BaseSparkCommitActionExecutor.java:191) as input to shuffle 273
21/12/01 01:22:50 INFO DAGScheduler: Got job 820 (countByKey at BaseSparkCommitActionExecutor.java:191) with 1 output partitions
21/12/01 01:22:50 INFO DAGScheduler: Final stage: ResultStage 1212 (countByKey at BaseSparkCommitActionExecutor.java:191)
21/12/01 01:22:50 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1211)
21/12/01 01:22:50 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1211)
21/12/01 01:22:50 INFO DAGScheduler: Submitting ShuffleMapStage 1211 (MapPartitionsRDD[2761] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents
21/12/01 01:22:50 INFO MemoryStore: Block broadcast_1122 stored as values in memory (estimated size 10.6 KiB, free 363.8 MiB)
21/12/01 01:22:50 INFO MemoryStore: Block broadcast_1122_piece0 stored as bytes in memory (estimated size 5.2 KiB, free 363.7 MiB)
21/12/01 01:22:50 INFO BlockManagerInfo: Added broadcast_1122_piece0 in memory on 192.168.1.48:56496 (size: 5.2 KiB, free: 365.7 MiB)
21/12/01 01:22:50 INFO SparkContext: Created broadcast 1122 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:50 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1211 (MapPartitionsRDD[2761] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:50 INFO TaskSchedulerImpl: Adding task set 1211.0 with 1 tasks resource profile 0
21/12/01 01:22:50 INFO TaskSetManager: Starting task 0.0 in stage 1211.0 (TID 2260) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map()
21/12/01 01:22:50 INFO Executor: Running task 0.0 in stage 1211.0 (TID 2260)
21/12/01 01:22:50 INFO MemoryStore: Block rdd_2753_0 stored as values in memory (estimated size 1337.0 B, free 363.7 MiB)
21/12/01 01:22:50 INFO BlockManagerInfo: Added rdd_2753_0 in memory on 192.168.1.48:56496 (size: 1337.0 B, free: 365.7 MiB)
21/12/01 01:22:50 INFO Executor: Finished task 0.0 in stage 1211.0 (TID 2260). 1043 bytes result sent to driver
21/12/01 01:22:50 INFO TaskSetManager: Finished task 0.0 in stage 1211.0 (TID 2260) in 5 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:50 INFO TaskSchedulerImpl: Removed TaskSet 1211.0, whose tasks have all completed, from pool
21/12/01 01:22:50 INFO DAGScheduler: ShuffleMapStage 1211 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.006 s
21/12/01 01:22:50 INFO DAGScheduler: looking for newly runnable stages
21/12/01 01:22:50 INFO DAGScheduler: running: Set(ResultStage 1210)
21/12/01 01:22:50 INFO DAGScheduler: waiting: Set(ResultStage 1212)
21/12/01 01:22:50 INFO DAGScheduler: failed: Set()
21/12/01 01:22:50 INFO DAGScheduler: Submitting ResultStage 1212 (ShuffledRDD[2762] at countByKey at BaseSparkCommitActionExecutor.java:191), which has no missing parents
21/12/01 01:22:50 INFO MemoryStore: Block broadcast_1123 stored as values in memory (estimated size 5.6 KiB, free 363.7 MiB)
21/12/01 01:22:50 INFO MemoryStore: Block broadcast_1123_piece0 stored as bytes in memory (estimated size 3.2 KiB, free 363.7 MiB)
21/12/01 01:22:50 INFO BlockManagerInfo: Added broadcast_1123_piece0 in memory on 192.168.1.48:56496 (size: 3.2 KiB, free: 365.7 MiB)
21/12/01 01:22:50 INFO SparkContext: Created broadcast 1123 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:50 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1212 (ShuffledRDD[2762] at countByKey at BaseSparkCommitActionExecutor.java:191) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:50 INFO TaskSchedulerImpl: Adding task set 1212.0 with 1 tasks resource profile 0
21/12/01 01:22:50 INFO TaskSetManager: Starting task 0.0 in stage 1212.0 (TID 2261) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:22:50 INFO Executor: Running task 0.0 in stage 1212.0 (TID 2261)
21/12/01 01:22:50 INFO ShuffleBlockFetcherIterator: Getting 1 (156.0 B) non-empty blocks including 1 (156.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21/12/01 01:22:50 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/12/01 01:22:50 INFO Executor: Finished task 0.0 in stage 1212.0 (TID 2261). 1318 bytes result sent to driver
21/12/01 01:22:50 INFO TaskSetManager: Finished task 0.0 in stage 1212.0 (TID 2261) in 3 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:50 INFO TaskSchedulerImpl: Removed TaskSet 1212.0, whose tasks have all completed, from pool
21/12/01 01:22:50 INFO DAGScheduler: ResultStage 1212 (countByKey at BaseSparkCommitActionExecutor.java:191) finished in 0.005 s
21/12/01 01:22:50 INFO DAGScheduler: Job 820 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:50 INFO TaskSchedulerImpl: Killing all running tasks in stage 1212: Stage finished
21/12/01 01:22:50 INFO DAGScheduler: Job 820 finished: countByKey at BaseSparkCommitActionExecutor.java:191, took 0.012054 s
21/12/01 01:22:50 INFO BaseSparkCommitActionExecutor: Workload profile :WorkloadProfile {globalStat=WorkloadStat {numInserts=0, numUpdates=4}, partitionStat={files=WorkloadStat {numInserts=0, numUpdates=4}}, operationType=UPSERT_PREPPED}
21/12/01 01:22:50 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011347895.deltacommit.requested
21/12/01 01:22:50 INFO MemoryStore: Block rdd_2758_0 stored as values in memory (estimated size 1063.0 B, free 363.7 MiB)
21/12/01 01:22:50 INFO BlockManagerInfo: Added rdd_2758_0 in memory on 192.168.1.48:56496 (size: 1063.0 B, free: 365.7 MiB)
21/12/01 01:22:50 INFO Executor: Finished task 0.0 in stage 1210.0 (TID 2259). 2212 bytes result sent to driver
21/12/01 01:22:50 INFO TaskSetManager: Finished task 0.0 in stage 1210.0 (TID 2259) in 4184 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:50 INFO TaskSchedulerImpl: Removed TaskSet 1210.0, whose tasks have all completed, from pool
21/12/01 01:22:50 INFO DAGScheduler: ResultStage 1210 (collect at BaseSparkCommitActionExecutor.java:274) finished in 4.234 s
21/12/01 01:22:50 INFO DAGScheduler: Job 819 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:50 INFO TaskSchedulerImpl: Killing all running tasks in stage 1210: Stage finished
21/12/01 01:22:50 INFO DAGScheduler: Job 819 finished: collect at BaseSparkCommitActionExecutor.java:274, took 4.293302 s
21/12/01 01:22:50 INFO BaseSparkCommitActionExecutor: Committing 20211201012216421, action Type deltacommit
21/12/01 01:22:50 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:134
21/12/01 01:22:50 INFO DAGScheduler: Got job 821 (collect at HoodieSparkEngineContext.java:134) with 1 output partitions
21/12/01 01:22:50 INFO DAGScheduler: Final stage: ResultStage 1213 (collect at HoodieSparkEngineContext.java:134)
21/12/01 01:22:50 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:50 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:50 INFO DAGScheduler: Submitting ResultStage 1213 (MapPartitionsRDD[2764] at flatMap at HoodieSparkEngineContext.java:134), which has no missing parents
21/12/01 01:22:50 INFO MemoryStore: Block broadcast_1124 stored as values in memory (estimated size 99.4 KiB, free 363.6 MiB)
21/12/01 01:22:50 INFO MemoryStore: Block broadcast_1124_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 363.6 MiB)
21/12/01 01:22:50 INFO BlockManagerInfo: Added broadcast_1124_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.7 MiB)
21/12/01 01:22:50 INFO SparkContext: Created broadcast 1124 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:50 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1213 (MapPartitionsRDD[2764] at flatMap at HoodieSparkEngineContext.java:134) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:50 INFO TaskSchedulerImpl: Adding task set 1213.0 with 1 tasks resource profile 0
21/12/01 01:22:50 INFO TaskSetManager: Starting task 0.0 in stage 1213.0 (TID 2262) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:22:50 INFO Executor: Running task 0.0 in stage 1213.0 (TID 2262)
21/12/01 01:22:50 INFO Executor: Finished task 0.0 in stage 1213.0 (TID 2262). 796 bytes result sent to driver
21/12/01 01:22:50 INFO TaskSetManager: Finished task 0.0 in stage 1213.0 (TID 2262) in 113 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:50 INFO TaskSchedulerImpl: Removed TaskSet 1213.0, whose tasks have all completed, from pool
21/12/01 01:22:50 INFO DAGScheduler: ResultStage 1213 (collect at HoodieSparkEngineContext.java:134) finished in 0.129 s
21/12/01 01:22:50 INFO DAGScheduler: Job 821 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:50 INFO TaskSchedulerImpl: Killing all running tasks in stage 1213: Stage finished
21/12/01 01:22:50 INFO DAGScheduler: Job 821 finished: collect at HoodieSparkEngineContext.java:134, took 0.129931 s
21/12/01 01:22:50 INFO CommitUtils: Creating  metadata for UPSERT_PREPPED numWriteStats:1numReplaceFileIds:0
21/12/01 01:22:50 INFO HoodieActiveTimeline: Marking instant complete [==>20211201012216421__deltacommit__INFLIGHT]
21/12/01 01:22:50 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201012216421.deltacommit.inflight
21/12/01 01:22:51 INFO HoodieActiveTimeline: Created a new file in meta path: s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011347895.deltacommit.inflight
21/12/01 01:22:51 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201012216421.deltacommit
21/12/01 01:22:51 INFO HoodieActiveTimeline: Completed [==>20211201012216421__deltacommit__INFLIGHT]
21/12/01 01:22:51 INFO BaseSparkCommitActionExecutor: Committed 20211201012216421
21/12/01 01:22:51 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011347895.deltacommit.inflight
21/12/01 01:22:51 INFO AbstractTableFileSystemView: Took 0 ms to read  0 instants, 0 replaced file groups
21/12/01 01:22:52 INFO ClusteringUtils: Found 0 files in pending clustering operations
21/12/01 01:22:52 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:22:52 INFO DAGScheduler: Got job 822 (collectAsMap at HoodieSparkEngineContext.java:148) with 1 output partitions
21/12/01 01:22:52 INFO DAGScheduler: Final stage: ResultStage 1214 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:22:52 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:52 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:52 INFO DAGScheduler: Submitting ResultStage 1214 (MapPartitionsRDD[2766] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:22:52 INFO SparkContext: Starting job: collect at SparkRejectUpdateStrategy.java:52
21/12/01 01:22:52 INFO MemoryStore: Block broadcast_1125 stored as values in memory (estimated size 99.6 KiB, free 363.5 MiB)
21/12/01 01:22:52 INFO MemoryStore: Block broadcast_1125_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 363.5 MiB)
21/12/01 01:22:52 INFO BlockManagerInfo: Added broadcast_1125_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.6 MiB)
21/12/01 01:22:52 INFO SparkContext: Created broadcast 1125 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1214 (MapPartitionsRDD[2766] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:52 INFO TaskSchedulerImpl: Adding task set 1214.0 with 1 tasks resource profile 0
21/12/01 01:22:52 INFO TaskSetManager: Starting task 0.0 in stage 1214.0 (TID 2263) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:22:52 INFO DAGScheduler: Registering RDD 2769 (distinct at SparkRejectUpdateStrategy.java:52) as input to shuffle 274
21/12/01 01:22:52 INFO Executor: Running task 0.0 in stage 1214.0 (TID 2263) | |
21/12/01 01:22:52 INFO DAGScheduler: Got job 823 (collect at SparkRejectUpdateStrategy.java:52) with 1 output partitions | |
21/12/01 01:22:52 INFO DAGScheduler: Final stage: ResultStage 1216 (collect at SparkRejectUpdateStrategy.java:52) | |
21/12/01 01:22:52 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1215) | |
21/12/01 01:22:52 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1215) | |
21/12/01 01:22:52 INFO DAGScheduler: Submitting ShuffleMapStage 1215 (MapPartitionsRDD[2769] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents | |
21/12/01 01:22:52 INFO MemoryStore: Block broadcast_1126 stored as values in memory (estimated size 10.6 KiB, free 363.5 MiB) | |
21/12/01 01:22:52 INFO MemoryStore: Block broadcast_1126_piece0 stored as bytes in memory (estimated size 5.1 KiB, free 363.5 MiB) | |
21/12/01 01:22:52 INFO BlockManagerInfo: Added broadcast_1126_piece0 in memory on 192.168.1.48:56496 (size: 5.1 KiB, free: 365.6 MiB) | |
21/12/01 01:22:52 INFO SparkContext: Created broadcast 1126 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:52 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1215 (MapPartitionsRDD[2769] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:52 INFO TaskSchedulerImpl: Adding task set 1215.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:52 INFO TaskSetManager: Starting task 0.0 in stage 1215.0 (TID 2264) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:52 INFO Executor: Running task 0.0 in stage 1215.0 (TID 2264) | |
21/12/01 01:22:52 INFO BlockManager: Found block rdd_2753_0 locally | |
21/12/01 01:22:52 INFO Executor: Finished task 0.0 in stage 1215.0 (TID 2264). 1129 bytes result sent to driver | |
21/12/01 01:22:52 INFO TaskSetManager: Finished task 0.0 in stage 1215.0 (TID 2264) in 4 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:52 INFO TaskSchedulerImpl: Removed TaskSet 1215.0, whose tasks have all completed, from pool | |
21/12/01 01:22:52 INFO DAGScheduler: ShuffleMapStage 1215 (distinct at SparkRejectUpdateStrategy.java:52) finished in 0.006 s | |
21/12/01 01:22:52 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:22:52 INFO DAGScheduler: running: Set(ResultStage 1214) | |
21/12/01 01:22:52 INFO DAGScheduler: waiting: Set(ResultStage 1216) | |
21/12/01 01:22:52 INFO DAGScheduler: failed: Set() | |
21/12/01 01:22:52 INFO DAGScheduler: Submitting ResultStage 1216 (MapPartitionsRDD[2771] at distinct at SparkRejectUpdateStrategy.java:52), which has no missing parents | |
21/12/01 01:22:52 INFO MemoryStore: Block broadcast_1127 stored as values in memory (estimated size 6.4 KiB, free 363.5 MiB) | |
21/12/01 01:22:52 INFO MemoryStore: Block broadcast_1127_piece0 stored as bytes in memory (estimated size 3.5 KiB, free 363.4 MiB) | |
21/12/01 01:22:52 INFO BlockManagerInfo: Added broadcast_1127_piece0 in memory on 192.168.1.48:56496 (size: 3.5 KiB, free: 365.6 MiB) | |
21/12/01 01:22:52 INFO SparkContext: Created broadcast 1127 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1216 (MapPartitionsRDD[2771] at distinct at SparkRejectUpdateStrategy.java:52) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:52 INFO TaskSchedulerImpl: Adding task set 1216.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:52 INFO TaskSetManager: Starting task 0.0 in stage 1216.0 (TID 2265) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:52 INFO Executor: Running task 0.0 in stage 1216.0 (TID 2265) | |
21/12/01 01:22:52 INFO ShuffleBlockFetcherIterator: Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:22:52 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:52 INFO Executor: Finished task 0.0 in stage 1216.0 (TID 2265). 1249 bytes result sent to driver | |
21/12/01 01:22:52 INFO TaskSetManager: Finished task 0.0 in stage 1216.0 (TID 2265) in 3 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:52 INFO TaskSchedulerImpl: Removed TaskSet 1216.0, whose tasks have all completed, from pool | |
21/12/01 01:22:52 INFO DAGScheduler: ResultStage 1216 (collect at SparkRejectUpdateStrategy.java:52) finished in 0.004 s | |
21/12/01 01:22:52 INFO DAGScheduler: Job 823 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 1216: Stage finished | |
21/12/01 01:22:52 INFO DAGScheduler: Job 823 finished: collect at SparkRejectUpdateStrategy.java:52, took 0.021897 s | |
21/12/01 01:22:52 INFO UpsertPartitioner: AvgRecordSize => 1024 | |
21/12/01 01:22:52 INFO SparkContext: Starting job: collectAsMap at UpsertPartitioner.java:256 | |
21/12/01 01:22:52 INFO DAGScheduler: Got job 824 (collectAsMap at UpsertPartitioner.java:256) with 1 output partitions | |
21/12/01 01:22:52 INFO DAGScheduler: Final stage: ResultStage 1217 (collectAsMap at UpsertPartitioner.java:256) | |
21/12/01 01:22:52 INFO DAGScheduler: Parents of final stage: List() | |
21/12/01 01:22:52 INFO DAGScheduler: Missing parents: List() | |
21/12/01 01:22:52 INFO DAGScheduler: Submitting ResultStage 1217 (MapPartitionsRDD[2773] at mapToPair at UpsertPartitioner.java:255), which has no missing parents | |
21/12/01 01:22:53 INFO MemoryStore: Block broadcast_1128 stored as values in memory (estimated size 316.6 KiB, free 363.1 MiB) | |
21/12/01 01:22:53 INFO MemoryStore: Block broadcast_1128_piece0 stored as bytes in memory (estimated size 110.4 KiB, free 363.3 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Added broadcast_1128_piece0 in memory on 192.168.1.48:56496 (size: 110.4 KiB, free: 365.5 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1120_piece0 on 192.168.1.48:56496 in memory (size: 113.3 KiB, free: 365.6 MiB) | |
21/12/01 01:22:53 INFO SparkContext: Created broadcast 1128 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:53 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1217 (MapPartitionsRDD[2773] at mapToPair at UpsertPartitioner.java:255) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:53 INFO TaskSchedulerImpl: Adding task set 1217.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:53 INFO TaskSetManager: Starting task 0.0 in stage 1217.0 (TID 2266) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4338 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:53 INFO Executor: Running task 0.0 in stage 1217.0 (TID 2266) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1118_piece0 on 192.168.1.48:56496 in memory (size: 3.5 KiB, free: 365.6 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1126_piece0 on 192.168.1.48:56496 in memory (size: 5.1 KiB, free: 365.6 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1121_piece0 on 192.168.1.48:56496 in memory (size: 150.3 KiB, free: 365.8 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1123_piece0 on 192.168.1.48:56496 in memory (size: 3.2 KiB, free: 365.8 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1122_piece0 on 192.168.1.48:56496 in memory (size: 5.2 KiB, free: 365.8 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1124_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.8 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1119_piece0 on 192.168.1.48:56496 in memory (size: 110.4 KiB, free: 365.9 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1127_piece0 on 192.168.1.48:56496 in memory (size: 3.5 KiB, free: 365.9 MiB) | |
21/12/01 01:22:53 INFO BlockManagerInfo: Removed broadcast_1117_piece0 on 192.168.1.48:56496 in memory (size: 5.1 KiB, free: 365.9 MiB) | |
21/12/01 01:22:53 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:22:53 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:22:53 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:22:53 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:53 INFO Executor: Finished task 0.0 in stage 1214.0 (TID 2263). 941 bytes result sent to driver | |
21/12/01 01:22:53 INFO TaskSetManager: Finished task 0.0 in stage 1214.0 (TID 2263) in 846 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:53 INFO TaskSchedulerImpl: Removed TaskSet 1214.0, whose tasks have all completed, from pool | |
21/12/01 01:22:53 INFO DAGScheduler: ResultStage 1214 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 0.861 s | |
21/12/01 01:22:53 INFO DAGScheduler: Job 822 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:53 INFO TaskSchedulerImpl: Killing all running tasks in stage 1214: Stage finished | |
21/12/01 01:22:53 INFO DAGScheduler: Job 822 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 0.860936 s | |
21/12/01 01:22:53 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:53 INFO AbstractTableFileSystemView: Building file system view for partition (files) | |
21/12/01 01:22:53 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=15, NumFileGroups=1, FileGroupsCreationTime=3, StoreTimeTaken=0 | |
21/12/01 01:22:53 INFO Executor: Finished task 0.0 in stage 1217.0 (TID 2266). 829 bytes result sent to driver | |
21/12/01 01:22:53 INFO TaskSetManager: Finished task 0.0 in stage 1217.0 (TID 2266) in 651 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:53 INFO TaskSchedulerImpl: Removed TaskSet 1217.0, whose tasks have all completed, from pool | |
21/12/01 01:22:53 INFO DAGScheduler: ResultStage 1217 (collectAsMap at UpsertPartitioner.java:256) finished in 0.723 s | |
21/12/01 01:22:53 INFO DAGScheduler: Job 824 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:22:53 INFO TaskSchedulerImpl: Killing all running tasks in stage 1217: Stage finished | |
21/12/01 01:22:53 INFO DAGScheduler: Job 824 finished: collectAsMap at UpsertPartitioner.java:256, took 0.724183 s | |
21/12/01 01:22:53 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:53 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:53 INFO UpsertPartitioner: Total Buckets :1, buckets info => {0=BucketInfo {bucketType=UPDATE, fileIdPrefix=files-0000, partitionPath=files}}, | |
Partition to insert buckets => {}, | |
UpdateLocations mapped to buckets =>{files-0000=0} | |
21/12/01 01:22:53 INFO BaseSparkCommitActionExecutor: no validators configured. | |
21/12/01 01:22:53 INFO BaseCommitActionExecutor: Auto commit enabled: Committing 20211201011347895 | |
21/12/01 01:22:54 INFO SparkContext: Starting job: collect at BaseSparkCommitActionExecutor.java:274 | |
21/12/01 01:22:54 INFO DAGScheduler: Registering RDD 2774 (mapToPair at BaseSparkCommitActionExecutor.java:225) as input to shuffle 275 | |
21/12/01 01:22:54 INFO DAGScheduler: Got job 825 (collect at BaseSparkCommitActionExecutor.java:274) with 1 output partitions | |
21/12/01 01:22:54 INFO DAGScheduler: Final stage: ResultStage 1219 (collect at BaseSparkCommitActionExecutor.java:274) | |
21/12/01 01:22:54 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1218) | |
21/12/01 01:22:54 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1218) | |
21/12/01 01:22:54 INFO DAGScheduler: Submitting ShuffleMapStage 1218 (MapPartitionsRDD[2774] at mapToPair at BaseSparkCommitActionExecutor.java:225), which has no missing parents | |
21/12/01 01:22:54 INFO MemoryStore: Block broadcast_1129 stored as values in memory (estimated size 321.9 KiB, free 364.3 MiB) | |
21/12/01 01:22:54 INFO MemoryStore: Block broadcast_1129_piece0 stored as bytes in memory (estimated size 113.3 KiB, free 364.2 MiB) | |
21/12/01 01:22:54 INFO BlockManagerInfo: Added broadcast_1129_piece0 in memory on 192.168.1.48:56496 (size: 113.3 KiB, free: 365.8 MiB) | |
21/12/01 01:22:54 INFO SparkContext: Created broadcast 1129 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:54 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1218 (MapPartitionsRDD[2774] at mapToPair at BaseSparkCommitActionExecutor.java:225) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:54 INFO TaskSchedulerImpl: Adding task set 1218.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:54 INFO TaskSetManager: Starting task 0.0 in stage 1218.0 (TID 2267) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4898 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:54 INFO Executor: Running task 0.0 in stage 1218.0 (TID 2267) | |
21/12/01 01:22:54 INFO BlockManager: Found block rdd_2753_0 locally | |
21/12/01 01:22:54 INFO Executor: Finished task 0.0 in stage 1218.0 (TID 2267). 1043 bytes result sent to driver | |
21/12/01 01:22:54 INFO TaskSetManager: Finished task 0.0 in stage 1218.0 (TID 2267) in 15 ms on 192.168.1.48 (executor driver) (1/1) | |
21/12/01 01:22:54 INFO TaskSchedulerImpl: Removed TaskSet 1218.0, whose tasks have all completed, from pool | |
21/12/01 01:22:54 INFO DAGScheduler: ShuffleMapStage 1218 (mapToPair at BaseSparkCommitActionExecutor.java:225) finished in 0.053 s | |
21/12/01 01:22:54 INFO DAGScheduler: looking for newly runnable stages | |
21/12/01 01:22:54 INFO DAGScheduler: running: Set() | |
21/12/01 01:22:54 INFO DAGScheduler: waiting: Set(ResultStage 1219) | |
21/12/01 01:22:54 INFO DAGScheduler: failed: Set() | |
21/12/01 01:22:54 INFO DAGScheduler: Submitting ResultStage 1219 (MapPartitionsRDD[2779] at map at BaseSparkCommitActionExecutor.java:274), which has no missing parents | |
21/12/01 01:22:54 INFO MemoryStore: Block broadcast_1130 stored as values in memory (estimated size 425.1 KiB, free 363.8 MiB) | |
21/12/01 01:22:54 INFO MemoryStore: Block broadcast_1130_piece0 stored as bytes in memory (estimated size 150.3 KiB, free 363.7 MiB) | |
21/12/01 01:22:54 INFO BlockManagerInfo: Added broadcast_1130_piece0 in memory on 192.168.1.48:56496 (size: 150.3 KiB, free: 365.7 MiB) | |
21/12/01 01:22:54 INFO SparkContext: Created broadcast 1130 from broadcast at DAGScheduler.scala:1427 | |
21/12/01 01:22:54 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1219 (MapPartitionsRDD[2779] at map at BaseSparkCommitActionExecutor.java:274) (first 15 tasks are for partitions Vector(0)) | |
21/12/01 01:22:54 INFO TaskSchedulerImpl: Adding task set 1219.0 with 1 tasks resource profile 0 | |
21/12/01 01:22:54 INFO TaskSetManager: Starting task 0.0 in stage 1219.0 (TID 2268) (192.168.1.48, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map() | |
21/12/01 01:22:54 INFO Executor: Running task 0.0 in stage 1219.0 (TID 2268) | |
21/12/01 01:22:54 INFO ShuffleBlockFetcherIterator: Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks | |
21/12/01 01:22:54 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms | |
21/12/01 01:22:54 INFO AbstractSparkDeltaCommitActionExecutor: Merging updates for commit 20211201011347895 for file files-0000 | |
21/12/01 01:22:54 INFO FileSystemViewManager: Creating View Manager with storage type :MEMORY | |
21/12/01 01:22:54 INFO FileSystemViewManager: Creating in-memory based Table View | |
21/12/01 01:22:54 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata | |
21/12/01 01:22:54 INFO AbstractTableFileSystemView: Took 0 ms to read 0 instants, 0 replaced file groups | |
21/12/01 01:22:54 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201012216421 | |
21/12/01 01:22:54 INFO ClusteringUtils: Found 0 files in pending clustering operations | |
21/12/01 01:22:54 INFO AbstractTableFileSystemView: Building file system view for partition (files) | |
21/12/01 01:22:54 INFO AbstractTableFileSystemView: addFilesToView: NumFiles=15, NumFileGroups=1, FileGroupsCreationTime=3, StoreTimeTaken=0 | |
21/12/01 01:22:54 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__deltacommit__COMPLETED]} | |
21/12/01 01:22:54 INFO HoodieLogFormat$WriterBuilder: Building HoodieLogFormat Writer | |
21/12/01 01:22:54 INFO HoodieLogFormat$WriterBuilder: Computing the next log version for commits in s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived | |
21/12/01 01:22:54 INFO HoodieLogFormat$WriterBuilder: Computed the next log version for commits in s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived as 3 with write-token 1-0-1 | |
21/12/01 01:22:54 INFO HoodieLogFormat$WriterBuilder: HoodieLogFile on path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived/.commits_.archive.3_1-0-1 | |
21/12/01 01:22:54 INFO HoodieTimelineArchiveLog: Archiving instants [[==>20211201000952501001__compaction__REQUESTED], [==>20211201000952501001__compaction__INFLIGHT], [20211201000952501001__commit__COMPLETED], [==>20211201001222696__deltacommit__REQUESTED], [==>20211201001222696__deltacommit__INFLIGHT], [20211201001222696__deltacommit__COMPLETED], [==>20211201001327610__deltacommit__REQUESTED], [==>20211201001327610__deltacommit__INFLIGHT], [20211201001327610__deltacommit__COMPLETED], [==>20211201001615832__deltacommit__REQUESTED], [==>20211201001615832__deltacommit__INFLIGHT], [20211201001615832__deltacommit__COMPLETED], [==>20211201001916822__deltacommit__REQUESTED], [==>20211201001916822__deltacommit__INFLIGHT], [20211201001916822__deltacommit__COMPLETED], [==>20211201002149590__deltacommit__REQUESTED], [==>20211201002149590__deltacommit__INFLIGHT], [20211201002149590__deltacommit__COMPLETED], [==>20211201002228421__deltacommit__REQUESTED], [==>20211201002228421__deltacommit__INFLIGHT], [20211201002228421__deltacommit__COMPLETED], [==>20211201002458660__deltacommit__REQUESTED], [==>20211201002458660__deltacommit__INFLIGHT], [20211201002458660__deltacommit__COMPLETED], [==>20211201002536353__deltacommit__REQUESTED], [==>20211201002536353__deltacommit__INFLIGHT], [20211201002536353__deltacommit__COMPLETED], [==>20211201002953399__deltacommit__REQUESTED], [==>20211201002953399__deltacommit__INFLIGHT], [20211201002953399__deltacommit__COMPLETED], [==>20211201003049103__deltacommit__REQUESTED], [==>20211201003049103__deltacommit__INFLIGHT], [20211201003049103__deltacommit__COMPLETED]] | |
21/12/01 01:22:54 INFO HoodieTimelineArchiveLog: Wrapper schema {"type":"record","name":"HoodieArchivedMetaEntry","namespace":"org.apache.hudi.avro.model","fields":[{"name":"hoodieCommitMetadata","type":["null",{"type":"record","name":"HoodieCommitMetadata","fields":[{"name":"partitionToWriteStats","type":["null",{"type":"map","values":{"type":"array","items":{"type":"record","name":"HoodieWriteStat","fields":[{"name":"fileId","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"path","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"prevCommit","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"numWrites","type":["null","long"],"default":null},{"name":"numDeletes","type":["null","long"],"default":null},{"name":"numUpdateWrites","type":["null","long"],"default":null},{"name":"totalWriteBytes","type":["null","long"],"default":null},{"name":"totalWriteErrors","type":["null","long"],"default":null},{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"totalLogRecords","type":["null","long"],"default":null},{"name":"totalLogFiles","type":["null","long"],"default":null},{"name":"totalUpdatedRecordsCompacted","type":["null","long"],"default":null},{"name":"numInserts","type":["null","long"],"default":null},{"name":"totalLogBlocks","type":["null","long"],"default":null},{"name":"totalCorruptLogBlock","type":["null","long"],"default":null},{"name":"totalRollbackBlocks","type":["null","long"],"default":null},{"name":"fileSizeInBytes","type":["null","long"],"default":null}]}},"avro.java.string":"String"}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String","default":null}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"operationType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null}]}],"default":null},{"name":"hoodieCleanMetadata","type":["null",{"type":"record","name":"HoodieCleanMetadata","fields":[{"name":"startCleanTime","type":{"type":"string","avro.java.string":"String"}},{"name":"timeTakenInMillis","type":"long"},{"name":"totalFilesDeleted","type":"int"},{"name":"earliestCommitToRetain","type":{"type":"string","avro.java.string":"String"}},{"name":"partitionMetadata","type":{"type":"map","values":{"type":"record","name":"HoodieCleanPartitionMetadata","fields":[{"name":"partitionPath","type":{"type":"string","avro.java.string":"String"}},{"name":"policy","type":{"type":"string","avro.java.string":"String"}},{"name":"deletePathPatterns","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"successDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"failedDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}}]},"avro.java.string":"String"}},{"name":"version","type":["int","null"],"default":1},{"name":"bootstrapPartitionMetadata","type":["null",{"type":"map","values":"HoodieCleanPartitionMetadata","avro.java.string":"String","default":null}],"default":null}]}],"default":null},{"name":"hoodieCompactionMetadata","type":["null",{"type":"record","name":"HoodieCompactionMetadata","fields":[{"name":"partitionToCompactionWriteStats","type":["null",{"type":"map","values":{"type":"array","items":{"type":"record","name":"HoodieCompactionWriteStat","fields":[{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"totalLogRecords","type":["null","long"],"default":null},{"name":"totalLogFiles","type":["null","long"],"default":null},{"name":"totalUpdatedRecordsCompacted","type":["null","long"],"default":null},{"name":"hoodieWriteStat","type":["null","HoodieWriteStat"],"default":null}]}},"avro.java.string":"String"}]}]}],"default":null},{"name":"hoodieRollbackMetadata","type":["null",{"type":"record","name":"HoodieRollbackMetadata","fields":[{"name":"startRollbackTime","type":{"type":"string","avro.java.string":"String"}},{"name":"timeTakenInMillis","type":"long"},{"name":"totalFilesDeleted","type":"int"},{"name":"commitsRollback","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"partitionMetadata","type":{"type":"map","values":{"type":"record","name":"HoodieRollbackPartitionMetadata","fields":[{"name":"partitionPath","type":{"type":"string","avro.java.string":"String"}},{"name":"successDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"failedDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"rollbackLogFiles","type":["null",{"type":"map","values":"long","avro.java.string":"String"}],"default":null},{"name":"writtenLogFiles","type":["null",{"type":"map","values":"long","avro.java.string":"String"}],"default":null}]},"avro.java.string":"String"}},{"name":"version","type":["int","null"],"default":1},{"name":"instantsRollback","type":{"type":"array","items":{"type":"record","name":"HoodieInstantInfo","fields":[{"name":"commitTime","type":{"type":"string","avro.java.string":"String"}},{"name":"action","type":{"type":"string","avro.java.string":"String"}}]},"default":[]},"default":[]}]}],"default":null},{"name":"hoodieSavePointMetadata","type":["null",{"type":"record","name":"HoodieSavepointMetadata","fields":[{"name":"savepointedBy","type":{"type":"string","avro.java.string":"String"}},{"name":"savepointedAt","type":"long"},{"name":"comments","type":{"type":"string","avro.java.string":"String"}},{"name":"partitionMetadata","type":{"type":"map","values":{"type":"record","name":"HoodieSavepointPartitionMetadata","fields":[{"name":"partitionPath","type":{"type":"string","avro.java.string":"String"}},{"name":"savepointDataFile","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}}]},"avro.java.string":"String"}},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"commitTime","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"actionType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"hoodieCompactionPlan","type":["null",{"type":"record","name":"HoodieCompactionPlan","fields":[{"name":"operations","type":["null",{"type":"array","items":{"type":"record","name":"HoodieCompactionOperation","fields":[{"name":"baseInstantTime","type":["null",{"type":"string","avro.java.string":"String"}]},{"name":"deltaFilePaths","type":["null",{"type":"array","items":{"type":"string","avro.java.string":"String"}}],"default":null},{"name":"dataFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"fileId","type":["null",{"type":"string","avro.java.string":"String"}]},{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"metrics","type":["null",{"type":"map","values":"double","avro.java.string":"String"}],"default":null},{"name":"bootstrapFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null}]}}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"hoodieCleanerPlan","type":["null",{"type":"record","name":"HoodieCleanerPlan","fields":[{"name":"earliestInstantToRetain","type":["null",{"type":"record","name":"HoodieActionInstant","fields":[{"name":"timestamp","type":{"type":"string","avro.java.string":"String"}},{"name":"action","type":{"type":"string","avro.java.string":"String"}},{"name":"state","type":{"type":"string","avro.java.string":"String"}}]}],"default":null},{"name":"policy","type":{"type":"string","avro.java.string":"String"}},{"name":"filesToBeDeletedPerPartition","type":["null",{"type":"map","values":{"type":"array","items":{"type":"string","avro.java.string":"String"}},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"filePathsToBeDeletedPerPartition","type":["null",{"type":"map","values":{"type":"array","items":{"type":"record","name":"HoodieCleanFileInfo","fields":[{"name":"filePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"isBootstrapBaseFile","type":["null","boolean"],"default":null}]}},"avro.java.string":"String"}],"doc":"This field replaces the field filesToBeDeletedPerPartition","default":null}]}],"default":null},{"name":"actionState","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"hoodieReplaceCommitMetadata","type":["null",{"type":"record","name":"HoodieReplaceCommitMetadata","fields":[{"name":"partitionToWriteStats","type":["null",{"type":"map","values":{"type":"array","items":"HoodieWriteStat"},"avro.java.string":"String"}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"operationType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"partitionToReplaceFileIds","type":["null",{"type":"map","values":{"type":"array","items":{"type":"string","avro.java.string":"String"}},"avro.java.string":"String"}],"default":null}]}],"default":null},{"name":"hoodieRequestedReplaceMetadata","type":["null",{"type":"record","name":"HoodieRequestedReplaceMetadata","fields":[{"name":"operationType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"clusteringPlan","type":["null",{"type":"record","name":"HoodieClusteringPlan","fields":[{"name":"inputGroups","type":["null",{"type":"array","items":{"type":"record","name":"HoodieClusteringGroup","fields":[{"name":"slices","type":["null",{"type":"array","items":{"type":"record","name":"HoodieSliceInfo","fields":[{"name":"dataFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"deltaFilePaths","type":["null",{"type":"array","items":{"type":"string","avro.java.string":"String"}}],"default":null},{"name":"fileId","type":["null",{"type":"string","avro.java.string":"String"}]},{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"bootstrapFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}}],"default":null},{"name":"metrics","type":["null",{"type":"map","values":"double","avro.java.string":"String"}],"default":null},{"name":"numOutputFileGroups","type":["int","null"],"default":1},{"name":"version","type":["int","null"],"default":1}]}}],"default":null},{"name":"strategy","type":["null",{"type":"record","name":"HoodieClusteringStrategy","fields":[{"name":"strategyClassName","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"strategyParams","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"preserveHoodieMetadata","type":["null","boolean"],"default":null}]}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"HoodieInflightReplaceMetadata","type":["null","HoodieCommitMetadata"],"default":null}]} | |
21/12/01 01:22:55 INFO DirectWriteMarkers: Creating Marker Path=s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201011347895/files/files-0000_0-1219-2268_20211201004828250001.hfile.marker.APPEND
21/12/01 01:22:56 INFO DirectWriteMarkers: [direct] Created marker file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201011347895/files/files-0000_0-1219-2268_20211201004828250001.hfile.marker.APPEND in 2003 ms
21/12/01 01:22:56 INFO HoodieLogFormat$WriterBuilder: Building HoodieLogFormat Writer
21/12/01 01:22:56 INFO HoodieLogFormat$WriterBuilder: HoodieLogFile on path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.13_0-1210-2259
21/12/01 01:22:57 INFO HoodieLogFormatWriter: Append not supported.. Rolling over to HoodieLogFile{pathStr='s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/files/.files-0000_20211201004828250001.log.14_0-1219-2268', fileLen=0}
21/12/01 01:22:57 INFO CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=406512, freeSize=394696944, maxSize=395103456, heapSize=406512, minSize=375348288, minFactor=0.95, multiSize=187674144, multiFactor=0.5, singleSize=93837072, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
21/12/01 01:22:57 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:22:57 INFO CodecPool: Got brand-new compressor [.gz]
21/12/01 01:22:57 INFO HoodieAppendHandle: AppendHandle for partitionPath files filePath files/.files-0000_20211201004828250001.log.14_0-1219-2268, took 3203 ms.
21/12/01 01:22:58 INFO MemoryStore: Block rdd_2778_0 stored as values in memory (estimated size 1116.0 B, free 363.7 MiB)
21/12/01 01:22:58 INFO BlockManagerInfo: Added rdd_2778_0 in memory on 192.168.1.48:56496 (size: 1116.0 B, free: 365.7 MiB)
21/12/01 01:22:58 INFO Executor: Finished task 0.0 in stage 1219.0 (TID 2268). 2267 bytes result sent to driver
21/12/01 01:22:58 INFO TaskSetManager: Finished task 0.0 in stage 1219.0 (TID 2268) in 4212 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:58 INFO TaskSchedulerImpl: Removed TaskSet 1219.0, whose tasks have all completed, from pool
21/12/01 01:22:58 INFO DAGScheduler: ResultStage 1219 (collect at BaseSparkCommitActionExecutor.java:274) finished in 4.266 s
21/12/01 01:22:58 INFO DAGScheduler: Job 825 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:58 INFO TaskSchedulerImpl: Killing all running tasks in stage 1219: Stage finished
21/12/01 01:22:58 INFO DAGScheduler: Job 825 finished: collect at BaseSparkCommitActionExecutor.java:274, took 4.319068 s
21/12/01 01:22:58 INFO BaseSparkCommitActionExecutor: Committing 20211201011347895, action Type deltacommit
21/12/01 01:22:58 INFO SparkContext: Starting job: collect at HoodieSparkEngineContext.java:134
21/12/01 01:22:58 INFO DAGScheduler: Got job 826 (collect at HoodieSparkEngineContext.java:134) with 1 output partitions
21/12/01 01:22:58 INFO DAGScheduler: Final stage: ResultStage 1220 (collect at HoodieSparkEngineContext.java:134)
21/12/01 01:22:58 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:22:58 INFO DAGScheduler: Missing parents: List()
21/12/01 01:22:58 INFO DAGScheduler: Submitting ResultStage 1220 (MapPartitionsRDD[2781] at flatMap at HoodieSparkEngineContext.java:134), which has no missing parents
21/12/01 01:22:58 INFO MemoryStore: Block broadcast_1131 stored as values in memory (estimated size 99.4 KiB, free 363.6 MiB)
21/12/01 01:22:58 INFO MemoryStore: Block broadcast_1131_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 363.5 MiB)
21/12/01 01:22:58 INFO BlockManagerInfo: Added broadcast_1131_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.6 MiB)
21/12/01 01:22:58 INFO SparkContext: Created broadcast 1131 from broadcast at DAGScheduler.scala:1427
21/12/01 01:22:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1220 (MapPartitionsRDD[2781] at flatMap at HoodieSparkEngineContext.java:134) (first 15 tasks are for partitions Vector(0))
21/12/01 01:22:58 INFO TaskSchedulerImpl: Adding task set 1220.0 with 1 tasks resource profile 0
21/12/01 01:22:58 INFO TaskSetManager: Starting task 0.0 in stage 1220.0 (TID 2269) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:22:58 INFO Executor: Running task 0.0 in stage 1220.0 (TID 2269)
21/12/01 01:22:58 INFO Executor: Finished task 0.0 in stage 1220.0 (TID 2269). 796 bytes result sent to driver
21/12/01 01:22:58 INFO TaskSetManager: Finished task 0.0 in stage 1220.0 (TID 2269) in 114 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:22:58 INFO TaskSchedulerImpl: Removed TaskSet 1220.0, whose tasks have all completed, from pool
21/12/01 01:22:58 INFO DAGScheduler: ResultStage 1220 (collect at HoodieSparkEngineContext.java:134) finished in 0.131 s
21/12/01 01:22:58 INFO DAGScheduler: Job 826 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:22:58 INFO TaskSchedulerImpl: Killing all running tasks in stage 1220: Stage finished
21/12/01 01:22:58 INFO DAGScheduler: Job 826 finished: collect at HoodieSparkEngineContext.java:134, took 0.131558 s
21/12/01 01:22:58 INFO CommitUtils: Creating metadata for UPSERT_PREPPED numWriteStats:1numReplaceFileIds:0
21/12/01 01:22:58 INFO HoodieActiveTimeline: Marking instant complete [==>20211201011347895__deltacommit__INFLIGHT]
21/12/01 01:22:58 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011347895.deltacommit.inflight
21/12/01 01:22:59 INFO HoodieLogFormatWriter: Append not supported.. Rolling over to HoodieLogFile{pathStr='s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived/.commits_.archive.4_1-0-1', fileLen=0}
21/12/01 01:22:59 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201011347895.deltacommit
21/12/01 01:22:59 INFO HoodieActiveTimeline: Completed [==>20211201011347895__deltacommit__INFLIGHT]
21/12/01 01:22:59 INFO BaseSparkCommitActionExecutor: Committed 20211201011347895
21/12/01 01:23:00 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:23:00 INFO DAGScheduler: Got job 827 (collectAsMap at HoodieSparkEngineContext.java:148) with 1 output partitions
21/12/01 01:23:00 INFO DAGScheduler: Final stage: ResultStage 1221 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:23:00 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:23:00 INFO DAGScheduler: Missing parents: List()
21/12/01 01:23:00 INFO DAGScheduler: Submitting ResultStage 1221 (MapPartitionsRDD[2783] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:23:00 INFO MemoryStore: Block broadcast_1132 stored as values in memory (estimated size 99.6 KiB, free 363.4 MiB)
21/12/01 01:23:00 INFO MemoryStore: Block broadcast_1132_piece0 stored as bytes in memory (estimated size 35.3 KiB, free 363.4 MiB)
21/12/01 01:23:00 INFO BlockManagerInfo: Added broadcast_1132_piece0 in memory on 192.168.1.48:56496 (size: 35.3 KiB, free: 365.6 MiB)
21/12/01 01:23:00 INFO SparkContext: Created broadcast 1132 from broadcast at DAGScheduler.scala:1427
21/12/01 01:23:00 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1221 (MapPartitionsRDD[2783] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0))
21/12/01 01:23:00 INFO TaskSchedulerImpl: Adding task set 1221.0 with 1 tasks resource profile 0
21/12/01 01:23:00 INFO TaskSetManager: Starting task 0.0 in stage 1221.0 (TID 2270) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:23:00 INFO Executor: Running task 0.0 in stage 1221.0 (TID 2270)
21/12/01 01:23:01 INFO Executor: Finished task 0.0 in stage 1221.0 (TID 2270). 898 bytes result sent to driver
21/12/01 01:23:01 INFO TaskSetManager: Finished task 0.0 in stage 1221.0 (TID 2270) in 1176 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:23:01 INFO TaskSchedulerImpl: Removed TaskSet 1221.0, whose tasks have all completed, from pool
21/12/01 01:23:01 INFO DAGScheduler: ResultStage 1221 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 1.193 s
21/12/01 01:23:01 INFO DAGScheduler: Job 827 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:23:01 INFO TaskSchedulerImpl: Killing all running tasks in stage 1221: Stage finished
21/12/01 01:23:01 INFO DAGScheduler: Job 827 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 1.194580 s
21/12/01 01:23:02 INFO FSUtils: Removed directory at s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.temp/20211201011347895
21/12/01 01:23:03 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__deltacommit__COMPLETED]}
21/12/01 01:23:03 INFO HoodieLogFormat$WriterBuilder: Building HoodieLogFormat Writer
21/12/01 01:23:03 INFO HoodieLogFormat$WriterBuilder: Computing the next log version for commits in s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived
21/12/01 01:23:03 INFO HoodieLogFormat$WriterBuilder: Computed the next log version for commits in s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived as 3 with write-token 1-0-1
21/12/01 01:23:03 INFO HoodieLogFormat$WriterBuilder: HoodieLogFile on path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived/.commits_.archive.3_1-0-1
21/12/01 01:23:03 INFO HoodieTimelineArchiveLog: Archiving instants [[==>20211201000952501001__compaction__REQUESTED], [==>20211201000952501001__compaction__INFLIGHT], [20211201000952501001__commit__COMPLETED], [==>20211201001222696__deltacommit__REQUESTED], [==>20211201001222696__deltacommit__INFLIGHT], [20211201001222696__deltacommit__COMPLETED], [==>20211201001327610__deltacommit__REQUESTED], [==>20211201001327610__deltacommit__INFLIGHT], [20211201001327610__deltacommit__COMPLETED], [==>20211201001615832__deltacommit__REQUESTED], [==>20211201001615832__deltacommit__INFLIGHT], [20211201001615832__deltacommit__COMPLETED], [==>20211201001916822__deltacommit__REQUESTED], [==>20211201001916822__deltacommit__INFLIGHT], [20211201001916822__deltacommit__COMPLETED], [==>20211201002149590__deltacommit__REQUESTED], [==>20211201002149590__deltacommit__INFLIGHT], [20211201002149590__deltacommit__COMPLETED], [==>20211201002228421__deltacommit__REQUESTED], [==>20211201002228421__deltacommit__INFLIGHT], [20211201002228421__deltacommit__COMPLETED], [==>20211201002458660__deltacommit__REQUESTED], [==>20211201002458660__deltacommit__INFLIGHT], [20211201002458660__deltacommit__COMPLETED], [==>20211201002536353__deltacommit__REQUESTED], [==>20211201002536353__deltacommit__INFLIGHT], [20211201002536353__deltacommit__COMPLETED], [==>20211201002953399__deltacommit__REQUESTED], [==>20211201002953399__deltacommit__INFLIGHT], [20211201002953399__deltacommit__COMPLETED], [==>20211201003049103__deltacommit__REQUESTED], [==>20211201003049103__deltacommit__INFLIGHT], [20211201003049103__deltacommit__COMPLETED]]
21/12/01 01:23:03 INFO HoodieTimelineArchiveLog: Wrapper schema {"type":"record","name":"HoodieArchivedMetaEntry","namespace":"org.apache.hudi.avro.model","fields":[{"name":"hoodieCommitMetadata","type":["null",{"type":"record","name":"HoodieCommitMetadata","fields":[{"name":"partitionToWriteStats","type":["null",{"type":"map","values":{"type":"array","items":{"type":"record","name":"HoodieWriteStat","fields":[{"name":"fileId","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"path","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"prevCommit","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"numWrites","type":["null","long"],"default":null},{"name":"numDeletes","type":["null","long"],"default":null},{"name":"numUpdateWrites","type":["null","long"],"default":null},{"name":"totalWriteBytes","type":["null","long"],"default":null},{"name":"totalWriteErrors","type":["null","long"],"default":null},{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"totalLogRecords","type":["null","long"],"default":null},{"name":"totalLogFiles","type":["null","long"],"default":null},{"name":"totalUpdatedRecordsCompacted","type":["null","long"],"default":null},{"name":"numInserts","type":["null","long"],"default":null},{"name":"totalLogBlocks","type":["null","long"],"default":null},{"name":"totalCorruptLogBlock","type":["null","long"],"default":null},{"name":"totalRollbackBlocks","type":["null","long"],"default":null},{"name":"fileSizeInBytes","type":["null","long"],"default":null}]}},"avro.java.string":"String"}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String","default":null}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"operationType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null}]}],"default":null},{"name":"hoodieCleanMetadata","type":["null",{"type":"record","name":"HoodieCleanMetadata","fields":[{"name":"startCleanTime","type":{"type":"string","avro.java.string":"String"}},{"name":"timeTakenInMillis","type":"long"},{"name":"totalFilesDeleted","type":"int"},{"name":"earliestCommitToRetain","type":{"type":"string","avro.java.string":"String"}},{"name":"partitionMetadata","type":{"type":"map","values":{"type":"record","name":"HoodieCleanPartitionMetadata","fields":[{"name":"partitionPath","type":{"type":"string","avro.java.string":"String"}},{"name":"policy","type":{"type":"string","avro.java.string":"String"}},{"name":"deletePathPatterns","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"successDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"failedDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}}]},"avro.java.string":"String"}},{"name":"version","type":["int","null"],"default":1},{"name":"bootstrapPartitionMetadata","type":["null",{"type":"map","values":"HoodieCleanPartitionMetadata","avro.java.string":"String","default":null}],"default":null}]}],"default":null},{"name":"hoodieCompactionMetadata","type":["null",{"type":"record","name":"HoodieCompactionMetadata","fields":[{"name":"partitionToCompactionWriteStats","type":["null",{"type":"map","values":{"type":"array","items":{"type":"record","name":"HoodieCompactionWriteStat","fields":[{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"totalLogRecords","type":["null","long"],"default":null},{"name":"totalLogFiles","type":["null","long"],"default":null},{"name":"totalUpdatedRecordsCompacted","type":["null","long"],"default":null},{"name":"hoodieWriteStat","type":["null","HoodieWriteStat"],"default":null}]}},"avro.java.string":"String"}]}]}],"default":null},{"name":"hoodieRollbackMetadata","type":["null",{"type":"record","name":"HoodieRollbackMetadata","fields":[{"name":"startRollbackTime","type":{"type":"string","avro.java.string":"String"}},{"name":"timeTakenInMillis","type":"long"},{"name":"totalFilesDeleted","type":"int"},{"name":"commitsRollback","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"partitionMetadata","type":{"type":"map","values":{"type":"record","name":"HoodieRollbackPartitionMetadata","fields":[{"name":"partitionPath","type":{"type":"string","avro.java.string":"String"}},{"name":"successDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"failedDeleteFiles","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}},{"name":"rollbackLogFiles","type":["null",{"type":"map","values":"long","avro.java.string":"String"}],"default":null},{"name":"writtenLogFiles","type":["null",{"type":"map","values":"long","avro.java.string":"String"}],"default":null}]},"avro.java.string":"String"}},{"name":"version","type":["int","null"],"default":1},{"name":"instantsRollback","type":{"type":"array","items":{"type":"record","name":"HoodieInstantInfo","fields":[{"name":"commitTime","type":{"type":"string","avro.java.string":"String"}},{"name":"action","type":{"type":"string","avro.java.string":"String"}}]},"default":[]},"default":[]}]}],"default":null},{"name":"hoodieSavePointMetadata","type":["null",{"type":"record","name":"HoodieSavepointMetadata","fields":[{"name":"savepointedBy","type":{"type":"string","avro.java.string":"String"}},{"name":"savepointedAt","type":"long"},{"name":"comments","type":{"type":"string","avro.java.string":"String"}},{"name":"partitionMetadata","type":{"type":"map","values":{"type":"record","name":"HoodieSavepointPartitionMetadata","fields":[{"name":"partitionPath","type":{"type":"string","avro.java.string":"String"}},{"name":"savepointDataFile","type":{"type":"array","items":{"type":"string","avro.java.string":"String"}}}]},"avro.java.string":"String"}},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"commitTime","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"actionType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"hoodieCompactionPlan","type":["null",{"type":"record","name":"HoodieCompactionPlan","fields":[{"name":"operations","type":["null",{"type":"array","items":{"type":"record","name":"HoodieCompactionOperation","fields":[{"name":"baseInstantTime","type":["null",{"type":"string","avro.java.string":"String"}]},{"name":"deltaFilePaths","type":["null",{"type":"array","items":{"type":"string","avro.java.string":"String"}}],"default":null},{"name":"dataFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"fileId","type":["null",{"type":"string","avro.java.string":"String"}]},{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"metrics","type":["null",{"type":"map","values":"double","avro.java.string":"String"}],"default":null},{"name":"bootstrapFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null}]}}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"hoodieCleanerPlan","type":["null",{"type":"record","name":"HoodieCleanerPlan","fields":[{"name":"earliestInstantToRetain","type":["null",{"type":"record","name":"HoodieActionInstant","fields":[{"name":"timestamp","type":{"type":"string","avro.java.string":"String"}},{"name":"action","type":{"type":"string","avro.java.string":"String"}},{"name":"state","type":{"type":"string","avro.java.string":"String"}}]}],"default":null},{"name":"policy","type":{"type":"string","avro.java.string":"String"}},{"name":"filesToBeDeletedPerPartition","type":["null",{"type":"map","values":{"type":"array","items":{"type":"string","avro.java.string":"String"}},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"filePathsToBeDeletedPerPartition","type":["null",{"type":"map","values":{"type":"array","items":{"type":"record","name":"HoodieCleanFileInfo","fields":[{"name":"filePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"isBootstrapBaseFile","type":["null","boolean"],"default":null}]}},"avro.java.string":"String"}],"doc":"This field replaces the field filesToBeDeletedPerPartition","default":null}]}],"default":null},{"name":"actionState","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"hoodieReplaceCommitMetadata","type":["null",{"type":"record","name":"HoodieReplaceCommitMetadata","fields":[{"name":"partitionToWriteStats","type":["null",{"type":"map","values":{"type":"array","items":"HoodieWriteStat"},"avro.java.string":"String"}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"operationType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"partitionToReplaceFileIds","type":["null",{"type":"map","values":{"type":"array","items":{"type":"string","avro.java.string":"String"}},"avro.java.string":"String"}],"default":null}]}],"default":null},{"name":"hoodieRequestedReplaceMetadata","type":["null",{"type":"record","name":"HoodieRequestedReplaceMetadata","fields":[{"name":"operationType","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"clusteringPlan","type":["null",{"type":"record","name":"HoodieClusteringPlan","fields":[{"name":"inputGroups","type":["null",{"type":"array","items":{"type":"record","name":"HoodieClusteringGroup","fields":[{"name":"slices","type":["null",{"type":"array","items":{"type":"record","name":"HoodieSliceInfo","fields":[{"name":"dataFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"deltaFilePaths","type":["null",{"type":"array","items":{"type":"string","avro.java.string":"String"}}],"default":null},{"name":"fileId","type":["null",{"type":"string","avro.java.string":"String"}]},{"name":"partitionPath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"bootstrapFilePath","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}}],"default":null},{"name":"metrics","type":["null",{"type":"map","values":"double","avro.java.string":"String"}],"default":null},{"name":"numOutputFileGroups","type":["int","null"],"default":1},{"name":"version","type":["int","null"],"default":1}]}}],"default":null},{"name":"strategy","type":["null",{"type":"record","name":"HoodieClusteringStrategy","fields":[{"name":"strategyClassName","type":["null",{"type":"string","avro.java.string":"String"}],"default":null},{"name":"strategyParams","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1},{"name":"preserveHoodieMetadata","type":["null","boolean"],"default":null}]}],"default":null},{"name":"extraMetadata","type":["null",{"type":"map","values":{"type":"string","avro.java.string":"String"},"avro.java.string":"String"}],"default":null},{"name":"version","type":["int","null"],"default":1}]}],"default":null},{"name":"HoodieInflightReplaceMetadata","type":["null","HoodieCommitMetadata"],"default":null}]}
21/12/01 01:23:07 INFO HoodieLogFormatWriter: Append not supported.. Rolling over to HoodieLogFile{pathStr='s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/archived/.commits_.archive.4_1-0-1', fileLen=0}
21/12/01 01:23:08 INFO HoodieTimelineArchiveLog: Deleting archived instants [[==>20211201000952501001__compaction__REQUESTED], [==>20211201000952501001__compaction__INFLIGHT], [20211201000952501001__commit__COMPLETED], [==>20211201001222696__deltacommit__REQUESTED], [==>20211201001222696__deltacommit__INFLIGHT], [20211201001222696__deltacommit__COMPLETED], [==>20211201001327610__deltacommit__REQUESTED], [==>20211201001327610__deltacommit__INFLIGHT], [20211201001327610__deltacommit__COMPLETED], [==>20211201001615832__deltacommit__REQUESTED], [==>20211201001615832__deltacommit__INFLIGHT], [20211201001615832__deltacommit__COMPLETED], [==>20211201001916822__deltacommit__REQUESTED], [==>20211201001916822__deltacommit__INFLIGHT], [20211201001916822__deltacommit__COMPLETED], [==>20211201002149590__deltacommit__REQUESTED], [==>20211201002149590__deltacommit__INFLIGHT], [20211201002149590__deltacommit__COMPLETED], [==>20211201002228421__deltacommit__REQUESTED], [==>20211201002228421__deltacommit__INFLIGHT], [20211201002228421__deltacommit__COMPLETED], [==>20211201002458660__deltacommit__REQUESTED], [==>20211201002458660__deltacommit__INFLIGHT], [20211201002458660__deltacommit__COMPLETED], [==>20211201002536353__deltacommit__REQUESTED], [==>20211201002536353__deltacommit__INFLIGHT], [20211201002536353__deltacommit__COMPLETED], [==>20211201002953399__deltacommit__REQUESTED], [==>20211201002953399__deltacommit__INFLIGHT], [20211201002953399__deltacommit__COMPLETED], [==>20211201003049103__deltacommit__REQUESTED], [==>20211201003049103__deltacommit__INFLIGHT], [20211201003049103__deltacommit__COMPLETED]]
21/12/01 01:23:08 INFO HoodieTimelineArchiveLog: Deleting instants [[==>20211201000952501001__compaction__REQUESTED], [==>20211201000952501001__compaction__INFLIGHT], [20211201000952501001__commit__COMPLETED], [==>20211201001222696__deltacommit__REQUESTED], [==>20211201001222696__deltacommit__INFLIGHT], [20211201001222696__deltacommit__COMPLETED], [==>20211201001327610__deltacommit__REQUESTED], [==>20211201001327610__deltacommit__INFLIGHT], [20211201001327610__deltacommit__COMPLETED], [==>20211201001615832__deltacommit__REQUESTED], [==>20211201001615832__deltacommit__INFLIGHT], [20211201001615832__deltacommit__COMPLETED], [==>20211201001916822__deltacommit__REQUESTED], [==>20211201001916822__deltacommit__INFLIGHT], [20211201001916822__deltacommit__COMPLETED], [==>20211201002149590__deltacommit__REQUESTED], [==>20211201002149590__deltacommit__INFLIGHT], [20211201002149590__deltacommit__COMPLETED], [==>20211201002228421__deltacommit__REQUESTED], [==>20211201002228421__deltacommit__INFLIGHT], [20211201002228421__deltacommit__COMPLETED], [==>20211201002458660__deltacommit__REQUESTED], [==>20211201002458660__deltacommit__INFLIGHT], [20211201002458660__deltacommit__COMPLETED], [==>20211201002536353__deltacommit__REQUESTED], [==>20211201002536353__deltacommit__INFLIGHT], [20211201002536353__deltacommit__COMPLETED], [==>20211201002953399__deltacommit__REQUESTED], [==>20211201002953399__deltacommit__INFLIGHT], [20211201002953399__deltacommit__COMPLETED], [==>20211201003049103__deltacommit__REQUESTED], [==>20211201003049103__deltacommit__INFLIGHT], [20211201003049103__deltacommit__COMPLETED]]
21/12/01 01:23:08 INFO SparkContext: Starting job: collectAsMap at HoodieSparkEngineContext.java:148
21/12/01 01:23:08 INFO DAGScheduler: Got job 828 (collectAsMap at HoodieSparkEngineContext.java:148) with 33 output partitions
21/12/01 01:23:08 INFO DAGScheduler: Final stage: ResultStage 1222 (collectAsMap at HoodieSparkEngineContext.java:148)
21/12/01 01:23:08 INFO DAGScheduler: Parents of final stage: List()
21/12/01 01:23:08 INFO DAGScheduler: Missing parents: List()
21/12/01 01:23:08 INFO DAGScheduler: Submitting ResultStage 1222 (MapPartitionsRDD[2785] at mapToPair at HoodieSparkEngineContext.java:145), which has no missing parents
21/12/01 01:23:08 INFO MemoryStore: Block broadcast_1133 stored as values in memory (estimated size 99.7 KiB, free 363.3 MiB)
21/12/01 01:23:08 INFO MemoryStore: Block broadcast_1133_piece0 stored as bytes in memory (estimated size 35.4 KiB, free 363.3 MiB)
21/12/01 01:23:08 INFO BlockManagerInfo: Added broadcast_1133_piece0 in memory on 192.168.1.48:56496 (size: 35.4 KiB, free: 365.6 MiB)
21/12/01 01:23:08 INFO SparkContext: Created broadcast 1133 from broadcast at DAGScheduler.scala:1427
21/12/01 01:23:08 INFO DAGScheduler: Submitting 33 missing tasks from ResultStage 1222 (MapPartitionsRDD[2785] at mapToPair at HoodieSparkEngineContext.java:145) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
21/12/01 01:23:08 INFO TaskSchedulerImpl: Adding task set 1222.0 with 33 tasks resource profile 0
21/12/01 01:23:08 INFO TaskSetManager: Starting task 0.0 in stage 1222.0 (TID 2271) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4440 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 1.0 in stage 1222.0 (TID 2272) (192.168.1.48, executor driver, partition 1, PROCESS_LOCAL, 4439 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 2.0 in stage 1222.0 (TID 2273) (192.168.1.48, executor driver, partition 2, PROCESS_LOCAL, 4426 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 3.0 in stage 1222.0 (TID 2274) (192.168.1.48, executor driver, partition 3, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 4.0 in stage 1222.0 (TID 2275) (192.168.1.48, executor driver, partition 4, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 5.0 in stage 1222.0 (TID 2276) (192.168.1.48, executor driver, partition 5, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 6.0 in stage 1222.0 (TID 2277) (192.168.1.48, executor driver, partition 6, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 7.0 in stage 1222.0 (TID 2278) (192.168.1.48, executor driver, partition 7, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 8.0 in stage 1222.0 (TID 2279) (192.168.1.48, executor driver, partition 8, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 9.0 in stage 1222.0 (TID 2280) (192.168.1.48, executor driver, partition 9, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 10.0 in stage 1222.0 (TID 2281) (192.168.1.48, executor driver, partition 10, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO TaskSetManager: Starting task 11.0 in stage 1222.0 (TID 2282) (192.168.1.48, executor driver, partition 11, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:23:08 INFO Executor: Running task 0.0 in stage 1222.0 (TID 2271)
21/12/01 01:23:08 INFO Executor: Running task 1.0 in stage 1222.0 (TID 2272)
21/12/01 01:23:08 INFO Executor: Running task 3.0 in stage 1222.0 (TID 2274)
21/12/01 01:23:08 INFO Executor: Running task 5.0 in stage 1222.0 (TID 2276)
21/12/01 01:23:08 INFO Executor: Running task 4.0 in stage 1222.0 (TID 2275)
21/12/01 01:23:08 INFO Executor: Running task 2.0 in stage 1222.0 (TID 2273)
21/12/01 01:23:08 INFO Executor: Running task 6.0 in stage 1222.0 (TID 2277)
21/12/01 01:23:08 INFO Executor: Running task 7.0 in stage 1222.0 (TID 2278)
21/12/01 01:23:08 INFO Executor: Running task 8.0 in stage 1222.0 (TID 2279)
21/12/01 01:23:08 INFO Executor: Running task 9.0 in stage 1222.0 (TID 2280)
21/12/01 01:23:08 INFO Executor: Running task 10.0 in stage 1222.0 (TID 2281)
21/12/01 01:23:08 INFO Executor: Running task 11.0 in stage 1222.0 (TID 2282)
21/12/01 01:23:09 INFO Executor: Finished task 2.0 in stage 1222.0 (TID 2273). 896 bytes result sent to driver
21/12/01 01:23:09 INFO TaskSetManager: Starting task 12.0 in stage 1222.0 (TID 2283) (192.168.1.48, executor driver, partition 12, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map()
21/12/01 01:23:09 INFO TaskSetManager: Finished task 2.0 in stage 1222.0 (TID 2273) in 480 ms on 192.168.1.48 (executor driver) (1/33)
21/12/01 01:23:09 INFO Executor: Running task 12.0 in stage 1222.0 (TID 2283)
21/12/01 01:23:09 INFO Executor: Finished task 5.0 in stage 1222.0 (TID 2276). 898 bytes result sent to driver
21/12/01 01:23:09 INFO TaskSetManager: Starting task 13.0 in stage 1222.0 (TID 2284) (192.168.1.48, executor driver, partition 13, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map()
21/12/01 01:23:09 INFO TaskSetManager: Finished task 5.0 in stage 1222.0 (TID 2276) in 758 ms on 192.168.1.48 (executor driver) (2/33)
21/12/01 01:23:09 INFO Executor: Running task 13.0 in stage 1222.0 (TID 2284)
21/12/01 01:23:09 INFO Executor: Finished task 7.0 in stage 1222.0 (TID 2278). 907 bytes result sent to driver
21/12/01 01:23:09 INFO TaskSetManager: Starting task 14.0 in stage 1222.0 (TID 2285) (192.168.1.48, executor driver, partition 14, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map()
21/12/01 01:23:09 INFO TaskSetManager: Finished task 7.0 in stage 1222.0 (TID 2278) in 767 ms on 192.168.1.48 (executor driver) (3/33)
21/12/01 01:23:09 INFO Executor: Running task 14.0 in stage 1222.0 (TID 2285)
21/12/01 01:23:09 INFO TaskSetManager: Starting task 15.0 in stage 1222.0 (TID 2286) (192.168.1.48, executor driver, partition 15, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map()
21/12/01 01:23:09 INFO TaskSetManager: Finished task 6.0 in stage 1222.0 (TID 2277) in 770 ms on 192.168.1.48 (executor driver) (4/33) | |
21/12/01 01:23:09 INFO Executor: Running task 15.0 in stage 1222.0 (TID 2286) | |
21/12/01 01:23:09 INFO Executor: Finished task 11.0 in stage 1222.0 (TID 2282). 941 bytes result sent to driver | |
21/12/01 01:23:09 INFO Executor: Finished task 8.0 in stage 1222.0 (TID 2279). 941 bytes result sent to driver | |
21/12/01 01:23:09 INFO Executor: Finished task 3.0 in stage 1222.0 (TID 2274). 951 bytes result sent to driver | |
21/12/01 01:23:09 INFO Executor: Finished task 10.0 in stage 1222.0 (TID 2281). 950 bytes result sent to driver | |
21/12/01 01:23:09 INFO Executor: Finished task 1.0 in stage 1222.0 (TID 2272). 952 bytes result sent to driver | |
21/12/01 01:23:09 INFO Executor: Finished task 9.0 in stage 1222.0 (TID 2280). 951 bytes result sent to driver | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 16.0 in stage 1222.0 (TID 2287) (192.168.1.48, executor driver, partition 16, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO Executor: Running task 16.0 in stage 1222.0 (TID 2287) | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 17.0 in stage 1222.0 (TID 2288) (192.168.1.48, executor driver, partition 17, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO Executor: Running task 17.0 in stage 1222.0 (TID 2288) | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 18.0 in stage 1222.0 (TID 2289) (192.168.1.48, executor driver, partition 18, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 11.0 in stage 1222.0 (TID 2282) in 801 ms on 192.168.1.48 (executor driver) (5/33) | |
21/12/01 01:23:09 INFO Executor: Running task 18.0 in stage 1222.0 (TID 2289) | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 19.0 in stage 1222.0 (TID 2290) (192.168.1.48, executor driver, partition 19, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 10.0 in stage 1222.0 (TID 2281) in 801 ms on 192.168.1.48 (executor driver) (6/33) | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 8.0 in stage 1222.0 (TID 2279) in 801 ms on 192.168.1.48 (executor driver) (7/33) | |
21/12/01 01:23:09 INFO Executor: Running task 19.0 in stage 1222.0 (TID 2290) | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 20.0 in stage 1222.0 (TID 2291) (192.168.1.48, executor driver, partition 20, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 21.0 in stage 1222.0 (TID 2292) (192.168.1.48, executor driver, partition 21, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 3.0 in stage 1222.0 (TID 2274) in 803 ms on 192.168.1.48 (executor driver) (8/33) | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 1.0 in stage 1222.0 (TID 2272) in 803 ms on 192.168.1.48 (executor driver) (9/33) | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 9.0 in stage 1222.0 (TID 2280) in 802 ms on 192.168.1.48 (executor driver) (10/33) | |
21/12/01 01:23:09 INFO Executor: Running task 20.0 in stage 1222.0 (TID 2291) | |
21/12/01 01:23:09 INFO Executor: Running task 21.0 in stage 1222.0 (TID 2292) | |
21/12/01 01:23:09 INFO Executor: Finished task 0.0 in stage 1222.0 (TID 2271). 953 bytes result sent to driver | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 22.0 in stage 1222.0 (TID 2293) (192.168.1.48, executor driver, partition 22, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO Executor: Running task 22.0 in stage 1222.0 (TID 2293) | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 0.0 in stage 1222.0 (TID 2271) in 810 ms on 192.168.1.48 (executor driver) (11/33) | |
21/12/01 01:23:09 INFO BlockManagerInfo: Removed broadcast_1129_piece0 on 192.168.1.48:56496 in memory (size: 113.3 KiB, free: 365.7 MiB) | |
21/12/01 01:23:09 INFO BlockManagerInfo: Removed broadcast_1131_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.7 MiB) | |
21/12/01 01:23:09 INFO BlockManagerInfo: Removed broadcast_1128_piece0 on 192.168.1.48:56496 in memory (size: 110.4 KiB, free: 365.8 MiB) | |
21/12/01 01:23:09 INFO BlockManagerInfo: Removed broadcast_1132_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.9 MiB) | |
21/12/01 01:23:09 INFO BlockManagerInfo: Removed broadcast_1125_piece0 on 192.168.1.48:56496 in memory (size: 35.3 KiB, free: 365.9 MiB) | |
21/12/01 01:23:09 INFO BlockManagerInfo: Removed broadcast_1130_piece0 on 192.168.1.48:56496 in memory (size: 150.3 KiB, free: 366.0 MiB) | |
21/12/01 01:23:09 INFO Executor: Finished task 4.0 in stage 1222.0 (TID 2275). 950 bytes result sent to driver | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 23.0 in stage 1222.0 (TID 2294) (192.168.1.48, executor driver, partition 23, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 4.0 in stage 1222.0 (TID 2275) in 838 ms on 192.168.1.48 (executor driver) (12/33) | |
21/12/01 01:23:09 INFO Executor: Running task 23.0 in stage 1222.0 (TID 2294) | |
21/12/01 01:23:09 INFO Executor: Finished task 12.0 in stage 1222.0 (TID 2283). 951 bytes result sent to driver | |
21/12/01 01:23:09 INFO TaskSetManager: Starting task 24.0 in stage 1222.0 (TID 2295) (192.168.1.48, executor driver, partition 24, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:09 INFO TaskSetManager: Finished task 12.0 in stage 1222.0 (TID 2283) in 456 ms on 192.168.1.48 (executor driver) (13/33) | |
21/12/01 01:23:09 INFO Executor: Running task 24.0 in stage 1222.0 (TID 2295) | |
21/12/01 01:23:10 INFO Executor: Finished task 15.0 in stage 1222.0 (TID 2286). 951 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 25.0 in stage 1222.0 (TID 2296) (192.168.1.48, executor driver, partition 25, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 15.0 in stage 1222.0 (TID 2286) in 486 ms on 192.168.1.48 (executor driver) (14/33) | |
21/12/01 01:23:10 INFO Executor: Running task 25.0 in stage 1222.0 (TID 2296) | |
21/12/01 01:23:10 INFO Executor: Finished task 14.0 in stage 1222.0 (TID 2285). 941 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 26.0 in stage 1222.0 (TID 2297) (192.168.1.48, executor driver, partition 26, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 14.0 in stage 1222.0 (TID 2285) in 495 ms on 192.168.1.48 (executor driver) (15/33) | |
21/12/01 01:23:10 INFO Executor: Running task 26.0 in stage 1222.0 (TID 2297) | |
21/12/01 01:23:10 INFO Executor: Finished task 17.0 in stage 1222.0 (TID 2288). 898 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 27.0 in stage 1222.0 (TID 2298) (192.168.1.48, executor driver, partition 27, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 17.0 in stage 1222.0 (TID 2288) in 464 ms on 192.168.1.48 (executor driver) (16/33) | |
21/12/01 01:23:10 INFO Executor: Running task 27.0 in stage 1222.0 (TID 2298) | |
21/12/01 01:23:10 INFO Executor: Finished task 19.0 in stage 1222.0 (TID 2290). 907 bytes result sent to driver | |
21/12/01 01:23:10 INFO Executor: Finished task 16.0 in stage 1222.0 (TID 2287). 907 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 28.0 in stage 1222.0 (TID 2299) (192.168.1.48, executor driver, partition 28, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO Executor: Running task 28.0 in stage 1222.0 (TID 2299) | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 29.0 in stage 1222.0 (TID 2300) (192.168.1.48, executor driver, partition 29, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 19.0 in stage 1222.0 (TID 2290) in 472 ms on 192.168.1.48 (executor driver) (17/33) | |
21/12/01 01:23:10 INFO Executor: Running task 29.0 in stage 1222.0 (TID 2300) | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 16.0 in stage 1222.0 (TID 2287) in 473 ms on 192.168.1.48 (executor driver) (18/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 18.0 in stage 1222.0 (TID 2289). 908 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 30.0 in stage 1222.0 (TID 2301) (192.168.1.48, executor driver, partition 30, PROCESS_LOCAL, 4438 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 18.0 in stage 1222.0 (TID 2289) in 477 ms on 192.168.1.48 (executor driver) (19/33) | |
21/12/01 01:23:10 INFO Executor: Running task 30.0 in stage 1222.0 (TID 2301) | |
21/12/01 01:23:10 INFO Executor: Finished task 22.0 in stage 1222.0 (TID 2293). 907 bytes result sent to driver | |
21/12/01 01:23:10 INFO Executor: Finished task 20.0 in stage 1222.0 (TID 2291). 898 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 31.0 in stage 1222.0 (TID 2302) (192.168.1.48, executor driver, partition 31, PROCESS_LOCAL, 4437 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO Executor: Running task 31.0 in stage 1222.0 (TID 2302) | |
21/12/01 01:23:10 INFO TaskSetManager: Starting task 32.0 in stage 1222.0 (TID 2303) (192.168.1.48, executor driver, partition 32, PROCESS_LOCAL, 4428 bytes) taskResourceAssignments Map() | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 22.0 in stage 1222.0 (TID 2293) in 487 ms on 192.168.1.48 (executor driver) (20/33) | |
21/12/01 01:23:10 INFO Executor: Running task 32.0 in stage 1222.0 (TID 2303) | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 20.0 in stage 1222.0 (TID 2291) in 495 ms on 192.168.1.48 (executor driver) (21/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 23.0 in stage 1222.0 (TID 2294). 898 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 23.0 in stage 1222.0 (TID 2294) in 469 ms on 192.168.1.48 (executor driver) (22/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 13.0 in stage 1222.0 (TID 2284). 950 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 13.0 in stage 1222.0 (TID 2284) in 554 ms on 192.168.1.48 (executor driver) (23/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 21.0 in stage 1222.0 (TID 2292). 908 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 21.0 in stage 1222.0 (TID 2292) in 524 ms on 192.168.1.48 (executor driver) (24/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 24.0 in stage 1222.0 (TID 2295). 908 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 24.0 in stage 1222.0 (TID 2295) in 456 ms on 192.168.1.48 (executor driver) (25/33) | |
21/12/01 01:23:10 ERROR HoodieTimelineArchiveLog: Failed to archive commits, .commit file: 20211201002149590.deltacommit.requested | |
org.apache.hudi.exception.HoodieIOException: Could not read commit details from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit.requested | |
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.readDataFromPath(HoodieActiveTimeline.java:634) | |
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.getInstantDetails(HoodieActiveTimeline.java:250) | |
at org.apache.hudi.client.utils.MetadataConversionUtils.createMetaWrapper(MetadataConversionUtils.java:72) | |
at org.apache.hudi.table.HoodieTimelineArchiveLog.convertToAvroRecord(HoodieTimelineArchiveLog.java:358) | |
at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:321) | |
at org.apache.hudi.table.HoodieTimelineArchiveLog.archiveIfRequired(HoodieTimelineArchiveLog.java:130) | |
at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:454) | |
at org.apache.hudi.client.SparkRDDWriteClient.postWrite(SparkRDDWriteClient.java:280) | |
at org.apache.hudi.client.SparkRDDWriteClient.upsertPreppedRecords(SparkRDDWriteClient.java:173) | |
at org.apache.hudi.metadata.SparkHoodieBackedTableMetadataWriter.commit(SparkHoodieBackedTableMetadataWriter.java:146) | |
at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.processAndCommit(HoodieBackedTableMetadataWriter.java:590) | |
at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.update(HoodieBackedTableMetadataWriter.java:602) | |
at org.apache.hudi.client.SparkRDDWriteClient.lambda$writeTableMetadataForTableServices$5(SparkRDDWriteClient.java:420) | |
at org.apache.hudi.common.util.Option.ifPresent(Option.java:96) | |
at org.apache.hudi.client.SparkRDDWriteClient.writeTableMetadataForTableServices(SparkRDDWriteClient.java:419) | |
at org.apache.hudi.client.SparkRDDWriteClient.completeClustering(SparkRDDWriteClient.java:384) | |
at org.apache.hudi.client.SparkRDDWriteClient.completeTableService(SparkRDDWriteClient.java:470) | |
at org.apache.hudi.client.SparkRDDWriteClient.cluster(SparkRDDWriteClient.java:364) | |
at org.apache.hudi.client.HoodieSparkClusteringClient.cluster(HoodieSparkClusteringClient.java:54) | |
at org.apache.hudi.async.AsyncClusteringService.lambda$null$1(AsyncClusteringService.java:79) | |
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) | |
at java.lang.Thread.run(Thread.java:748) | |
Caused by: java.io.FileNotFoundException: No such file or directory: s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit.requested | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3356) | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185) | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.extractOrFetchSimpleFileStatus(S3AFileSystem.java:4903) | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1200) | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1178) | |
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976) | |
at org.apache.hudi.common.fs.HoodieWrapperFileSystem.open(HoodieWrapperFileSystem.java:459) | |
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.readDataFromPath(HoodieActiveTimeline.java:631) | |
... 23 more | |
21/12/01 01:23:10 INFO Executor: Finished task 26.0 in stage 1222.0 (TID 2297). 898 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 26.0 in stage 1222.0 (TID 2297) in 449 ms on 192.168.1.48 (executor driver) (26/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 25.0 in stage 1222.0 (TID 2296). 907 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 25.0 in stage 1222.0 (TID 2296) in 461 ms on 192.168.1.48 (executor driver) (27/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 27.0 in stage 1222.0 (TID 2298). 908 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 27.0 in stage 1222.0 (TID 2298) in 465 ms on 192.168.1.48 (executor driver) (28/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 29.0 in stage 1222.0 (TID 2300). 898 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 29.0 in stage 1222.0 (TID 2300) in 468 ms on 192.168.1.48 (executor driver) (29/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 28.0 in stage 1222.0 (TID 2299). 907 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 28.0 in stage 1222.0 (TID 2299) in 478 ms on 192.168.1.48 (executor driver) (30/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 31.0 in stage 1222.0 (TID 2302). 907 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 31.0 in stage 1222.0 (TID 2302) in 469 ms on 192.168.1.48 (executor driver) (31/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 32.0 in stage 1222.0 (TID 2303). 898 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 32.0 in stage 1222.0 (TID 2303) in 471 ms on 192.168.1.48 (executor driver) (32/33) | |
21/12/01 01:23:10 INFO Executor: Finished task 30.0 in stage 1222.0 (TID 2301). 908 bytes result sent to driver | |
21/12/01 01:23:10 INFO TaskSetManager: Finished task 30.0 in stage 1222.0 (TID 2301) in 492 ms on 192.168.1.48 (executor driver) (33/33) | |
21/12/01 01:23:10 INFO TaskSchedulerImpl: Removed TaskSet 1222.0, whose tasks have all completed, from pool | |
21/12/01 01:23:10 INFO DAGScheduler: ResultStage 1222 (collectAsMap at HoodieSparkEngineContext.java:148) finished in 1.786 s | |
21/12/01 01:23:10 INFO DAGScheduler: Job 828 is finished. Cancelling potential speculative or zombie tasks for this job | |
21/12/01 01:23:10 INFO TaskSchedulerImpl: Killing all running tasks in stage 1222: Stage finished | |
21/12/01 01:23:10 INFO DAGScheduler: Job 828 finished: collectAsMap at HoodieSparkEngineContext.java:148, took 1.787251 s | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002953399.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001222696.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001615832.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001222696.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002458660.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001327610.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002536353.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002953399.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001222696.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001615832.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201003049103.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201000952501001.compaction.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001327610.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002953399.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002536353.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201000952501001.commit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001916822.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002228421.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002458660.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001916822.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001916822.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002228421.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001615832.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002458660.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201001327610.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201003049103.deltacommit : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002536353.deltacommit.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002228421.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201003049103.deltacommit.requested : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Archived and deleted instant file s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201000952501001.compaction.inflight : true | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Latest Committed Instant=Option{val=[20211201003049103__deltacommit__COMPLETED]} | |
21/12/01 01:23:10 INFO HoodieTimelineArchiveLog: Deleting instant [==>20211201000952501001__compaction__REQUESTED] in auxiliary meta path s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.aux | |
21/12/01 01:23:11 INFO HoodieHeartbeatClient: Stopping heartbeat for instant 20211201011347895 | |
21/12/01 01:23:11 INFO HoodieHeartbeatClient: Stopped heartbeat for instant 20211201011347895 | |
21/12/01 01:23:11 INFO HoodieTimelineArchiveLog: Deleted instant file in auxiliary metapath : s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/.aux/20211201000952501001.compaction.requested | |
21/12/01 01:23:11 INFO HeartbeatUtils: Deleted the heartbeat for instant 20211201011347895 | |
21/12/01 01:23:11 INFO HoodieHeartbeatClient: Deleted heartbeat file for instant 20211201011347895 | |
21/12/01 01:23:11 ERROR HoodieAsyncService: Monitor noticed one or more threads failed. Requesting graceful shutdown of other threads | |
java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieClusteringException: unable to transition clustering inflight to complete: 20211201011347895 | |
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) | |
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) | |
at org.apache.hudi.async.HoodieAsyncService.lambda$monitorThreads$1(HoodieAsyncService.java:158) | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) | |
at java.util.concurrent.FutureTask.run(FutureTask.java:266) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) | |
at java.lang.Thread.run(Thread.java:748) | |
Caused by: org.apache.hudi.exception.HoodieClusteringException: unable to transition clustering inflight to complete: 20211201011347895 | |
at org.apache.hudi.client.SparkRDDWriteClient.completeClustering(SparkRDDWriteClient.java:395) | |
at org.apache.hudi.client.SparkRDDWriteClient.completeTableService(SparkRDDWriteClient.java:470) | |
at org.apache.hudi.client.SparkRDDWriteClient.cluster(SparkRDDWriteClient.java:364) | |
at org.apache.hudi.client.HoodieSparkClusteringClient.cluster(HoodieSparkClusteringClient.java:54) | |
at org.apache.hudi.async.AsyncClusteringService.lambda$null$1(AsyncClusteringService.java:79) | |
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) | |
... 3 more | |
Caused by: org.apache.hudi.exception.HoodieCommitException: Failed to archive commits | |
at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:334) | |
at org.apache.hudi.table.HoodieTimelineArchiveLog.archiveIfRequired(HoodieTimelineArchiveLog.java:130) | |
at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:454) | |
at org.apache.hudi.client.SparkRDDWriteClient.postWrite(SparkRDDWriteClient.java:280) | |
at org.apache.hudi.client.SparkRDDWriteClient.upsertPreppedRecords(SparkRDDWriteClient.java:173) | |
at org.apache.hudi.metadata.SparkHoodieBackedTableMetadataWriter.commit(SparkHoodieBackedTableMetadataWriter.java:146) | |
at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.processAndCommit(HoodieBackedTableMetadataWriter.java:590) | |
at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.update(HoodieBackedTableMetadataWriter.java:602) | |
at org.apache.hudi.client.SparkRDDWriteClient.lambda$writeTableMetadataForTableServices$5(SparkRDDWriteClient.java:420) | |
at org.apache.hudi.common.util.Option.ifPresent(Option.java:96) | |
at org.apache.hudi.client.SparkRDDWriteClient.writeTableMetadataForTableServices(SparkRDDWriteClient.java:419) | |
at org.apache.hudi.client.SparkRDDWriteClient.completeClustering(SparkRDDWriteClient.java:384) | |
... 8 more | |
Caused by: org.apache.hudi.exception.HoodieIOException: Could not read commit details from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit.requested
	at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.readDataFromPath(HoodieActiveTimeline.java:634)
	at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.getInstantDetails(HoodieActiveTimeline.java:250)
	at org.apache.hudi.client.utils.MetadataConversionUtils.createMetaWrapper(MetadataConversionUtils.java:72)
	at org.apache.hudi.table.HoodieTimelineArchiveLog.convertToAvroRecord(HoodieTimelineArchiveLog.java:358)
	at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:321)
	... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory: s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/20211201002149590.deltacommit.requested
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3356)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.extractOrFetchSimpleFileStatus(S3AFileSystem.java:4903)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1200)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1178)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976)
	at org.apache.hudi.common.fs.HoodieWrapperFileSystem.open(HoodieWrapperFileSystem.java:459)
	at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.readDataFromPath(HoodieActiveTimeline.java:631)
	... 23 more
21/12/01 01:23:12 INFO HoodieHeartbeatClient: Stopping heartbeat for instant 20211201012216421
21/12/01 01:23:12 INFO HoodieHeartbeatClient: Stopped heartbeat for instant 20211201012216421
21/12/01 01:23:12 INFO HeartbeatUtils: Deleted the heartbeat for instant 20211201012216421
21/12/01 01:23:12 INFO HoodieHeartbeatClient: Deleted heartbeat file for instant 20211201012216421
21/12/01 01:23:13 INFO SparkContext: Starting job: collect at SparkHoodieBackedTableMetadataWriter.java:146
21/12/01 01:23:13 INFO DAGScheduler: Got job 829 (collect at SparkHoodieBackedTableMetadataWriter.java:146) with 1 output partitions
21/12/01 01:23:13 INFO DAGScheduler: Final stage: ResultStage 1224 (collect at SparkHoodieBackedTableMetadataWriter.java:146)
21/12/01 01:23:13 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1223)
21/12/01 01:23:13 INFO DAGScheduler: Missing parents: List()
21/12/01 01:23:13 INFO DAGScheduler: Submitting ResultStage 1224 (MapPartitionsRDD[2758] at flatMap at BaseSparkCommitActionExecutor.java:176), which has no missing parents
21/12/01 01:23:13 INFO MemoryStore: Block broadcast_1134 stored as values in memory (estimated size 424.6 KiB, free 364.6 MiB)
21/12/01 01:23:13 INFO MemoryStore: Block broadcast_1134_piece0 stored as bytes in memory (estimated size 150.1 KiB, free 364.5 MiB)
21/12/01 01:23:13 INFO BlockManagerInfo: Added broadcast_1134_piece0 in memory on 192.168.1.48:56496 (size: 150.1 KiB, free: 365.9 MiB)
21/12/01 01:23:13 INFO SparkContext: Created broadcast 1134 from broadcast at DAGScheduler.scala:1427
21/12/01 01:23:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1224 (MapPartitionsRDD[2758] at flatMap at BaseSparkCommitActionExecutor.java:176) (first 15 tasks are for partitions Vector(0))
21/12/01 01:23:13 INFO TaskSchedulerImpl: Adding task set 1224.0 with 1 tasks resource profile 0
21/12/01 01:23:13 INFO TaskSetManager: Starting task 0.0 in stage 1224.0 (TID 2304) (192.168.1.48, executor driver, partition 0, PROCESS_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/12/01 01:23:13 INFO Executor: Running task 0.0 in stage 1224.0 (TID 2304)
21/12/01 01:23:13 INFO BlockManager: Found block rdd_2758_0 locally
21/12/01 01:23:13 INFO Executor: Finished task 0.0 in stage 1224.0 (TID 2304). 1907 bytes result sent to driver
21/12/01 01:23:13 INFO TaskSetManager: Finished task 0.0 in stage 1224.0 (TID 2304) in 15 ms on 192.168.1.48 (executor driver) (1/1)
21/12/01 01:23:13 INFO TaskSchedulerImpl: Removed TaskSet 1224.0, whose tasks have all completed, from pool
21/12/01 01:23:13 INFO DAGScheduler: ResultStage 1224 (collect at SparkHoodieBackedTableMetadataWriter.java:146) finished in 0.064 s
21/12/01 01:23:13 INFO DAGScheduler: Job 829 is finished. Cancelling potential speculative or zombie tasks for this job
21/12/01 01:23:13 INFO TaskSchedulerImpl: Killing all running tasks in stage 1224: Stage finished
21/12/01 01:23:13 INFO DAGScheduler: Job 829 finished: collect at SparkHoodieBackedTableMetadataWriter.java:146, took 0.064569 s
21/12/01 01:23:13 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__deltacommit__COMPLETED]}
21/12/01 01:23:13 INFO HoodieActiveTimeline: Checking for file exists ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201012216421.clean.inflight
21/12/01 01:23:13 INFO HoodieActiveTimeline: Create new file for toInstant ?s3a://hudi-testing/test_hoodie_table_2/.hoodie/20211201012216421.clean
21/12/01 01:23:13 INFO CleanActionExecutor: Marked clean started on 20211201012216421 as complete
21/12/01 01:23:14 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__clean__COMPLETED]}
21/12/01 01:23:14 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__clean__COMPLETED]}
21/12/01 01:23:14 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:14 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:23:14 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:14 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:23:15 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:23:15 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:23:15 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__deltacommit__COMPLETED]}
21/12/01 01:23:15 INFO HoodieTimelineArchiveLog: Limiting archiving of instants to latest compaction on metadata table at 20211201004828250001
21/12/01 01:23:15 INFO HoodieTimelineArchiveLog: No Instants to archive
21/12/01 01:23:15 INFO AbstractHoodieWriteClient: Committed 20211201011944112
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2778 from persistence list
21/12/01 01:23:15 INFO UnionRDD: Removing RDD 2703 from persistence list
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2778
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2758 from persistence list
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2703
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2741 from persistence list
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2758
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2741
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2709 from persistence list
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2724 from persistence list
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2709
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2724
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2753 from persistence list
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2753
21/12/01 01:23:15 INFO MapPartitionsRDD: Removing RDD 2652 from persistence list
21/12/01 01:23:15 INFO BlockManager: Removing RDD 2652
21/12/01 01:23:15 INFO FileSystemViewManager: Creating remote view for basePath s3a://hudi-testing/test_hoodie_table_2. Server=192.168.1.48:56507, Timeout=300
21/12/01 01:23:15 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:23:17 INFO AbstractTableFileSystemView: Took 2161 ms to read 9 instants, 66 replaced file groups
21/12/01 01:23:18 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:23:18 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201011818630&timelinehash=2c6fcddbbde6555c67569b39a63937bbe026f05ac6b0cf3d76179991f9893481)
21/12/01 01:23:18 INFO RocksDbBasedFileSystemView: Closing Rocksdb !!
21/12/01 01:23:18 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:463] Shutdown: canceling all background work
21/12/01 01:23:18 INFO RocksDBDAO: From Rocks DB : [/db_impl/db_impl.cc:642] Shutdown complete
21/12/01 01:23:18 INFO RocksDbBasedFileSystemView: Closed Rocksdb !!
21/12/01 01:23:20 INFO AbstractTableFileSystemView: Took 1979 ms to read 9 instants, 66 replaced file groups
21/12/01 01:23:20 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:23:20 INFO DeltaSync: Commit 20211201011944112 successful!
21/12/01 01:23:20 INFO AbstractHoodieWriteClient: Scheduling table service CLUSTER
21/12/01 01:23:20 INFO AbstractHoodieWriteClient: Scheduling clustering at instant time :20211201012320936
21/12/01 01:23:20 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:21 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:23:21 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:21 INFO HoodieTableMetaClient: Loading Active commit timeline for s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:21 INFO HoodieActiveTimeline: Loaded instants upto : Option{val=[20211201012216421__clean__COMPLETED]}
21/12/01 01:23:21 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:21 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/hoodie.properties
21/12/01 01:23:21 INFO HoodieTableMetaClient: Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:21 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:23:22 INFO HoodieTableConfig: Loading table properties from s3a://hudi-testing/test_hoodie_table_2/.hoodie/metadata/.hoodie/hoodie.properties
21/12/01 01:23:22 INFO HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=HFILE) from s3a://hudi-testing/test_hoodie_table_2//.hoodie/metadata
21/12/01 01:23:22 INFO FileSystemViewManager: Creating View Manager with storage type :REMOTE_FIRST
21/12/01 01:23:22 INFO FileSystemViewManager: Creating remote first table view
21/12/01 01:23:22 INFO FileSystemViewManager: Creating remote view for basePath s3a://hudi-testing/test_hoodie_table_2. Server=192.168.1.48:56507, Timeout=300
21/12/01 01:23:22 INFO FileSystemViewManager: Creating InMemory based view for basePath s3a://hudi-testing/test_hoodie_table_2
21/12/01 01:23:24 INFO AbstractTableFileSystemView: Took 2030 ms to read 9 instants, 66 replaced file groups
21/12/01 01:23:25 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:23:25 INFO RemoteHoodieTableFileSystemView: Sending request : (http://192.168.1.48:56507/v1/hoodie/view/refresh/?basepath=s3a%3A%2F%2Fhudi-testing%2Ftest_hoodie_table_2&lastinstantts=20211201012216421&timelinehash=0529dfed5c0609dc92b701eb362ab78157ef06a68da25241f0600f3c2058e54d)
21/12/01 01:23:27 INFO AbstractTableFileSystemView: Took 2050 ms to read 9 instants, 66 replaced file groups
21/12/01 01:23:27 INFO ClusteringUtils: Found 9 files in pending clustering operations
21/12/01 01:23:27 INFO BaseClusteringPlanActionExecutor: Checking if clustering needs to be run on s3a://hudi-testing/test_hoodie_table_2/
21/12/01 01:23:27 INFO BaseClusteringPlanActionExecutor: Not scheduling async clustering as only 3 commits was found since last clustering Option{val=[20211201010227744__replacecommit__COMPLETED]}. Waiting for 4
21/12/01 01:23:27 INFO HoodieDeltaStreamer: Delta Sync shutdown. Error ?false
21/12/01 01:23:27 WARN HoodieDeltaStreamer: Gracefully shutting down clustering service
21/12/01 01:23:27 INFO HoodieDeltaStreamer: Delta Sync shutting down
21/12/01 01:23:27 INFO HoodieDeltaStreamer: DeltaSync shutdown. Closing write client. Error?false
21/12/01 01:23:27 INFO DeltaSync: Shutting down embedded timeline server
21/12/01 01:23:27 INFO EmbeddedTimelineService: Closing Timeline server
21/12/01 01:23:27 INFO TimelineService: Closing Timeline Service
21/12/01 01:23:27 INFO Javalin: Stopping Javalin ...
21/12/01 01:23:27 INFO Javalin: Javalin has stopped
21/12/01 01:23:27 INFO TimelineService: Closed Timeline Service
21/12/01 01:23:27 INFO EmbeddedTimelineService: Closed Timeline server
21/12/01 01:23:27 INFO SparkUI: Stopped Spark web UI at http://192.168.1.48:4040
21/12/01 01:23:27 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/12/01 01:23:27 INFO MemoryStore: MemoryStore cleared
21/12/01 01:23:27 INFO BlockManager: BlockManager stopped
21/12/01 01:23:27 INFO BlockManagerMaster: BlockManagerMaster stopped
21/12/01 01:23:27 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/12/01 01:23:27 INFO SparkContext: Successfully stopped SparkContext
21/12/01 01:23:27 INFO ShutdownHookManager: Shutdown hook called
21/12/01 01:23:27 INFO ShutdownHookManager: Deleting directory /private/var/folders/3w/k9yh6qsj38dfmcscllv6_9sm0000gn/T/spark-c0ac2d62-01d1-4909-a8ec-37a5009cf530
21/12/01 01:23:27 INFO ShutdownHookManager: Deleting directory /private/var/folders/3w/k9yh6qsj38dfmcscllv6_9sm0000gn/T/spark-483fe6eb-1ddc-43ce-9411-d7885f401a54
21/12/01 01:23:27 INFO MetricsSystemImpl: Stopping s3a-file-system metrics system...
21/12/01 01:23:27 INFO MetricsSystemImpl: s3a-file-system metrics system stopped.
21/12/01 01:23:27 INFO MetricsSystemImpl: s3a-file-system metrics system shutdown complete.