Created August 12, 2022 14:02 (gist lukemarsden/e99f0f3c2a7aac6c0e5bf08938356fca)
22/08/12 14:59:23 WARN Utils: Your hostname, mind resolves to a loopback address: 127.0.1.1; using 10.1.255.235 instead (on interface enp6s0f0)
22/08/12 14:59:23 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/08/12 14:59:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/08/12 14:59:24 INFO SparkContext: Running Spark version 3.3.0
22/08/12 14:59:24 INFO ResourceUtils: ==============================================================
22/08/12 14:59:24 INFO ResourceUtils: No custom resources configured for spark.driver.
22/08/12 14:59:24 INFO ResourceUtils: ==============================================================
22/08/12 14:59:24 INFO SparkContext: Submitted application: spark.py
22/08/12 14:59:24 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
22/08/12 14:59:24 INFO ResourceProfile: Limiting resource is cpu
22/08/12 14:59:24 INFO ResourceProfileManager: Added ResourceProfile id: 0
22/08/12 14:59:24 INFO SecurityManager: Changing view acls to: luke
22/08/12 14:59:24 INFO SecurityManager: Changing modify acls to: luke
22/08/12 14:59:24 INFO SecurityManager: Changing view acls groups to:
22/08/12 14:59:24 INFO SecurityManager: Changing modify acls groups to:
22/08/12 14:59:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(luke); groups with view permissions: Set(); users with modify permissions: Set(luke); groups with modify permissions: Set()
22/08/12 14:59:24 INFO Utils: Successfully started service 'sparkDriver' on port 37009.
22/08/12 14:59:24 INFO SparkEnv: Registering MapOutputTracker
22/08/12 14:59:24 INFO SparkEnv: Registering BlockManagerMaster
22/08/12 14:59:24 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/08/12 14:59:24 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/08/12 14:59:24 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
22/08/12 14:59:24 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-98ce8263-be85-463f-acce-0d1cf7788114
22/08/12 14:59:24 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
22/08/12 14:59:24 INFO SparkEnv: Registering OutputCommitCoordinator
22/08/12 14:59:24 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/08/12 14:59:24 INFO SparkContext: Added JAR file:///home/luke/pp/pachyderm/spark/hadoop-aws-3.3.3.jar at spark://10.1.255.235:37009/jars/hadoop-aws-3.3.3.jar with timestamp 1660312764190
22/08/12 14:59:24 INFO SparkContext: Added JAR file:///home/luke/pp/pachyderm/spark/aws-java-sdk-bundle-1.12.264.jar at spark://10.1.255.235:37009/jars/aws-java-sdk-bundle-1.12.264.jar with timestamp 1660312764190
22/08/12 14:59:24 INFO Executor: Starting executor ID driver on host 10.1.255.235
22/08/12 14:59:24 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
22/08/12 14:59:24 INFO Executor: Fetching spark://10.1.255.235:37009/jars/hadoop-aws-3.3.3.jar with timestamp 1660312764190
22/08/12 14:59:24 INFO TransportClientFactory: Successfully created connection to /10.1.255.235:37009 after 30 ms (0 ms spent in bootstraps)
22/08/12 14:59:24 INFO Utils: Fetching spark://10.1.255.235:37009/jars/hadoop-aws-3.3.3.jar to /tmp/spark-1d457905-e782-4bdc-99aa-a824cf000a41/userFiles-f7bf2ef3-90b5-4720-a32f-15c2dcee06c2/fetchFileTemp12079591392553852332.tmp
22/08/12 14:59:25 INFO Executor: Adding file:/tmp/spark-1d457905-e782-4bdc-99aa-a824cf000a41/userFiles-f7bf2ef3-90b5-4720-a32f-15c2dcee06c2/hadoop-aws-3.3.3.jar to class loader
22/08/12 14:59:25 INFO Executor: Fetching spark://10.1.255.235:37009/jars/aws-java-sdk-bundle-1.12.264.jar with timestamp 1660312764190
22/08/12 14:59:25 INFO Utils: Fetching spark://10.1.255.235:37009/jars/aws-java-sdk-bundle-1.12.264.jar to /tmp/spark-1d457905-e782-4bdc-99aa-a824cf000a41/userFiles-f7bf2ef3-90b5-4720-a32f-15c2dcee06c2/fetchFileTemp9989175407800010657.tmp
22/08/12 14:59:25 INFO Executor: Adding file:/tmp/spark-1d457905-e782-4bdc-99aa-a824cf000a41/userFiles-f7bf2ef3-90b5-4720-a32f-15c2dcee06c2/aws-java-sdk-bundle-1.12.264.jar to class loader
22/08/12 14:59:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44159.
22/08/12 14:59:25 INFO NettyBlockTransferService: Server created on 10.1.255.235:44159
22/08/12 14:59:25 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/08/12 14:59:25 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 14:59:25 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.255.235:44159 with 434.4 MiB RAM, BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 14:59:25 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 14:59:25 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.255.235, 44159, None)
[('spark.driver.extraJavaOptions', '-XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED'), ('spark.hadoop.fs.s3a.connection.ssl.enabled', 'false'), ('spark.driver.host', '10.1.255.235'), ('spark.hadoop.fs.s3a.path.style.access', 'true'), ('spark.repl.local.jars', 'file:///home/luke/pp/pachyderm/spark/hadoop-aws-3.3.3.jar,file:///home/luke/pp/pachyderm/spark/aws-java-sdk-bundle-1.12.264.jar'), ('spark.app.initial.jar.urls', 'spark://10.1.255.235:37009/jars/aws-java-sdk-bundle-1.12.264.jar,spark://10.1.255.235:37009/jars/hadoop-aws-3.3.3.jar'), ('spark.app.id', 'local-1660312764864'), ('spark.executor.id', 'driver'), ('spark.hadoop.fs.s3a.change.detection.mode', 'none'), ('spark.jars', 'file:///home/luke/pp/pachyderm/spark/hadoop-aws-3.3.3.jar,file:///home/luke/pp/pachyderm/spark/aws-java-sdk-bundle-1.12.264.jar'), ('spark.hadoop.fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'), ('spark.app.startTime', '1660312764190'), ('spark.driver.port', '37009'), ('spark.hadoop.fs.s3a.change.detection.version.required', 'false'), ('spark.app.name', 'spark.py'), ('spark.rdd.compress', 'True'), ('spark.hadoop.fs.s3a.endpoint', 'http://localhost:30600'), ('spark.executor.extraJavaOptions', '-XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED'), ('spark.hadoop.fs.s3a.access.key', 'lemon'), ('spark.serializer.objectStreamReset', '100'), ('spark.master', 'local[*]'), ('spark.submit.pyFiles', ''), ('spark.hadoop.fs.s3a.secret.key', 'lemon'), ('spark.submit.deployMode', 'client'), ('spark.app.submitTime', '1660312763713')]
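The config dump above (presumably printed via `spark.sparkContext.getConf().getAll()`) shows the S3A client pointed at a Pachyderm S3 gateway on `localhost:30600` with path-style access and change detection disabled. A minimal sketch of how such a session could be configured, assuming only the key/value pairs visible in the dump; everything else (function names, structure) is illustrative:

```python
# The s3a settings below are copied from the config dump in this log;
# the endpoint is the Pachyderm S3 gateway, and the dummy credentials
# ("lemon"/"lemon") are whatever the gateway was set up to accept.
s3a_conf = {
    "spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
    "spark.hadoop.fs.s3a.endpoint": "http://localhost:30600",
    "spark.hadoop.fs.s3a.access.key": "lemon",
    "spark.hadoop.fs.s3a.secret.key": "lemon",
    "spark.hadoop.fs.s3a.path.style.access": "true",
    "spark.hadoop.fs.s3a.connection.ssl.enabled": "false",
    "spark.hadoop.fs.s3a.change.detection.mode": "none",
    "spark.hadoop.fs.s3a.change.detection.version.required": "false",
}

def build_session(conf):
    # Requires pyspark plus hadoop-aws and aws-java-sdk-bundle on the
    # classpath (the two JARs fetched earlier in this log).
    from pyspark.sql import SparkSession
    builder = SparkSession.builder.appName("spark.py").master("local[*]")
    for key, value in conf.items():
        builder = builder.config(key, value)
    return builder.getOrCreate()
```

With a session built this way, `s3a://<repo>.<branch>/...` paths resolve against the gateway instead of AWS.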
22/08/12 14:59:25 DEBUG FileSystem: Looking for FS supporting file
22/08/12 14:59:25 DEBUG FileSystem: looking for configuration option fs.file.impl
22/08/12 14:59:25 DEBUG FileSystem: Looking in service filesystems for implementation class
22/08/12 14:59:25 DEBUG FileSystem: FS for file is class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
22/08/12 14:59:25 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
22/08/12 14:59:25 INFO SharedState: Warehouse path is 'file:/home/luke/pp/pachyderm/spark/spark-warehouse'.
22/08/12 14:59:25 DEBUG FsUrlStreamHandlerFactory: Creating handler for protocol jar
22/08/12 14:59:25 DEBUG FileSystem: Looking for FS supporting jar
22/08/12 14:59:25 DEBUG FileSystem: looking for configuration option fs.jar.impl
22/08/12 14:59:25 DEBUG FileSystem: Looking in service filesystems for implementation class
22/08/12 14:59:25 DEBUG FsUrlStreamHandlerFactory: Unknown protocol jar, delegating to default implementation
22/08/12 14:59:26 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$pythonToJava$1
22/08/12 14:59:26 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$pythonToJava$1) is now cleaned +++
22/08/12 14:59:26 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$toJavaArray$1
22/08/12 14:59:26 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$toJavaArray$1) is now cleaned +++
22/08/12 14:59:26 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$applySchemaToPythonRDD$1
22/08/12 14:59:26 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$applySchemaToPythonRDD$1) is now cleaned +++
22/08/12 14:59:26 DEBUG CatalystSqlParser: Parsing command: spark_grouping_id
22/08/12 14:59:28 DEBUG WholeStageCodegenExec:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIteratorForCodegenStage1(references);
/* 003 */ }
/* 004 */
/* 005 */ // codegenStageId=1
/* 006 */ final class GeneratedIteratorForCodegenStage1 extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 007 */ private Object[] references;
/* 008 */ private scala.collection.Iterator[] inputs;
/* 009 */ private scala.collection.Iterator rdd_input_0;
/* 010 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[] rdd_mutableStateArray_0 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[2];
/* 011 */
/* 012 */ public GeneratedIteratorForCodegenStage1(Object[] references) {
/* 013 */ this.references = references;
/* 014 */ }
/* 015 */
/* 016 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 017 */ partitionIndex = index;
/* 018 */ this.inputs = inputs;
/* 019 */ rdd_input_0 = inputs[0];
/* 020 */ rdd_mutableStateArray_0[0] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(2, 0);
/* 021 */ rdd_mutableStateArray_0[1] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(2, 64);
/* 022 */
/* 023 */ }
/* 024 */
/* 025 */ protected void processNext() throws java.io.IOException {
/* 026 */ while ( rdd_input_0.hasNext()) {
/* 027 */ InternalRow rdd_row_0 = (InternalRow) rdd_input_0.next();
/* 028 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[0] /* numOutputRows */).add(1);
/* 029 */ // common sub-expressions
/* 030 */
/* 031 */ boolean rdd_isNull_0 = rdd_row_0.isNullAt(0);
/* 032 */ long rdd_value_0 = rdd_isNull_0 ?
/* 033 */ -1L : (rdd_row_0.getLong(0));
/* 034 */ boolean project_isNull_0 = rdd_isNull_0;
/* 035 */ UTF8String project_value_0 = null;
/* 036 */ if (!rdd_isNull_0) {
/* 037 */ project_value_0 = UTF8String.fromString(String.valueOf(rdd_value_0));
/* 038 */ }
/* 039 */ boolean rdd_isNull_1 = rdd_row_0.isNullAt(1);
/* 040 */ double rdd_value_1 = rdd_isNull_1 ?
/* 041 */ -1.0 : (rdd_row_0.getDouble(1));
/* 042 */ boolean project_isNull_2 = rdd_isNull_1;
/* 043 */ UTF8String project_value_2 = null;
/* 044 */ if (!rdd_isNull_1) {
/* 045 */ project_value_2 = UTF8String.fromString(String.valueOf(rdd_value_1));
/* 046 */ }
/* 047 */ rdd_mutableStateArray_0[1].reset();
/* 048 */
/* 049 */ rdd_mutableStateArray_0[1].zeroOutNullBytes();
/* 050 */
/* 051 */ if (project_isNull_0) {
/* 052 */ rdd_mutableStateArray_0[1].setNullAt(0);
/* 053 */ } else {
/* 054 */ rdd_mutableStateArray_0[1].write(0, project_value_0);
/* 055 */ }
/* 056 */
/* 057 */ if (project_isNull_2) {
/* 058 */ rdd_mutableStateArray_0[1].setNullAt(1);
/* 059 */ } else {
/* 060 */ rdd_mutableStateArray_0[1].write(1, project_value_2);
/* 061 */ }
/* 062 */ append((rdd_mutableStateArray_0[1].getRow()));
/* 063 */ if (shouldStop()) return;
/* 064 */ }
/* 065 */ }
/* 066 */
/* 067 */ }
22/08/12 14:59:28 DEBUG CodeGenerator:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIteratorForCodegenStage1(references);
/* 003 */ }
/* 004 */
/* 005 */ // codegenStageId=1
/* 006 */ final class GeneratedIteratorForCodegenStage1 extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 007 */ private Object[] references;
/* 008 */ private scala.collection.Iterator[] inputs;
/* 009 */ private scala.collection.Iterator rdd_input_0;
/* 010 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[] rdd_mutableStateArray_0 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[2];
/* 011 */
/* 012 */ public GeneratedIteratorForCodegenStage1(Object[] references) {
/* 013 */ this.references = references;
/* 014 */ }
/* 015 */
/* 016 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 017 */ partitionIndex = index;
/* 018 */ this.inputs = inputs;
/* 019 */ rdd_input_0 = inputs[0];
/* 020 */ rdd_mutableStateArray_0[0] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(2, 0);
/* 021 */ rdd_mutableStateArray_0[1] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(2, 64);
/* 022 */
/* 023 */ }
/* 024 */
/* 025 */ protected void processNext() throws java.io.IOException {
/* 026 */ while ( rdd_input_0.hasNext()) {
/* 027 */ InternalRow rdd_row_0 = (InternalRow) rdd_input_0.next();
/* 028 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[0] /* numOutputRows */).add(1);
/* 029 */ // common sub-expressions
/* 030 */
/* 031 */ boolean rdd_isNull_0 = rdd_row_0.isNullAt(0);
/* 032 */ long rdd_value_0 = rdd_isNull_0 ?
/* 033 */ -1L : (rdd_row_0.getLong(0));
/* 034 */ boolean project_isNull_0 = rdd_isNull_0;
/* 035 */ UTF8String project_value_0 = null;
/* 036 */ if (!rdd_isNull_0) {
/* 037 */ project_value_0 = UTF8String.fromString(String.valueOf(rdd_value_0));
/* 038 */ }
/* 039 */ boolean rdd_isNull_1 = rdd_row_0.isNullAt(1);
/* 040 */ double rdd_value_1 = rdd_isNull_1 ?
/* 041 */ -1.0 : (rdd_row_0.getDouble(1));
/* 042 */ boolean project_isNull_2 = rdd_isNull_1;
/* 043 */ UTF8String project_value_2 = null;
/* 044 */ if (!rdd_isNull_1) {
/* 045 */ project_value_2 = UTF8String.fromString(String.valueOf(rdd_value_1));
/* 046 */ }
/* 047 */ rdd_mutableStateArray_0[1].reset();
/* 048 */
/* 049 */ rdd_mutableStateArray_0[1].zeroOutNullBytes();
/* 050 */
/* 051 */ if (project_isNull_0) {
/* 052 */ rdd_mutableStateArray_0[1].setNullAt(0);
/* 053 */ } else {
/* 054 */ rdd_mutableStateArray_0[1].write(0, project_value_0);
/* 055 */ }
/* 056 */
/* 057 */ if (project_isNull_2) {
/* 058 */ rdd_mutableStateArray_0[1].setNullAt(1);
/* 059 */ } else {
/* 060 */ rdd_mutableStateArray_0[1].write(1, project_value_2);
/* 061 */ }
/* 062 */ append((rdd_mutableStateArray_0[1].getRow()));
/* 063 */ if (shouldStop()) return;
/* 064 */ }
/* 065 */ }
/* 066 */
/* 067 */ }
22/08/12 14:59:28 INFO CodeGenerator: Code generated in 167.33332 ms
22/08/12 14:59:28 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$doExecute$4$adapted
22/08/12 14:59:28 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$doExecute$4$adapted) is now cleaned +++
22/08/12 14:59:28 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$executeTake$2
22/08/12 14:59:28 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$executeTake$2) is now cleaned +++
22/08/12 14:59:28 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$runJob$5
22/08/12 14:59:28 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$runJob$5) is now cleaned +++
22/08/12 14:59:28 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:0
22/08/12 14:59:28 DEBUG DAGScheduler: eagerlyComputePartitionsForRddAndAncestors for RDD 6 took 0.001078 seconds
22/08/12 14:59:28 DEBUG DAGScheduler: Merging stage rdd profiles: Set()
22/08/12 14:59:28 INFO DAGScheduler: Got job 0 (showString at NativeMethodAccessorImpl.java:0) with 1 output partitions
22/08/12 14:59:28 INFO DAGScheduler: Final stage: ResultStage 0 (showString at NativeMethodAccessorImpl.java:0)
22/08/12 14:59:28 INFO DAGScheduler: Parents of final stage: List()
22/08/12 14:59:28 INFO DAGScheduler: Missing parents: List()
22/08/12 14:59:28 DEBUG DAGScheduler: submitStage(ResultStage 0 (name=showString at NativeMethodAccessorImpl.java:0;jobs=0))
22/08/12 14:59:28 DEBUG DAGScheduler: missing: List()
22/08/12 14:59:28 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0), which has no missing parents
22/08/12 14:59:28 DEBUG DAGScheduler: submitMissingTasks(ResultStage 0)
22/08/12 14:59:28 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 12.4 KiB, free 434.4 MiB)
22/08/12 14:59:28 DEBUG BlockManager: Put block broadcast_0 locally took 32 ms
22/08/12 14:59:28 DEBUG BlockManager: Putting block broadcast_0 without replication took 34 ms
22/08/12 14:59:28 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 6.6 KiB, free 434.4 MiB)
22/08/12 14:59:28 DEBUG BlockManagerMasterEndpoint: Updating block info on master broadcast_0_piece0 for BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 14:59:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.1.255.235:44159 (size: 6.6 KiB, free: 434.4 MiB)
22/08/12 14:59:28 DEBUG BlockManagerMaster: Updated info of block broadcast_0_piece0
22/08/12 14:59:28 DEBUG BlockManager: Told master about block broadcast_0_piece0
22/08/12 14:59:28 DEBUG BlockManager: Put block broadcast_0_piece0 locally took 7 ms
22/08/12 14:59:28 DEBUG BlockManager: Putting block broadcast_0_piece0 without replication took 7 ms
22/08/12 14:59:28 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1513
22/08/12 14:59:28 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
22/08/12 14:59:28 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
22/08/12 14:59:28 DEBUG TaskSetManager: Epoch for TaskSet 0.0: 0
22/08/12 14:59:28 DEBUG TaskSetManager: Adding pending tasks took 2 ms
22/08/12 14:59:28 DEBUG TaskSetManager: Valid locality levels for TaskSet 0.0: NO_PREF, ANY
22/08/12 14:59:28 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0.0, runningTasks: 0
22/08/12 14:59:28 DEBUG TaskSetManager: Valid locality levels for TaskSet 0.0: NO_PREF, ANY
22/08/12 14:59:28 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.1.255.235, executor driver, partition 0, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:28 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
22/08/12 14:59:28 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
22/08/12 14:59:28 DEBUG ExecutorMetricsPoller: stageTCMP: (0, 0) -> 1
22/08/12 14:59:28 DEBUG BlockManager: Getting local block broadcast_0
22/08/12 14:59:28 DEBUG BlockManager: Level for block broadcast_0 is StorageLevel(disk, memory, deserialized, 1 replicas)
22/08/12 14:59:29 INFO PythonRunner: Times: total = 406, boot = 351, init = 55, finish = 0
22/08/12 14:59:29 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1789 bytes result sent to driver
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (0, 0) -> 0
22/08/12 14:59:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 546 ms on 10.1.255.235 (executor driver) (1/1)
22/08/12 14:59:29 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
22/08/12 14:59:29 INFO PythonAccumulatorV2: Connected to AccumulatorServer at host: 127.0.0.1 port: 35829
22/08/12 14:59:29 INFO DAGScheduler: ResultStage 0 (showString at NativeMethodAccessorImpl.java:0) finished in 0.734 s
22/08/12 14:59:29 DEBUG DAGScheduler: After removal of stage 0, remaining stages = 0
22/08/12 14:59:29 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
22/08/12 14:59:29 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
22/08/12 14:59:29 INFO DAGScheduler: Job 0 finished: showString at NativeMethodAccessorImpl.java:0, took 0.769170 s
22/08/12 14:59:29 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$executeTake$2
22/08/12 14:59:29 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$executeTake$2) is now cleaned +++
22/08/12 14:59:29 DEBUG ClosureCleaner: Cleaning indylambda closure: $anonfun$runJob$5
22/08/12 14:59:29 DEBUG ClosureCleaner: +++ indylambda closure ($anonfun$runJob$5) is now cleaned +++
22/08/12 14:59:29 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:0
22/08/12 14:59:29 DEBUG DAGScheduler: eagerlyComputePartitionsForRddAndAncestors for RDD 6 took 0.000171 seconds
22/08/12 14:59:29 DEBUG DAGScheduler: Merging stage rdd profiles: Set()
22/08/12 14:59:29 INFO DAGScheduler: Got job 1 (showString at NativeMethodAccessorImpl.java:0) with 4 output partitions
22/08/12 14:59:29 INFO DAGScheduler: Final stage: ResultStage 1 (showString at NativeMethodAccessorImpl.java:0)
22/08/12 14:59:29 INFO DAGScheduler: Parents of final stage: List()
22/08/12 14:59:29 INFO DAGScheduler: Missing parents: List()
22/08/12 14:59:29 DEBUG DAGScheduler: submitStage(ResultStage 1 (name=showString at NativeMethodAccessorImpl.java:0;jobs=1))
22/08/12 14:59:29 DEBUG DAGScheduler: missing: List()
22/08/12 14:59:29 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0), which has no missing parents
22/08/12 14:59:29 DEBUG DAGScheduler: submitMissingTasks(ResultStage 1)
22/08/12 14:59:29 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.4 KiB, free 434.4 MiB)
22/08/12 14:59:29 DEBUG BlockManager: Put block broadcast_1 locally took 1 ms
22/08/12 14:59:29 DEBUG BlockManager: Putting block broadcast_1 without replication took 1 ms
22/08/12 14:59:29 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 6.6 KiB, free 434.4 MiB)
22/08/12 14:59:29 DEBUG BlockManagerMasterEndpoint: Updating block info on master broadcast_1_piece0 for BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 14:59:29 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.1.255.235:44159 (size: 6.6 KiB, free: 434.4 MiB)
22/08/12 14:59:29 DEBUG BlockManagerMaster: Updated info of block broadcast_1_piece0
22/08/12 14:59:29 DEBUG BlockManager: Told master about block broadcast_1_piece0
22/08/12 14:59:29 DEBUG BlockManager: Put block broadcast_1_piece0 locally took 3 ms
22/08/12 14:59:29 DEBUG BlockManager: Putting block broadcast_1_piece0 without replication took 3 ms
22/08/12 14:59:29 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1513
22/08/12 14:59:29 INFO DAGScheduler: Submitting 4 missing tasks from ResultStage 1 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(1, 2, 3, 4))
22/08/12 14:59:29 INFO TaskSchedulerImpl: Adding task set 1.0 with 4 tasks resource profile 0
22/08/12 14:59:29 DEBUG TaskSetManager: Epoch for TaskSet 1.0: 0
22/08/12 14:59:29 DEBUG TaskSetManager: Adding pending tasks took 0 ms
22/08/12 14:59:29 DEBUG TaskSetManager: Valid locality levels for TaskSet 1.0: NO_PREF, ANY
22/08/12 14:59:29 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_1.0, runningTasks: 0
22/08/12 14:59:29 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (10.1.255.235, executor driver, partition 1, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 2) (10.1.255.235, executor driver, partition 2, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 3) (10.1.255.235, executor driver, partition 3, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 4) (10.1.255.235, executor driver, partition 4, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY
22/08/12 14:59:29 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
22/08/12 14:59:29 INFO Executor: Running task 1.0 in stage 1.0 (TID 2)
22/08/12 14:59:29 INFO Executor: Running task 2.0 in stage 1.0 (TID 3)
22/08/12 14:59:29 INFO Executor: Running task 3.0 in stage 1.0 (TID 4)
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 1
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 2
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 3
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 4
22/08/12 14:59:29 DEBUG BlockManager: Getting local block broadcast_1
22/08/12 14:59:29 DEBUG BlockManager: Level for block broadcast_1 is StorageLevel(disk, memory, deserialized, 1 replicas)
22/08/12 14:59:29 INFO PythonRunner: Times: total = 84, boot = 6, init = 78, finish = 0
22/08/12 14:59:29 INFO Executor: Finished task 2.0 in stage 1.0 (TID 3). 1789 bytes result sent to driver
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 3
22/08/12 14:59:29 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 3) in 111 ms on 10.1.255.235 (executor driver) (1/4)
22/08/12 14:59:29 INFO PythonRunner: Times: total = 94, boot = -82, init = 176, finish = 0
22/08/12 14:59:29 INFO Executor: Finished task 3.0 in stage 1.0 (TID 4). 1789 bytes result sent to driver
22/08/12 14:59:29 INFO PythonRunner: Times: total = 93, boot = 10, init = 83, finish = 0
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 2
22/08/12 14:59:29 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 4) in 117 ms on 10.1.255.235 (executor driver) (2/4)
22/08/12 14:59:29 INFO Executor: Finished task 1.0 in stage 1.0 (TID 2). 1789 bytes result sent to driver
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 1
22/08/12 14:59:29 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 2) in 124 ms on 10.1.255.235 (executor driver) (3/4)
22/08/12 14:59:29 INFO PythonRunner: Times: total = 106, boot = 4, init = 101, finish = 1
22/08/12 14:59:29 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1789 bytes result sent to driver
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (1, 0) -> 0
22/08/12 14:59:29 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 134 ms on 10.1.255.235 (executor driver) (4/4)
22/08/12 14:59:29 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
22/08/12 14:59:29 INFO DAGScheduler: ResultStage 1 (showString at NativeMethodAccessorImpl.java:0) finished in 0.150 s
22/08/12 14:59:29 DEBUG DAGScheduler: After removal of stage 1, remaining stages = 0
22/08/12 14:59:29 INFO DAGScheduler: Job 1 is finished. Cancelling potential speculative or zombie tasks for this job
22/08/12 14:59:29 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
22/08/12 14:59:29 INFO DAGScheduler: Job 1 finished: showString at NativeMethodAccessorImpl.java:0, took 0.156063 s
22/08/12 14:59:29 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:0
22/08/12 14:59:29 DEBUG DAGScheduler: eagerlyComputePartitionsForRddAndAncestors for RDD 6 took 0.000205 seconds
22/08/12 14:59:29 DEBUG DAGScheduler: Merging stage rdd profiles: Set()
22/08/12 14:59:29 INFO DAGScheduler: Got job 2 (showString at NativeMethodAccessorImpl.java:0) with 11 output partitions
22/08/12 14:59:29 INFO DAGScheduler: Final stage: ResultStage 2 (showString at NativeMethodAccessorImpl.java:0)
22/08/12 14:59:29 INFO DAGScheduler: Parents of final stage: List()
22/08/12 14:59:29 INFO DAGScheduler: Missing parents: List()
22/08/12 14:59:29 DEBUG DAGScheduler: submitStage(ResultStage 2 (name=showString at NativeMethodAccessorImpl.java:0;jobs=2))
22/08/12 14:59:29 DEBUG DAGScheduler: missing: List()
22/08/12 14:59:29 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0), which has no missing parents
22/08/12 14:59:29 DEBUG DAGScheduler: submitMissingTasks(ResultStage 2)
22/08/12 14:59:29 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 12.4 KiB, free 434.4 MiB)
22/08/12 14:59:29 DEBUG BlockManager: Put block broadcast_2 locally took 1 ms
22/08/12 14:59:29 DEBUG BlockManager: Putting block broadcast_2 without replication took 1 ms
22/08/12 14:59:29 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 6.6 KiB, free 434.3 MiB)
22/08/12 14:59:29 DEBUG BlockManagerMasterEndpoint: Updating block info on master broadcast_2_piece0 for BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 14:59:29 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.1.255.235:44159 (size: 6.6 KiB, free: 434.4 MiB)
22/08/12 14:59:29 DEBUG BlockManagerMaster: Updated info of block broadcast_2_piece0
22/08/12 14:59:29 DEBUG BlockManager: Told master about block broadcast_2_piece0
22/08/12 14:59:29 DEBUG BlockManager: Put block broadcast_2_piece0 locally took 2 ms
22/08/12 14:59:29 DEBUG BlockManager: Putting block broadcast_2_piece0 without replication took 2 ms
22/08/12 14:59:29 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1513
22/08/12 14:59:29 INFO DAGScheduler: Submitting 11 missing tasks from ResultStage 2 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15))
22/08/12 14:59:29 INFO TaskSchedulerImpl: Adding task set 2.0 with 11 tasks resource profile 0
22/08/12 14:59:29 DEBUG TaskSetManager: Epoch for TaskSet 2.0: 0
22/08/12 14:59:29 DEBUG TaskSetManager: Adding pending tasks took 0 ms
22/08/12 14:59:29 DEBUG TaskSetManager: Valid locality levels for TaskSet 2.0: NO_PREF, ANY
22/08/12 14:59:29 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_2.0, runningTasks: 0
22/08/12 14:59:29 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 5) (10.1.255.235, executor driver, partition 5, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 6) (10.1.255.235, executor driver, partition 6, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 2.0 in stage 2.0 (TID 7) (10.1.255.235, executor driver, partition 7, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 3.0 in stage 2.0 (TID 8) (10.1.255.235, executor driver, partition 8, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 4.0 in stage 2.0 (TID 9) (10.1.255.235, executor driver, partition 9, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map()
22/08/12 14:59:29 INFO TaskSetManager: Starting task 5.0 in stage 2.0 (TID 10) (10.1.255.235, executor driver, partition 10, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map() | |
22/08/12 14:59:29 INFO TaskSetManager: Starting task 6.0 in stage 2.0 (TID 11) (10.1.255.235, executor driver, partition 11, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map() | |
22/08/12 14:59:29 INFO TaskSetManager: Starting task 7.0 in stage 2.0 (TID 12) (10.1.255.235, executor driver, partition 12, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map() | |
22/08/12 14:59:29 INFO TaskSetManager: Starting task 8.0 in stage 2.0 (TID 13) (10.1.255.235, executor driver, partition 13, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map() | |
22/08/12 14:59:29 INFO TaskSetManager: Starting task 9.0 in stage 2.0 (TID 14) (10.1.255.235, executor driver, partition 14, PROCESS_LOCAL, 4433 bytes) taskResourceAssignments Map() | |
22/08/12 14:59:29 INFO TaskSetManager: Starting task 10.0 in stage 2.0 (TID 15) (10.1.255.235, executor driver, partition 15, PROCESS_LOCAL, 4471 bytes) taskResourceAssignments Map() | |
22/08/12 14:59:29 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so moving to locality level ANY | |
22/08/12 14:59:29 INFO Executor: Running task 1.0 in stage 2.0 (TID 6) | |
22/08/12 14:59:29 INFO Executor: Running task 2.0 in stage 2.0 (TID 7) | |
22/08/12 14:59:29 INFO Executor: Running task 3.0 in stage 2.0 (TID 8) | |
22/08/12 14:59:29 INFO Executor: Running task 0.0 in stage 2.0 (TID 5) | |
22/08/12 14:59:29 INFO Executor: Running task 4.0 in stage 2.0 (TID 9) | |
22/08/12 14:59:29 INFO Executor: Running task 5.0 in stage 2.0 (TID 10) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 1 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 2 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 4 | |
22/08/12 14:59:29 INFO Executor: Running task 7.0 in stage 2.0 (TID 12) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 3 | |
22/08/12 14:59:29 INFO Executor: Running task 6.0 in stage 2.0 (TID 11) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 5 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 6 | |
22/08/12 14:59:29 INFO Executor: Running task 8.0 in stage 2.0 (TID 13) | |
22/08/12 14:59:29 INFO Executor: Running task 9.0 in stage 2.0 (TID 14) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 7 | |
22/08/12 14:59:29 DEBUG BlockManager: Getting local block broadcast_2 | |
22/08/12 14:59:29 DEBUG BlockManager: Level for block broadcast_2 is StorageLevel(disk, memory, deserialized, 1 replicas) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 8 | |
22/08/12 14:59:29 INFO Executor: Running task 10.0 in stage 2.0 (TID 15) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 9 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 10 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 11 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 63, boot = -58, init = 121, finish = 0 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 62, boot = -51, init = 113, finish = 0 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 64, boot = -57, init = 121, finish = 0 | |
22/08/12 14:59:29 INFO Executor: Finished task 2.0 in stage 2.0 (TID 7). 1789 bytes result sent to driver | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 10 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 2.0 in stage 2.0 (TID 7) in 92 ms on 10.1.255.235 (executor driver) (1/11) | |
22/08/12 14:59:29 INFO Executor: Finished task 8.0 in stage 2.0 (TID 13). 1789 bytes result sent to driver | |
22/08/12 14:59:29 INFO Executor: Finished task 1.0 in stage 2.0 (TID 6). 1789 bytes result sent to driver | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 76, boot = 6, init = 70, finish = 0 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 9 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 8 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 8.0 in stage 2.0 (TID 13) in 99 ms on 10.1.255.235 (executor driver) (2/11) | |
22/08/12 14:59:29 INFO Executor: Finished task 9.0 in stage 2.0 (TID 14). 1789 bytes result sent to driver | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 7 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 6) in 107 ms on 10.1.255.235 (executor driver) (3/11) | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 9.0 in stage 2.0 (TID 14) in 106 ms on 10.1.255.235 (executor driver) (4/11) | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 102, boot = -62, init = 164, finish = 0 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 97, boot = 11, init = 86, finish = 0 | |
22/08/12 14:59:29 INFO Executor: Finished task 7.0 in stage 2.0 (TID 12). 1789 bytes result sent to driver | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 6 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 110, boot = 19, init = 90, finish = 1 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 99, boot = 7, init = 91, finish = 1 | |
22/08/12 14:59:29 INFO Executor: Finished task 5.0 in stage 2.0 (TID 10). 1789 bytes result sent to driver | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 7.0 in stage 2.0 (TID 12) in 139 ms on 10.1.255.235 (executor driver) (5/11) | |
22/08/12 14:59:29 INFO Executor: Finished task 6.0 in stage 2.0 (TID 11). 1789 bytes result sent to driver | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 5 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 4 | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 115, boot = 18, init = 97, finish = 0 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 6.0 in stage 2.0 (TID 11) in 144 ms on 10.1.255.235 (executor driver) (6/11) | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 119, boot = 14, init = 105, finish = 0 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 5.0 in stage 2.0 (TID 10) in 148 ms on 10.1.255.235 (executor driver) (7/11) | |
22/08/12 14:59:29 INFO Executor: Finished task 10.0 in stage 2.0 (TID 15). 1830 bytes result sent to driver | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 3 | |
22/08/12 14:59:29 INFO Executor: Finished task 0.0 in stage 2.0 (TID 5). 1789 bytes result sent to driver | |
22/08/12 14:59:29 INFO Executor: Finished task 4.0 in stage 2.0 (TID 9). 1789 bytes result sent to driver | |
22/08/12 14:59:29 INFO PythonRunner: Times: total = 124, boot = 24, init = 100, finish = 0 | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 2 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 10.0 in stage 2.0 (TID 15) in 152 ms on 10.1.255.235 (executor driver) (8/11) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 1 | |
22/08/12 14:59:29 INFO Executor: Finished task 3.0 in stage 2.0 (TID 8). 1789 bytes result sent to driver | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 5) in 161 ms on 10.1.255.235 (executor driver) (9/11) | |
22/08/12 14:59:29 DEBUG ExecutorMetricsPoller: stageTCMP: (2, 0) -> 0 | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 4.0 in stage 2.0 (TID 9) in 159 ms on 10.1.255.235 (executor driver) (10/11) | |
22/08/12 14:59:29 INFO TaskSetManager: Finished task 3.0 in stage 2.0 (TID 8) in 162 ms on 10.1.255.235 (executor driver) (11/11) | |
22/08/12 14:59:29 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool | |
22/08/12 14:59:29 INFO DAGScheduler: ResultStage 2 (showString at NativeMethodAccessorImpl.java:0) finished in 0.177 s | |
22/08/12 14:59:29 DEBUG DAGScheduler: After removal of stage 2, remaining stages = 0 | |
22/08/12 14:59:29 INFO DAGScheduler: Job 2 is finished. Cancelling potential speculative or zombie tasks for this job | |
22/08/12 14:59:29 INFO TaskSchedulerImpl: Killing all running tasks in stage 2: Stage finished | |
22/08/12 14:59:29 INFO DAGScheduler: Job 2 finished: showString at NativeMethodAccessorImpl.java:0, took 0.189328 s | |
22/08/12 14:59:29 DEBUG GenerateSafeProjection: code for createexternalrow(input[0, string, true].toString, input[1, string, true].toString, StructField(a,StringType,true), StructField(b,StringType,true)): | |
/* 001 */ public java.lang.Object generate(Object[] references) { | |
/* 002 */ return new SpecificSafeProjection(references); | |
/* 003 */ } | |
/* 004 */ | |
/* 005 */ class SpecificSafeProjection extends org.apache.spark.sql.catalyst.expressions.codegen.BaseProjection { | |
/* 006 */ | |
/* 007 */ private Object[] references; | |
/* 008 */ private InternalRow mutableRow; | |
/* 009 */ | |
/* 010 */ | |
/* 011 */ public SpecificSafeProjection(Object[] references) { | |
/* 012 */ this.references = references; | |
/* 013 */ mutableRow = (InternalRow) references[references.length - 1]; | |
/* 014 */ | |
/* 015 */ } | |
/* 016 */ | |
/* 017 */ public void initialize(int partitionIndex) { | |
/* 018 */ | |
/* 019 */ } | |
/* 020 */ | |
/* 021 */ public java.lang.Object apply(java.lang.Object _i) { | |
/* 022 */ InternalRow i = (InternalRow) _i; | |
/* 023 */ org.apache.spark.sql.Row value_5 = CreateExternalRow_0(i); | |
/* 024 */ if (false) { | |
/* 025 */ mutableRow.setNullAt(0); | |
/* 026 */ } else { | |
/* 027 */ | |
/* 028 */ mutableRow.update(0, value_5); | |
/* 029 */ } | |
/* 030 */ | |
/* 031 */ return mutableRow; | |
/* 032 */ } | |
/* 033 */ | |
/* 034 */ | |
/* 035 */ private org.apache.spark.sql.Row CreateExternalRow_0(InternalRow i) { | |
/* 036 */ Object[] values_0 = new Object[2]; | |
/* 037 */ | |
/* 038 */ boolean isNull_2 = i.isNullAt(0); | |
/* 039 */ UTF8String value_2 = isNull_2 ? | |
/* 040 */ null : (i.getUTF8String(0)); | |
/* 041 */ boolean isNull_1 = true; | |
/* 042 */ java.lang.String value_1 = null; | |
/* 043 */ if (!isNull_2) { | |
/* 044 */ isNull_1 = false; | |
/* 045 */ if (!isNull_1) { | |
/* 046 */ | |
/* 047 */ Object funcResult_0 = null; | |
/* 048 */ funcResult_0 = value_2.toString(); | |
/* 049 */ value_1 = (java.lang.String) funcResult_0; | |
/* 050 */ | |
/* 051 */ } | |
/* 052 */ } | |
/* 053 */ if (isNull_1) { | |
/* 054 */ values_0[0] = null; | |
/* 055 */ } else { | |
/* 056 */ values_0[0] = value_1; | |
/* 057 */ } | |
/* 058 */ | |
/* 059 */ boolean isNull_4 = i.isNullAt(1); | |
/* 060 */ UTF8String value_4 = isNull_4 ? | |
/* 061 */ null : (i.getUTF8String(1)); | |
/* 062 */ boolean isNull_3 = true; | |
/* 063 */ java.lang.String value_3 = null; | |
/* 064 */ if (!isNull_4) { | |
/* 065 */ isNull_3 = false; | |
/* 066 */ if (!isNull_3) { | |
/* 067 */ | |
/* 068 */ Object funcResult_1 = null; | |
/* 069 */ funcResult_1 = value_4.toString(); | |
/* 070 */ value_3 = (java.lang.String) funcResult_1; | |
/* 071 */ | |
/* 072 */ } | |
/* 073 */ } | |
/* 074 */ if (isNull_3) { | |
/* 075 */ values_0[1] = null; | |
/* 076 */ } else { | |
/* 077 */ values_0[1] = value_3; | |
/* 078 */ } | |
/* 079 */ | |
/* 080 */ final org.apache.spark.sql.Row value_0 = new org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema(values_0, ((org.apache.spark.sql.types.StructType) references[0] /* schema */)); | |
/* 081 */ | |
/* 082 */ return value_0; | |
/* 083 */ } | |
/* 084 */ | |
/* 085 */ } | |
22/08/12 14:59:29 DEBUG CodeGenerator: | |
/* 001 */ public java.lang.Object generate(Object[] references) { | |
/* 002 */ return new SpecificSafeProjection(references); | |
/* 003 */ } | |
/* 004 */ | |
/* 005 */ class SpecificSafeProjection extends org.apache.spark.sql.catalyst.expressions.codegen.BaseProjection { | |
/* 006 */ | |
/* 007 */ private Object[] references; | |
/* 008 */ private InternalRow mutableRow; | |
/* 009 */ | |
/* 010 */ | |
/* 011 */ public SpecificSafeProjection(Object[] references) { | |
/* 012 */ this.references = references; | |
/* 013 */ mutableRow = (InternalRow) references[references.length - 1]; | |
/* 014 */ | |
/* 015 */ } | |
/* 016 */ | |
/* 017 */ public void initialize(int partitionIndex) { | |
/* 018 */ | |
/* 019 */ } | |
/* 020 */ | |
/* 021 */ public java.lang.Object apply(java.lang.Object _i) { | |
/* 022 */ InternalRow i = (InternalRow) _i; | |
/* 023 */ org.apache.spark.sql.Row value_5 = CreateExternalRow_0(i); | |
/* 024 */ if (false) { | |
/* 025 */ mutableRow.setNullAt(0); | |
/* 026 */ } else { | |
/* 027 */ | |
/* 028 */ mutableRow.update(0, value_5); | |
/* 029 */ } | |
/* 030 */ | |
/* 031 */ return mutableRow; | |
/* 032 */ } | |
/* 033 */ | |
/* 034 */ | |
/* 035 */ private org.apache.spark.sql.Row CreateExternalRow_0(InternalRow i) { | |
/* 036 */ Object[] values_0 = new Object[2]; | |
/* 037 */ | |
/* 038 */ boolean isNull_2 = i.isNullAt(0); | |
/* 039 */ UTF8String value_2 = isNull_2 ? | |
/* 040 */ null : (i.getUTF8String(0)); | |
/* 041 */ boolean isNull_1 = true; | |
/* 042 */ java.lang.String value_1 = null; | |
/* 043 */ if (!isNull_2) { | |
/* 044 */ isNull_1 = false; | |
/* 045 */ if (!isNull_1) { | |
/* 046 */ | |
/* 047 */ Object funcResult_0 = null; | |
/* 048 */ funcResult_0 = value_2.toString(); | |
/* 049 */ value_1 = (java.lang.String) funcResult_0; | |
/* 050 */ | |
/* 051 */ } | |
/* 052 */ } | |
/* 053 */ if (isNull_1) { | |
/* 054 */ values_0[0] = null; | |
/* 055 */ } else { | |
/* 056 */ values_0[0] = value_1; | |
/* 057 */ } | |
/* 058 */ | |
/* 059 */ boolean isNull_4 = i.isNullAt(1); | |
/* 060 */ UTF8String value_4 = isNull_4 ? | |
/* 061 */ null : (i.getUTF8String(1)); | |
/* 062 */ boolean isNull_3 = true; | |
/* 063 */ java.lang.String value_3 = null; | |
/* 064 */ if (!isNull_4) { | |
/* 065 */ isNull_3 = false; | |
/* 066 */ if (!isNull_3) { | |
/* 067 */ | |
/* 068 */ Object funcResult_1 = null; | |
/* 069 */ funcResult_1 = value_4.toString(); | |
/* 070 */ value_3 = (java.lang.String) funcResult_1; | |
/* 071 */ | |
/* 072 */ } | |
/* 073 */ } | |
/* 074 */ if (isNull_3) { | |
/* 075 */ values_0[1] = null; | |
/* 076 */ } else { | |
/* 077 */ values_0[1] = value_3; | |
/* 078 */ } | |
/* 079 */ | |
/* 080 */ final org.apache.spark.sql.Row value_0 = new org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema(values_0, ((org.apache.spark.sql.types.StructType) references[0] /* schema */)); | |
/* 081 */ | |
/* 082 */ return value_0; | |
/* 083 */ } | |
/* 084 */ | |
/* 085 */ } | |
22/08/12 14:59:29 INFO CodeGenerator: Code generated in 17.461014 ms | |
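The generated `SpecificSafeProjection` above is verbose, but its `CreateExternalRow_0` body reduces to a null-safe conversion of each of the two columns from Catalyst's `UTF8String` to a Java `String`. As a plain-Python sketch of that per-row logic (a hypothetical helper for illustration, not a Spark API):

```python
def create_external_row(internal_row):
    # Mirror of the generated CreateExternalRow_0 above: for each column,
    # propagate null (isNull_* branches), otherwise convert the value to
    # a string (the UTF8String.toString() step), then build the row.
    values = [None if v is None else str(v) for v in internal_row]
    return tuple(values)
```

The `if (false)` / nested `if (!isNull_1)` dead branches in the Java are a known artifact of Spark's template-based code generation; Janino's compiler eliminates them.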
+---+---+ | |
| a| b| | |
+---+---+ | |
| 1|2.0| | |
+---+---+ | |
22/08/12 14:59:29 DEBUG FileSystem: Starting: Acquiring creator semaphore for s3a://master.rando2/nonemptyprefix3 | |
22/08/12 14:59:29 DEBUG FileSystem: Acquiring creator semaphore for s3a://master.rando2/nonemptyprefix3: duration 0:00.000s | |
22/08/12 14:59:29 DEBUG FileSystem: Starting: Creating FS s3a://master.rando2/nonemptyprefix3 | |
22/08/12 14:59:29 DEBUG FileSystem: Looking for FS supporting s3a | |
22/08/12 14:59:29 DEBUG FileSystem: looking for configuration option fs.s3a.impl | |
22/08/12 14:59:29 DEBUG Configuration: Reloading 9 existing configurations | |
22/08/12 14:59:29 DEBUG FileSystem: Filesystem s3a defined in configuration option | |
22/08/12 14:59:29 DEBUG FileSystem: FS for s3a is class org.apache.hadoop.fs.s3a.S3AFileSystem | |
22/08/12 14:59:29 DEBUG AbstractService: Service: NoopAuditor entered state INITED | |
22/08/12 14:59:29 DEBUG AbstractService: Service NoopAuditor is started | |
22/08/12 14:59:29 DEBUG S3AFileSystem: Initializing S3AFileSystem for master.rando2 | |
22/08/12 14:59:29 DEBUG S3AUtils: Propagating entries under fs.s3a.bucket.master.rando2. | |
22/08/12 14:59:30 DEBUG S3AUtils: Data is unencrypted | |
22/08/12 14:59:30 DEBUG S3ARetryPolicy: Retrying on recoverable AWS failures 7 times with an initial interval of 500ms | |
22/08/12 14:59:30 DEBUG MetricsSystemImpl: from system property: null | |
22/08/12 14:59:30 DEBUG MetricsSystemImpl: from environment variable: null | |
22/08/12 14:59:30 DEBUG MetricsConfig: Could not locate file hadoop-metrics2-s3a-file-system.properties | |
org.apache.hadoop.shaded.org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileLocator@5cbfc157[fileName=hadoop-metrics2-s3a-file-system.properties,basePath=<null>,sourceURL=,encoding=<null>,fileSystem=<null>,locationStrategy=<null>] | |
at org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileLocatorUtils.locateOrThrow(FileLocatorUtils.java:346) | |
at org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileHandler.load(FileHandler.java:972) | |
at org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileHandler.load(FileHandler.java:702) | |
at org.apache.hadoop.metrics2.impl.MetricsConfig.loadFirst(MetricsConfig.java:118) | |
at org.apache.hadoop.metrics2.impl.MetricsConfig.create(MetricsConfig.java:97) | |
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:482) | |
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188) | |
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163) | |
at org.apache.hadoop.fs.s3a.S3AInstrumentation.getMetricsSystem(S3AInstrumentation.java:251) | |
at org.apache.hadoop.fs.s3a.S3AInstrumentation.registerAsMetricsSource(S3AInstrumentation.java:264) | |
at org.apache.hadoop.fs.s3a.S3AInstrumentation.<init>(S3AInstrumentation.java:235) | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:466) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469) | |
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) | |
at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:461) | |
at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:558) | |
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390) | |
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:363) | |
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239) | |
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:793) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.base/java.lang.reflect.Method.invoke(Method.java:566) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) | |
at py4j.Gateway.invoke(Gateway.java:282) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) | |
at py4j.ClientServerConnection.run(ClientServerConnection.java:106) | |
at java.base/java.lang.Thread.run(Thread.java:829) | |
22/08/12 14:59:30 DEBUG MetricsConfig: Could not locate file hadoop-metrics2.properties | |
org.apache.hadoop.shaded.org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileLocator@31b2b6a0[fileName=hadoop-metrics2.properties,basePath=<null>,sourceURL=,encoding=<null>,fileSystem=<null>,locationStrategy=<null>] | |
at org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileLocatorUtils.locateOrThrow(FileLocatorUtils.java:346) | |
at org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileHandler.load(FileHandler.java:972) | |
at org.apache.hadoop.shaded.org.apache.commons.configuration2.io.FileHandler.load(FileHandler.java:702) | |
at org.apache.hadoop.metrics2.impl.MetricsConfig.loadFirst(MetricsConfig.java:118) | |
at org.apache.hadoop.metrics2.impl.MetricsConfig.create(MetricsConfig.java:97) | |
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:482) | |
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188) | |
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163) | |
at org.apache.hadoop.fs.s3a.S3AInstrumentation.getMetricsSystem(S3AInstrumentation.java:251) | |
at org.apache.hadoop.fs.s3a.S3AInstrumentation.registerAsMetricsSource(S3AInstrumentation.java:264) | |
at org.apache.hadoop.fs.s3a.S3AInstrumentation.<init>(S3AInstrumentation.java:235) | |
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:466) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469) | |
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) | |
at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:461) | |
at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:558) | |
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390) | |
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:363) | |
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239) | |
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:793) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.base/java.lang.reflect.Method.invoke(Method.java:566) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) | |
at py4j.Gateway.invoke(Gateway.java:282) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) | |
at py4j.ClientServerConnection.run(ClientServerConnection.java:106) | |
at java.base/java.lang.Thread.run(Thread.java:829) | |
22/08/12 14:59:30 WARN MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties | |
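The two `ConfigurationException` traces and the WARN above are harmless: Hadoop's metrics2 system falls back to built-in defaults when neither properties file is found (hence the 10-second snapshot period reported a few lines down). To silence the warning, one could place a minimal `hadoop-metrics2-s3a-file-system.properties` on the classpath; the property syntax is standard metrics2 configuration, and the value below is just the default the log already shows:

```properties
# Minimal metrics2 configuration for the s3a-file-system prefix.
# 10s matches the "Scheduled Metric snapshot period" in the log.
*.period=10
```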
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'PropertiesConfiguration' for key: period | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'PropertiesConfiguration' for key: periodMillis | |
22/08/12 14:59:30 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.metrics2.impl.MetricsSystemImpl.droppedPubAll with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Dropped updates by all sinks"}) | |
22/08/12 14:59:30 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableStat org.apache.hadoop.metrics2.impl.MetricsSystemImpl.publishStat with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Publish", "Publishing stats"}) | |
22/08/12 14:59:30 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableStat org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotStat with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Snapshot", "Snapshot stats"}) | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Updating attr cache... | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Done. # tags & metrics=10 | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Updating info cache... | |
22/08/12 14:59:30 DEBUG MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of active metrics sources, name=NumActiveSources, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of all registered metrics sources, name=NumAllSources, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of active metrics sinks, name=NumActiveSinks, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of all registered metrics sinks, name=NumAllSinks, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Dropped updates by all sinks, name=DroppedPubAll, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for publishing stats, name=PublishNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for publishing stats, name=PublishAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for snapshot stats, name=SnapshotNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for snapshot stats, name=SnapshotAvgTime, type=java.lang.Double, read-only, descriptor={}]] | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Done | |
22/08/12 14:59:30 DEBUG MBeans: Registered Hadoop:service=s3a-file-system,name=MetricsSystem,sub=Stats | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. | |
22/08/12 14:59:30 INFO MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). | |
22/08/12 14:59:30 INFO MetricsSystemImpl: s3a-file-system metrics system started | |
22/08/12 14:59:30 DEBUG MBeans: Registered Hadoop:service=s3a-file-system,name=MetricsSystem,sub=Control | |
22/08/12 14:59:30 DEBUG MetricsSystemImpl: S3AMetrics1-master.rando2, | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans | |
22/08/12 14:59:30 DEBUG MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Updating attr cache... | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Done. # tags & metrics=187 | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Updating info cache... | |
22/08/12 14:59:30 DEBUG MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=A unique identifier for the instance, name=tag.s3aFileSystemId, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Hostname from the FS URL, name=tag.bucket, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of directories created through the object store., name=directories_created, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of directories deleted through the object store., name=directories_deleted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of files copied within the object store., name=files_copied, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of bytes copied within the object store., name=files_copied_bytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of files created through the object store., name=files_created, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of files deleted from the object store., name=files_deleted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of files whose delete request was rejected, name=files_delete_rejected, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of fake directory entries created in the object store., name=fake_directories_created, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of fake directory deletes submitted to object store., name=fake_directories_deleted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Errors caught and ignored, name=ignored_errors, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of createNonRecursive(), name=op_create_non_recursive, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of hflush(), name=op_hflush, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of hsync(), name=op_hsync, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of listLocatedStatus(), name=op_list_located_status, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of open(), name=op_open, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object copy requests, name=object_copy_requests, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Objects deleted in delete requests, name=object_delete_objects, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of requests for object metadata, name=object_metadata_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object put/multipart upload completed count, name=object_put_request_completed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=number of bytes uploaded, name=object_put_bytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of S3 Select requests issued, name=object_select_requests, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of times the TCP stream was aborted, name=stream_aborted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Bytes read from an input stream in read() calls, name=stream_read_bytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes discarded by aborting an input stream, name=stream_read_bytes_discarded_in_abort, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes read and discarded when closing an input stream, name=stream_read_bytes_discarded_in_close, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of times the TCP stream was closed, name=stream_read_closed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total count of times an attempt to close an input stream was made, name=stream_read_close_operations, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of exceptions raised during input stream reads, name=stream_read_exceptions, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of readFully() operations in an input stream, name=stream_read_fully_operations, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total count of times an input stream to object store data was opened, name=stream_read_opened, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of read() operations in an input stream, name=stream_read_operations, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of incomplete read() operations in an input stream, name=stream_read_operations_incomplete, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of version mismatches encountered while reading an input stream, name=stream_read_version_mismatches, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of executed seek operations which went backwards in a stream, name=stream_read_seek_backward_operations, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes moved backwards during seek operations in an input stream, name=stream_read_bytes_backwards_on_seek, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes read and discarded during seek() in an input stream, name=stream_read_seek_bytes_discarded, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes skipped during forward seek operations an input stream, name=stream_read_seek_bytes_skipped, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of executed seek operations which went forward in an input stream, name=stream_read_seek_forward_operations, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of seek operations in an input stream, name=stream_read_seek_operations, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of times the seek policy was dynamically changed in an input stream, name=stream_read_seek_policy_changed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total count of bytes read from an input stream, name=stream_read_total_bytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of stream write failures reported,
name=stream_write_exceptions, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of failures when finalizing a multipart upload, name=stream_write_exceptions_completing_upload, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of block/partition uploads completed, name=stream_write_block_uploads, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of number of block uploads committed, name=stream_write_block_uploads_committed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of number of block uploads aborted, name=stream_write_block_uploads_aborted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of total time taken for uploads to complete, name=stream_write_total_time, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of total data uploaded, name=stream_write_total_data, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes written to output stream (including all not yet uploaded), name=stream_write_bytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of files to commit created, name=committer_commits_created, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of files committed, name=committer_commits_completed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of successful jobs, name=committer_jobs_completed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of failed jobs, name=committer_jobs_failed, type=java.lang.Long, read-only, descriptor={}], 
javax.management.MBeanAttributeInfo[description=Count of successful tasks, name=committer_tasks_completed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of failed tasks, name=committer_tasks_failed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Amount of data committed, name=committer_bytes_committed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of bytes uploaded duing commit operations, name=committer_bytes_uploaded, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of commits failed, name=committer_commits.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of commits aborted, name=committer_commits_aborted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of commits reverted, name=committer_commits_reverted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of files created under 'magic' paths, name=committer_magic_files_created, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store put one metadata path request, name=s3guard_metadatastore_put_path_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store initialization times, name=s3guard_metadatastore_initialization, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store records deleted, name=s3guard_metadatastore_record_deletes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store records read, name=s3guard_metadatastore_record_reads, type=java.lang.Long, read-only, 
descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store records written, name=s3guard_metadatastore_record_writes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store retry events, name=s3guard_metadatastore_retry, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store throttled events, name=s3guard_metadatastore_throttled, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=S3Guard metadata store authoritative directories updated from S3, name=s3guard_metadatastore_authoritative_directories_updated, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=requests made of the remote store, name=store_io_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=retried requests made of the remote store, name=store_io_retry, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Requests throttled and retried, name=store_io_throttled, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Uploader Instantiated, name=multipart_instantiated, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Part Put Operation, name=multipart_upload_part_put, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Part Put Bytes, name=multipart_upload_part_put_bytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Upload Aborted, name=multipart_upload_aborted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Upload Abort Unner Path Invoked, name=multipart_upload_abort_under_path_invoked, 
type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Upload Completed, name=multipart_upload_completed, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Upload Started, name=multipart_upload_started, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Audit access check was rejected, name=audit_access_check_failure, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Audit Span Created, name=audit_span_creation, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Audit failure/rejection, name=audit_failure, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=AWS request made, name=audit_request_execution, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Current number of active put requests, name=object_put_request_active, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=number of bytes queued for upload/being actively uploaded, name=object_put_bytes_pending, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of block/partition uploads active, name=stream_write_block_uploads_active, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Gauge of block/partitions uploads queued to be written, name=stream_write_block_uploads_pending, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Gauge of data queued to be written, name=stream_write_block_uploads_data_pending, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=gauge to indicate if client side encryption is enabled, name=client_side_encryption_enabled, 
type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Executor acquired., name=action_executor_acquired, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Executor acquired., name=action_executor_acquired.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=HEAD request., name=action_http_head_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=HEAD request., name=action_http_head_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=GET request., name=action_http_get_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=GET request., name=action_http_get_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of abort(), name=op_abort, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of abort(), name=op_abort.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of access(), name=op_access, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of access(), name=op_access.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of copyFromLocalFile(), name=op_copy_from_local_file, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of copyFromLocalFile(), name=op_copy_from_local_file.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of create(), name=op_create, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of create(), 
name=op_create.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of delete(), name=op_delete, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of delete(), name=op_delete.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of exists(), name=op_exists, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of exists(), name=op_exists.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getContentSummary(), name=op_get_content_summary, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getContentSummary(), name=op_get_content_summary.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getDelegationToken(), name=op_get_delegation_token, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getDelegationToken(), name=op_get_delegation_token.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getFileChecksum(), name=op_get_file_checksum, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getFileChecksum(), name=op_get_file_checksum.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getFileStatus(), name=op_get_file_status, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getFileStatus(), name=op_get_file_status.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of globStatus(), name=op_glob_status, type=java.lang.Long, read-only, 
descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of globStatus(), name=op_glob_status.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of isDirectory(), name=op_is_directory, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of isDirectory(), name=op_is_directory.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of isFile(), name=op_is_file, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of isFile(), name=op_is_file.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of listFiles(), name=op_list_files, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of listFiles(), name=op_list_files.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of listStatus(), name=op_list_status, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of listStatus(), name=op_list_status.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of mkdirs(), name=op_mkdirs, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of mkdirs(), name=op_mkdirs.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of rename(), name=op_rename, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of rename(), name=op_rename.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getXAttrs(Path path), name=op_xattr_get_map, type=java.lang.Long, read-only, descriptor={}], 
javax.management.MBeanAttributeInfo[description=Calls of getXAttrs(Path path), name=op_xattr_get_map.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getXAttr(Path, String), name=op_xattr_get_named, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getXAttr(Path, String), name=op_xattr_get_named.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of xattr(), name=op_xattr_get_named_map, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of xattr(), name=op_xattr_get_named_map.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getXAttrs(Path path, List<String> names), name=op_xattr_list, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Calls of getXAttrs(Path path, List<String> names), name=op_xattr_list.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object delete requests, name=object_delete_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object delete requests, name=object_delete_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object bulk delete requests, name=object_bulk_delete_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object bulk delete requests, name=object_bulk_delete_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of object listings made, name=object_list_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of object listings made, 
name=object_list_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of continued object listings made, name=object_continue_list_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of continued object listings made, name=object_continue_list_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object multipart upload initiated, name=object_multipart_initiated, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object multipart upload initiated, name=object_multipart_initiated.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object multipart upload aborted, name=object_multipart_aborted, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object multipart upload aborted, name=object_multipart_aborted.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object put/multipart upload count, name=object_put_request, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Object put/multipart upload count, name=object_put_request.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total queue duration of all block uploads, name=stream_write_queue_duration, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total queue duration of all block uploads, name=stream_write_queue_duration.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Duration Tracking of time to commit an entire job, name=committer_commit_job, type=java.lang.Long, read-only, descriptor={}], 
javax.management.MBeanAttributeInfo[description=Duration Tracking of time to commit an entire job, name=committer_commit_job.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Duration Tracking of time to materialize a file in job commit, name=committer_materialize_file, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Duration Tracking of time to materialize a file in job commit, name=committer_materialize_file.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Duration Tracking of files uploaded from a local staging path, name=committer_stage_file_upload, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Duration Tracking of files uploaded from a local staging path, name=committer_stage_file_upload.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Store Existence Probe, name=store_exists_probe, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Store Existence Probe, name=store_exists_probe.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of delegation tokens issued, name=delegation_tokens_issued, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Count of delegation tokens issued, name=delegation_tokens_issued.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Upload List, name=multipart_upload_list, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Multipart Upload List, name=multipart_upload_list.failures, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for s3Guard metadata store put 
one metadata path latency with 1s interval, name=S3guard_metadatastore_put_path_latencyNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=50 percentile latency with 1 second interval for s3Guard metadata store put one metadata path latency, name=S3guard_metadatastore_put_path_latency50thPercentileLatency, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=75 percentile latency with 1 second interval for s3Guard metadata store put one metadata path latency, name=S3guard_metadatastore_put_path_latency75thPercentileLatency, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=90 percentile latency with 1 second interval for s3Guard metadata store put one metadata path latency, name=S3guard_metadatastore_put_path_latency90thPercentileLatency, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=95 percentile latency with 1 second interval for s3Guard metadata store put one metadata path latency, name=S3guard_metadatastore_put_path_latency95thPercentileLatency, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=99 percentile latency with 1 second interval for s3Guard metadata store put one metadata path latency, name=S3guard_metadatastore_put_path_latency99thPercentileLatency, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of events for s3Guard metadata store throttle rate with 1s interval, name=S3guard_metadatastore_throttle_rateNumEvents, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=50 percentile frequency (Hz) with 1 second interval for s3Guard metadata store throttle rate, name=S3guard_metadatastore_throttle_rate50thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], 
javax.management.MBeanAttributeInfo[description=75 percentile frequency (Hz) with 1 second interval for s3Guard metadata store throttle rate, name=S3guard_metadatastore_throttle_rate75thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=90 percentile frequency (Hz) with 1 second interval for s3Guard metadata store throttle rate, name=S3guard_metadatastore_throttle_rate90thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=95 percentile frequency (Hz) with 1 second interval for s3Guard metadata store throttle rate, name=S3guard_metadatastore_throttle_rate95thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=99 percentile frequency (Hz) with 1 second interval for s3Guard metadata store throttle rate, name=S3guard_metadatastore_throttle_rate99thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of events for rate of S3 request throttling with 1s interval, name=Store_io_throttle_rateNumEvents, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=50 percentile frequency (Hz) with 1 second interval for rate of S3 request throttling, name=Store_io_throttle_rate50thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=75 percentile frequency (Hz) with 1 second interval for rate of S3 request throttling, name=Store_io_throttle_rate75thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=90 percentile frequency (Hz) with 1 second interval for rate of S3 request throttling, name=Store_io_throttle_rate90thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], 
javax.management.MBeanAttributeInfo[description=95 percentile frequency (Hz) with 1 second interval for rate of S3 request throttling, name=Store_io_throttle_rate95thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=99 percentile frequency (Hz) with 1 second interval for rate of S3 request throttling, name=Store_io_throttle_rate99thPercentileFrequency (Hz), type=java.lang.Long, read-only, descriptor={}]] | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: Done | |
22/08/12 14:59:30 DEBUG MBeans: Registered Hadoop:service=s3a-file-system,name=S3AMetrics1-master.rando2 | |
22/08/12 14:59:30 DEBUG MetricsSourceAdapter: MBean for source S3AMetrics1-master.rando2 registered. | |
22/08/12 14:59:30 DEBUG MetricsSystemImpl: Registered source S3AMetrics1-master.rando2 | |
22/08/12 14:59:30 DEBUG S3AFileSystem: Client Side Encryption enabled: false | |
22/08/12 14:59:30 DEBUG S3ARetryPolicy: Retrying on recoverable AWS failures 7 times with an initial interval of 500ms | |
22/08/12 14:59:30 DEBUG S3GuardExistsRetryPolicy: Retrying on recoverable S3Guard table/S3 inconsistencies 7 times with an initial interval of 2000ms | |
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.paging.maximum is 5000 | |
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.block.size is 33554432 | |
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.readahead.range is 65536 | |
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.max.total.tasks is 32 | |
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.threads.keepalivetime is 60 | |
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.executor.capacity is 16 | |
22/08/12 14:59:30 DEBUG SignerManager: No custom signers specified | |
22/08/12 14:59:30 DEBUG AuditIntegration: auditing is disabled | |
22/08/12 14:59:30 DEBUG AbstractService: Service: NoopAuditManagerS3A entered state INITED | |
22/08/12 14:59:30 DEBUG CompositeService: NoopAuditManagerS3A: initing services, size=0 | |
22/08/12 14:59:30 DEBUG CompositeService: Adding service NoopAuditor | |
22/08/12 14:59:30 DEBUG AbstractService: Service: NoopAuditor entered state INITED | |
22/08/12 14:59:30 DEBUG CompositeService: NoopAuditManagerS3A: starting services, size=1 | |
22/08/12 14:59:30 DEBUG AbstractService: Service NoopAuditor is started
22/08/12 14:59:30 DEBUG AbstractService: Service NoopAuditManagerS3A is started
22/08/12 14:59:30 DEBUG AuditIntegration: Started Audit Manager Service NoopAuditManagerS3A in state NoopAuditManagerS3A: STARTED
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.internal.upload.part.count.limit is 10000
22/08/12 14:59:30 DEBUG S3ARetryPolicy: Retrying on recoverable AWS failures 7 times with an initial interval of 500ms
22/08/12 14:59:30 DEBUG S3AUtils: Credential provider class is org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
22/08/12 14:59:30 DEBUG S3AUtils: Credential provider class is org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
22/08/12 14:59:30 DEBUG S3AUtils: Credential provider class is com.amazonaws.auth.EnvironmentVariableCredentialsProvider
22/08/12 14:59:30 DEBUG S3AUtils: Credential provider class is org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
22/08/12 14:59:30 DEBUG S3AUtils: For URI s3a://master.rando2/nonemptyprefix3, using credentials AWSCredentialProviderList[refcount= 1: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@4a29569e]
22/08/12 14:59:30 DEBUG S3AFileSystem: Using credential provider AWSCredentialProviderList[refcount= 1: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@4a29569e]
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.connection.maximum is 96
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.attempts.maximum is 20
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.connection.establish.timeout is 5000
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.connection.timeout is 200000
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.socket.send.buffer is 8192
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.socket.recv.buffer is 8192
22/08/12 14:59:30 DEBUG S3AUtils: Using User-Agent: Hadoop 3.3.2
22/08/12 14:59:30 DEBUG S3AUtils: Data is unencrypted
22/08/12 14:59:30 DEBUG AmazonWebServiceClient: Internal logging successfully configured to commons logger: true
22/08/12 14:59:30 DEBUG AwsSdkMetrics: Admin mbean registered under com.amazonaws.management:type=AwsSdkMetrics
22/08/12 14:59:30 DEBUG DefaultS3ClientFactory: Creating endpoint configuration for "http://localhost:30600"
22/08/12 14:59:30 DEBUG DefaultS3ClientFactory: Endpoint URI = http://localhost:30600
22/08/12 14:59:30 DEBUG DefaultS3ClientFactory: Endpoint http://localhost:30600 is not the default; parsing
22/08/12 14:59:30 DEBUG DefaultS3ClientFactory: Region for endpoint http://localhost:30600, URI http://localhost:30600 is determined as null
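The credential-provider chain and custom-endpoint lines above are driven by Hadoop S3A configuration. A minimal sketch of the settings that would reproduce this setup (the `fs.s3a.*` keys are real S3A property names; the access/secret values are placeholders, not taken from this log):

```python
# Hedged sketch: S3A settings matching what the log reports.
# PLACEHOLDER values are illustrative only -- they do not come from the log.
s3a_conf = {
    # Non-default S3-compatible endpoint, as logged by DefaultS3ClientFactory.
    "fs.s3a.endpoint": "http://localhost:30600",
    # Providers tried in order, as logged by S3AUtils above.
    "fs.s3a.aws.credentials.provider": ",".join([
        "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider",
        "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider",
        "com.amazonaws.auth.EnvironmentVariableCredentialsProvider",
        "org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider",
    ]),
    "fs.s3a.access.key": "PLACEHOLDER",
    "fs.s3a.secret.key": "PLACEHOLDER",
    # Pool and retry values observed in the log.
    "fs.s3a.connection.maximum": "96",
    "fs.s3a.attempts.maximum": "20",
}
```

In Spark these would typically be set with the `spark.hadoop.` prefix (e.g. `spark.hadoop.fs.s3a.endpoint`) on the session config.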
22/08/12 14:59:30 DEBUG FsUrlStreamHandlerFactory: Creating handler for protocol http
22/08/12 14:59:30 DEBUG FsUrlStreamHandlerFactory: Unknown protocol http, delegating to default implementation
22/08/12 14:59:30 DEBUG CsmConfigurationProviderChain: Unable to load configuration from com.amazonaws.monitoring.EnvironmentVariableCsmConfigurationProvider@7c2414c6: Unable to load Client Side Monitoring configurations from environment variables!
22/08/12 14:59:30 DEBUG CsmConfigurationProviderChain: Unable to load configuration from com.amazonaws.monitoring.SystemPropertyCsmConfigurationProvider@3e250c57: Unable to load Client Side Monitoring configurations from system properties variables!
22/08/12 14:59:30 DEBUG PoolingHttpClientConnectionManager: Closing connections idle longer than 60000 MILLISECONDS
22/08/12 14:59:30 DEBUG CsmConfigurationProviderChain: Unable to load configuration from com.amazonaws.monitoring.ProfileCsmConfigurationProvider@1a41b611: The 'default' profile does not define all the required properties!
22/08/12 14:59:30 DEBUG S3AFileSystem: skipping check for bucket existence
22/08/12 14:59:30 DEBUG S3AFileSystem: Input fadvise policy = normal
22/08/12 14:59:30 DEBUG S3AFileSystem: Change detection policy = ETagChangeDetectionPolicy mode=None
22/08/12 14:59:30 DEBUG S3AFileSystem: Filesystem support for magic committers is enabled
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.fast.upload.active.blocks is 4
22/08/12 14:59:30 DEBUG S3AFileSystem: Using S3ABlockOutputStream with buffer = disk; block=67108864; queue limit=4
22/08/12 14:59:30 DEBUG S3Guard: Metastore option source [core-default.xml]
22/08/12 14:59:30 DEBUG S3Guard: Using NullMetadataStore metadata store for s3a filesystem
22/08/12 14:59:30 DEBUG S3AFileSystem: S3Guard is disabled on this bucket: master.rando2
22/08/12 14:59:30 DEBUG DirectoryPolicyImpl: Directory markers will be deleted
22/08/12 14:59:30 DEBUG S3AFileSystem: Directory marker retention policy is DirectoryMarkerRetention{policy='delete'}
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.multipart.purge.age is 86400
22/08/12 14:59:30 DEBUG S3AUtils: Value of fs.s3a.bulk.delete.page.size is 250
22/08/12 14:59:30 DEBUG FileSystem: Creating FS s3a://master.rando2/nonemptyprefix3: duration 0:00.792s
22/08/12 14:59:30 DEBUG FileCommitProtocol: Creating committer org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol; job 83583f71-4d13-4378-9a53-299f35df904c; output=s3a://master.rando2/nonemptyprefix3; dynamic=false
22/08/12 14:59:30 DEBUG FileCommitProtocol: Using (String, String, Boolean) constructor
22/08/12 14:59:30 DEBUG IOStatisticsStoreImpl: Incrementing counter op_exists by 1 with final value 1
22/08/12 14:59:30 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3 (nonemptyprefix3); needEmptyDirectory=false
22/08/12 14:59:30 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3
22/08/12 14:59:30 DEBUG IOStatisticsStoreImpl: Incrementing counter object_metadata_request by 1 with final value 1
22/08/12 14:59:30 DEBUG IOStatisticsStoreImpl: Incrementing counter action_http_head_request by 1 with final value 1
22/08/12 14:59:30 DEBUG S3AFileSystem: HEAD nonemptyprefix3 with change tracker null
22/08/12 14:59:30 DEBUG Invoker: Starting: create credentials
22/08/12 14:59:30 DEBUG Invoker: create credentials: duration 0:00.006s
22/08/12 14:59:30 DEBUG AWSCredentialProviderList: No credentials from TemporaryAWSCredentialsProvider: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: Session credentials in Hadoop configuration: No AWS Credentials
22/08/12 14:59:30 DEBUG AWSCredentialProviderList: Using credentials from SimpleAWSCredentialsProvider
22/08/12 14:59:30 DEBUG request: Sending Request: HEAD http://localhost:30600 /master.rando2/nonemptyprefix3 Headers: (amz-sdk-invocation-id: 3689250e-819e-f351-dfd6-99446b23a639, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 14:59:30 DEBUG AWS4Signer: AWS4 Canonical Request: '"HEAD
/master.rando2/nonemptyprefix3

amz-sdk-invocation-id:3689250e-819e-f351-dfd6-99446b23a639
amz-sdk-request:attempt=1;max=21
amz-sdk-retry:0/0/500
content-type:application/octet-stream
host:localhost:30600
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20220812T135930Z

amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
22/08/12 14:59:30 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256
20220812T135930Z
20220812/us-east-1/s3/aws4_request
2ce71d7b071396bf6d8df58737ad2483c7ee962b1dcf33843655b65fc35f5494"
22/08/12 14:59:30 DEBUG AWS4Signer: Generating a new signing key as the signing key not available in the cache for the date 1660262400000
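The `x-amz-content-sha256` value in the canonical request above is the SHA-256 of an empty payload, since a HEAD request carries no body. That well-known constant can be reproduced directly:

```python
import hashlib

# SHA-256 of the empty byte string -- the "empty payload" hash that
# SigV4 uses for bodiless requests such as this HEAD.
empty_payload_hash = hashlib.sha256(b"").hexdigest()
print(empty_payload_hash)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```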
22/08/12 14:59:30 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 14:59:30 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 14:59:30 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 0; route allocated: 0 of 96; total allocated: 0 of 96]
22/08/12 14:59:30 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:30 DEBUG MainClientExec: Opening connection {}->http://localhost:30600
22/08/12 14:59:30 DEBUG DefaultHttpClientConnectionOperator: Connecting to localhost/127.0.0.1:30600
22/08/12 14:59:30 DEBUG DefaultHttpClientConnectionOperator: Connection established 127.0.0.1:59610<->127.0.0.1:30600
22/08/12 14:59:30 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:30 DEBUG MainClientExec: Executing request HEAD /master.rando2/nonemptyprefix3 HTTP/1.1
22/08/12 14:59:30 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> HEAD /master.rando2/nonemptyprefix3 HTTP/1.1
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> Host: localhost:30600
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 3689250e-819e-f351-dfd6-99446b23a639
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> amz-sdk-request: attempt=1;max=21
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=5ddbf5ff510f270e4e8541a77cb5ef765aac168b11d19fa1a594097dbaf1c528
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135930Z
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "HEAD /master.rando2/nonemptyprefix3 HTTP/1.1[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 3689250e-819e-f351-dfd6-99446b23a639[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: attempt=1;max=21[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=5ddbf5ff510f270e4e8541a77cb5ef765aac168b11d19fa1a594097dbaf1c528[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135930Z[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 >> "[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "HTTP/1.1 404 Not Found[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: bdd05823-588c-4254-ad2d-80755457c5a5[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: bdd05823-588c-4254-ad2d-80755457c5a5[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:30 GMT[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "Content-Length: 238[\r][\n]"
22/08/12 14:59:30 DEBUG wire: http-outgoing-0 << "[\r][\n]"
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 << HTTP/1.1 404 Not Found
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 << Content-Type: application/xml
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: bdd05823-588c-4254-ad2d-80755457c5a5
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: bdd05823-588c-4254-ad2d-80755457c5a5
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:30 GMT
22/08/12 14:59:30 DEBUG headers: http-outgoing-0 << Content-Length: 238
22/08/12 14:59:30 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 14:59:30 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 14:59:30 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0
22/08/12 14:59:30 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:30 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:30 GMT
22/08/12 14:59:30 DEBUG request: Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: bdd05823-588c-4254-ad2d-80755457c5a5; S3 Extended Request ID: bdd05823-588c-4254-ad2d-80755457c5a5; Proxy: null), S3 Extended Request ID: bdd05823-588c-4254-ad2d-80755457c5a5
22/08/12 14:59:30 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 1
22/08/12 14:59:30 DEBUG latency: ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], StatusCode=[404], ServiceEndpoint=[http://localhost:30600], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: bdd05823-588c-4254-ad2d-80755457c5a5; S3 Extended Request ID: bdd05823-588c-4254-ad2d-80755457c5a5; Proxy: null), S3 Extended Request ID: bdd05823-588c-4254-ad2d-80755457c5a5], RequestType=[GetObjectMetadataRequest], AWSRequestID=[bdd05823-588c-4254-ad2d-80755457c5a5], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=0, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[145.51], HttpClientSendRequestTime=[3.333], HttpRequestTime=[74.243], ApiCallLatency=[125.672], RequestSigningTime=[31.856], CredentialsRequestTime=[8.209, 0.013], HttpClientReceiveResponseTime=[24.896],
22/08/12 14:59:30 DEBUG S3AFileSystem: LIST List master.rando2:/nonemptyprefix3/ delimiter=/ keys=2 requester pays=false
22/08/12 14:59:30 DEBUG S3AFileSystem: Starting: LIST
22/08/12 14:59:30 DEBUG IOStatisticsStoreImpl: Incrementing counter object_list_request by 1 with final value 1
22/08/12 14:59:31 DEBUG request: Sending Request: GET http://localhost:30600 /master.rando2/ Parameters: ({"list-type":["2"],"delimiter":["/"],"max-keys":["2"],"prefix":["nonemptyprefix3/"],"fetch-owner":["false"]}Headers: (amz-sdk-invocation-id: 6023f33e-be7e-9657-900e-ae35c80230b8, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"GET
/master.rando2/
delimiter=%2F&fetch-owner=false&list-type=2&max-keys=2&prefix=nonemptyprefix3%2F
amz-sdk-invocation-id:6023f33e-be7e-9657-900e-ae35c80230b8
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21
amz-sdk-retry:0/0/500
content-type:application/octet-stream
host:localhost:30600
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20220812T135931Z
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256
20220812T135931Z
20220812/us-east-1/s3/aws4_request
eb922d1e06d55f5f3eba2dc92a349b09bd8f6e7ce52fdce3b271462953e585db"
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:31 DEBUG MainClientExec: Executing request GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F&fetch-owner=false HTTP/1.1
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F&fetch-owner=false HTTP/1.1
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 6023f33e-be7e-9657-900e-ae35c80230b8
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=4739be50d5d956b708bb1bbb90c910c9f0ed4d9677ce716764a3744172da3a96
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Length: 0
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F&fetch-owner=false HTTP/1.1[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 6023f33e-be7e-9657-900e-ae35c80230b8[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=4739be50d5d956b708bb1bbb90c910c9f0ed4d9677ce716764a3744172da3a96[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Length: 0[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: d947412f-fc79-4b65-a93e-5e4b02b9cc92[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: d947412f-fc79-4b65-a93e-5e4b02b9cc92[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 276[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<?xml version="1.0" encoding="UTF-8"?>[\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><Marker></Marker><MaxKeys>2</MaxKeys><Name>master.rando2</Name><Prefix>nonemptyprefix3/</Prefix></ListBucketResult>"
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: d947412f-fc79-4b65-a93e-5e4b02b9cc92
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: d947412f-fc79-4b65-a93e-5e4b02b9cc92
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 276
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Sanitizing XML document destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Examining listing for bucket: master.rando2
22/08/12 14:59:31 DEBUG request: Received successful response: 200, AWS Request ID: d947412f-fc79-4b65-a93e-5e4b02b9cc92
22/08/12 14:59:31 DEBUG requestId: x-amzn-RequestId: not available
22/08/12 14:59:31 DEBUG requestId: AWS Request ID: d947412f-fc79-4b65-a93e-5e4b02b9cc92
22/08/12 14:59:31 DEBUG requestId: AWS Extended Request ID: d947412f-fc79-4b65-a93e-5e4b02b9cc92
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 2
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[ListObjectsV2Request], AWSRequestID=[d947412f-fc79-4b65-a93e-5e4b02b9cc92], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[38.604], ClientExecuteTime=[73.059], HttpClientSendRequestTime=[1.832], HttpRequestTime=[15.987], ApiCallLatency=[72.811], RequestSigningTime=[0.63], CredentialsRequestTime=[0.007, 0.012], HttpClientReceiveResponseTime=[11.958],
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST: duration 0:00.075s
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter op_exists.failures by 1 with final value 1
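The LIST at 14:59:31 returned 200 but with no `Contents` and no `CommonPrefixes` elements, which is why S3A logs "Not Found" for the path and `op_exists` fails. Parsing the exact `ListBucketResult` body from the wire dump confirms it is an empty listing:

```python
import xml.etree.ElementTree as ET

# The ListBucketResult body from the 200 response above, verbatim.
body = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    '<Delimiter>/</Delimiter><IsTruncated>false</IsTruncated>'
    '<Marker></Marker><MaxKeys>2</MaxKeys><Name>master.rando2</Name>'
    '<Prefix>nonemptyprefix3/</Prefix></ListBucketResult>'
)
ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(body)
keys = root.findall("s3:Contents", ns)          # objects under the prefix
prefixes = root.findall("s3:CommonPrefixes", ns)  # "subdirectories"
print(len(keys), len(prefixes))
# 0 0 -> nothing under nonemptyprefix3/, so S3A reports the path Not Found
```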
22/08/12 14:59:31 INFO ParquetFileFormat: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter
22/08/12 14:59:31 DEBUG PathOutputCommitter: Instantiating committer FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202208121459318392790952053399899_0000}; taskId=attempt_202208121459318392790952053399899_0000_m_000000_0, status=''}; org.apache.parquet.hadoop.ParquetOutputCommitter@7478c44c}; outputPath=null, workPath=null, algorithmVersion=0, skipCleanup=false, ignoreCleanupFailures=false} with output path s3a://master.rando2/nonemptyprefix3 and job context TaskAttemptContextImpl{JobContextImpl{jobId=job_202208121459318392790952053399899_0000}; taskId=attempt_202208121459318392790952053399899_0000_m_000000_0, status=''}
22/08/12 14:59:31 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
22/08/12 14:59:31 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
22/08/12 14:59:31 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
22/08/12 14:59:31 DEBUG PathOutputCommitter: Instantiating committer FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202208121459318392790952053399899_0000}; taskId=attempt_202208121459318392790952053399899_0000_m_000000_0, status=''}; org.apache.parquet.hadoop.ParquetOutputCommitter@424ec884}; outputPath=null, workPath=null, algorithmVersion=0, skipCleanup=false, ignoreCleanupFailures=false} with output path s3a://master.rando2/nonemptyprefix3 and job context TaskAttemptContextImpl{JobContextImpl{jobId=job_202208121459318392790952053399899_0000}; taskId=attempt_202208121459318392790952053399899_0000_m_000000_0, status=''}
22/08/12 14:59:31 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
22/08/12 14:59:31 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
22/08/12 14:59:31 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter op_mkdirs by 1 with final value 1
22/08/12 14:59:31 DEBUG MkdirOperation: Making directory: s3a://master.rando2/nonemptyprefix3/_temporary/0
22/08/12 14:59:31 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3/_temporary/0 (nonemptyprefix3/_temporary/0); needEmptyDirectory=false
22/08/12 14:59:31 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3/_temporary/0
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST List master.rando2:/nonemptyprefix3/_temporary/0/ delimiter=/ keys=2 requester pays=false
22/08/12 14:59:31 DEBUG S3AFileSystem: Starting: LIST
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_list_request by 1 with final value 2
22/08/12 14:59:31 DEBUG request: Sending Request: GET http://localhost:30600 /master.rando2/ Parameters: ({"list-type":["2"],"delimiter":["/"],"max-keys":["2"],"prefix":["nonemptyprefix3/_temporary/0/"],"fetch-owner":["false"]}Headers: (amz-sdk-invocation-id: 2dd3e328-5140-6429-3f67-abc271103dfc, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"GET
/master.rando2/
delimiter=%2F&fetch-owner=false&list-type=2&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F0%2F
amz-sdk-invocation-id:2dd3e328-5140-6429-3f67-abc271103dfc
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21
amz-sdk-retry:0/0/500
content-type:application/octet-stream
host:localhost:30600
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20220812T135931Z
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256
20220812T135931Z
20220812/us-east-1/s3/aws4_request
8a267687105099195cc7ce8b662c892dd277c5a3012bdcd5cada4d9dbd48d282"
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:31 DEBUG MainClientExec: Executing request GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F0%2F&fetch-owner=false HTTP/1.1
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F0%2F&fetch-owner=false HTTP/1.1
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 2dd3e328-5140-6429-3f67-abc271103dfc
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=48042dd754aa4fa392e5d253e2255f36eb8ad7b70e90897d6553d01b53ba1d9c
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Length: 0
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F0%2F&fetch-owner=false HTTP/1.1[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 2dd3e328-5140-6429-3f67-abc271103dfc[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=48042dd754aa4fa392e5d253e2255f36eb8ad7b70e90897d6553d01b53ba1d9c[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Length: 0[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]"
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: f054de18-9c35-4358-bf40-c6192ad94b65[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: f054de18-9c35-4358-bf40-c6192ad94b65[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 289[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<?xml version="1.0" encoding="UTF-8"?>[\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><Marker></Marker><MaxKeys>2</MaxKeys><Name>master.rando2</Name><Prefix>nonemptyprefix3/_temporary/0/</Prefix></ListBucketResult>" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: f054de18-9c35-4358-bf40-c6192ad94b65 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: f054de18-9c35-4358-bf40-c6192ad94b65 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 289 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Sanitizing XML document destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Examining listing for bucket: master.rando2 | |
22/08/12 14:59:31 DEBUG request: Received successful response: 200, AWS Request ID: f054de18-9c35-4358-bf40-c6192ad94b65 | |
22/08/12 14:59:31 DEBUG requestId: x-amzn-RequestId: not available | |
22/08/12 14:59:31 DEBUG requestId: AWS Request ID: f054de18-9c35-4358-bf40-c6192ad94b65 | |
22/08/12 14:59:31 DEBUG requestId: AWS Extended Request ID: f054de18-9c35-4358-bf40-c6192ad94b65 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 3 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[ListObjectsV2Request], AWSRequestID=[f054de18-9c35-4358-bf40-c6192ad94b65], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[2.495], ClientExecuteTime=[22.913], HttpClientSendRequestTime=[1.315], HttpRequestTime=[18.583], ApiCallLatency=[22.652], RequestSigningTime=[0.522], CredentialsRequestTime=[0.009, 0.004], HttpClientReceiveResponseTime=[16.172], | |
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST: duration 0:00.023s | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3/_temporary/0 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3/_temporary/0 (nonemptyprefix3/_temporary/0); needEmptyDirectory=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3/_temporary/0 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_metadata_request by 1 with final value 2 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter action_http_head_request by 1 with final value 2 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: HEAD nonemptyprefix3/_temporary/0 with change tracker null | |
22/08/12 14:59:31 DEBUG request: Sending Request: HEAD http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0 Headers: (amz-sdk-invocation-id: dfa5ca0b-6907-330a-71ef-0d36c0fdc91a, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"HEAD | |
/master.rando2/nonemptyprefix3/_temporary/0 | |
amz-sdk-invocation-id:dfa5ca0b-6907-330a-71ef-0d36c0fdc91a | |
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-type:application/octet-stream | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
x-amz-date:20220812T135931Z | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date | |
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135931Z | |
20220812/us-east-1/s3/aws4_request | |
aaa519c1c80666b54eec9fec1385de07ca48dfe242d0a080f66d594d4ed80b4d" | |
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG MainClientExec: Executing request HEAD /master.rando2/nonemptyprefix3/_temporary/0 HTTP/1.1 | |
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> HEAD /master.rando2/nonemptyprefix3/_temporary/0 HTTP/1.1 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: dfa5ca0b-6907-330a-71ef-0d36c0fdc91a | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=2a895d36c8a08ffcf16fb1c57920f8ed9cc1b7e6e9b91c3296608f225cd5c1e3 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "HEAD /master.rando2/nonemptyprefix3/_temporary/0 HTTP/1.1[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: dfa5ca0b-6907-330a-71ef-0d36c0fdc91a[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=2a895d36c8a08ffcf16fb1c57920f8ed9cc1b7e6e9b91c3296608f225cd5c1e3[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 404 Not Found[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 251[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 404 Not Found | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 251 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG request: Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526; S3 Extended Request ID: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526; Proxy: null), S3 Extended Request ID: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 4 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], StatusCode=[404], ServiceEndpoint=[http://localhost:30600], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526; S3 Extended Request ID: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526; Proxy: null), S3 Extended Request ID: 8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526], RequestType=[GetObjectMetadataRequest], AWSRequestID=[8f53ca20-d8ad-4bb1-bb2c-a9e734dbc526], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[17.246], HttpClientSendRequestTime=[1.027], HttpRequestTime=[15.613], ApiCallLatency=[16.876], RequestSigningTime=[0.352], CredentialsRequestTime=[0.008, 0.003], HttpClientReceiveResponseTime=[13.22], | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3/_temporary/0 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3/_temporary (nonemptyprefix3/_temporary); needEmptyDirectory=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3/_temporary | |
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST List master.rando2:/nonemptyprefix3/_temporary/ delimiter=/ keys=2 requester pays=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Starting: LIST | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_list_request by 1 with final value 3 | |
22/08/12 14:59:31 DEBUG request: Sending Request: GET http://localhost:30600 /master.rando2/ Parameters: ({"list-type":["2"],"delimiter":["/"],"max-keys":["2"],"prefix":["nonemptyprefix3/_temporary/"],"fetch-owner":["false"]}Headers: (amz-sdk-invocation-id: db23e648-b6e3-6251-7415-c372fd3adb61, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"GET | |
/master.rando2/ | |
delimiter=%2F&fetch-owner=false&list-type=2&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F | |
amz-sdk-invocation-id:db23e648-b6e3-6251-7415-c372fd3adb61 | |
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-type:application/octet-stream | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
x-amz-date:20220812T135931Z | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date | |
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135931Z | |
20220812/us-east-1/s3/aws4_request | |
978c44171e448b030fe79034b2fe6df3bae97ccdd97dbd33ca0057ee596a6542" | |
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG MainClientExec: Executing request GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F&fetch-owner=false HTTP/1.1 | |
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F&fetch-owner=false HTTP/1.1 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: db23e648-b6e3-6251-7415-c372fd3adb61 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=ed31a02363ea51f5ba382533b1fa9d763be03353f56e015b86fabdcb833b3817 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Length: 0 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F_temporary%2F&fetch-owner=false HTTP/1.1[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: db23e648-b6e3-6251-7415-c372fd3adb61[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=ed31a02363ea51f5ba382533b1fa9d763be03353f56e015b86fabdcb833b3817[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Length: 0[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: 2178b6c2-e092-4a94-a7ae-8fb3be23e535[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: 2178b6c2-e092-4a94-a7ae-8fb3be23e535[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 287[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<?xml version="1.0" encoding="UTF-8"?>[\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><Marker></Marker><MaxKeys>2</MaxKeys><Name>master.rando2</Name><Prefix>nonemptyprefix3/_temporary/</Prefix></ListBucketResult>" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: 2178b6c2-e092-4a94-a7ae-8fb3be23e535 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: 2178b6c2-e092-4a94-a7ae-8fb3be23e535 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 287 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Sanitizing XML document destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Examining listing for bucket: master.rando2 | |
22/08/12 14:59:31 DEBUG request: Received successful response: 200, AWS Request ID: 2178b6c2-e092-4a94-a7ae-8fb3be23e535 | |
22/08/12 14:59:31 DEBUG requestId: x-amzn-RequestId: not available | |
22/08/12 14:59:31 DEBUG requestId: AWS Request ID: 2178b6c2-e092-4a94-a7ae-8fb3be23e535 | |
22/08/12 14:59:31 DEBUG requestId: AWS Extended Request ID: 2178b6c2-e092-4a94-a7ae-8fb3be23e535 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 5 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[ListObjectsV2Request], AWSRequestID=[2178b6c2-e092-4a94-a7ae-8fb3be23e535], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[2.0], ClientExecuteTime=[18.469], HttpClientSendRequestTime=[1.561], HttpRequestTime=[14.911], ApiCallLatency=[18.211], RequestSigningTime=[0.455], CredentialsRequestTime=[0.008, 0.003], HttpClientReceiveResponseTime=[11.94], | |
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST: duration 0:00.019s | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3/_temporary | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3/_temporary (nonemptyprefix3/_temporary); needEmptyDirectory=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3/_temporary | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_metadata_request by 1 with final value 3 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter action_http_head_request by 1 with final value 3 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: HEAD nonemptyprefix3/_temporary with change tracker null | |
22/08/12 14:59:31 DEBUG request: Sending Request: HEAD http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary Headers: (amz-sdk-invocation-id: 4ad3274c-e2f1-4e81-ef61-9963277df089, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"HEAD | |
/master.rando2/nonemptyprefix3/_temporary | |
amz-sdk-invocation-id:4ad3274c-e2f1-4e81-ef61-9963277df089 | |
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-type:application/octet-stream | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
x-amz-date:20220812T135931Z | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date | |
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135931Z | |
20220812/us-east-1/s3/aws4_request | |
51e40188631c83b4ac0f947eeb74217c7593637492650234b872625e3b3eba05" | |
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG MainClientExec: Executing request HEAD /master.rando2/nonemptyprefix3/_temporary HTTP/1.1 | |
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> HEAD /master.rando2/nonemptyprefix3/_temporary HTTP/1.1 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 4ad3274c-e2f1-4e81-ef61-9963277df089 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=30078ee0b0be1bf84e4d4fc32ebce330a60526b95e893835a467410639b06f18 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "HEAD /master.rando2/nonemptyprefix3/_temporary HTTP/1.1[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 4ad3274c-e2f1-4e81-ef61-9963277df089[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=30078ee0b0be1bf84e4d4fc32ebce330a60526b95e893835a467410639b06f18[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 404 Not Found[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: 90b59b7e-3609-4bb1-8778-900a46e1a3e7[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: 90b59b7e-3609-4bb1-8778-900a46e1a3e7[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 249[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 404 Not Found | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: 90b59b7e-3609-4bb1-8778-900a46e1a3e7 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: 90b59b7e-3609-4bb1-8778-900a46e1a3e7 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 249 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG request: Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: 90b59b7e-3609-4bb1-8778-900a46e1a3e7; S3 Extended Request ID: 90b59b7e-3609-4bb1-8778-900a46e1a3e7; Proxy: null), S3 Extended Request ID: 90b59b7e-3609-4bb1-8778-900a46e1a3e7 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 6 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], StatusCode=[404], ServiceEndpoint=[http://localhost:30600], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: 90b59b7e-3609-4bb1-8778-900a46e1a3e7; S3 Extended Request ID: 90b59b7e-3609-4bb1-8778-900a46e1a3e7; Proxy: null), S3 Extended Request ID: 90b59b7e-3609-4bb1-8778-900a46e1a3e7], RequestType=[GetObjectMetadataRequest], AWSRequestID=[90b59b7e-3609-4bb1-8778-900a46e1a3e7], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[17.0], HttpClientSendRequestTime=[1.178], HttpRequestTime=[15.595], ApiCallLatency=[16.772], RequestSigningTime=[0.398], CredentialsRequestTime=[0.007, 0.004], HttpClientReceiveResponseTime=[13.4], | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3/_temporary | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3 (nonemptyprefix3); needEmptyDirectory=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST List master.rando2:/nonemptyprefix3/ delimiter=/ keys=2 requester pays=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Starting: LIST | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_list_request by 1 with final value 4 | |
22/08/12 14:59:31 DEBUG request: Sending Request: GET http://localhost:30600 /master.rando2/ Parameters: ({"list-type":["2"],"delimiter":["/"],"max-keys":["2"],"prefix":["nonemptyprefix3/"],"fetch-owner":["false"]}Headers: (amz-sdk-invocation-id: 8a628fa8-dfe2-7297-4131-dccb2bcfea2b, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"GET | |
/master.rando2/ | |
delimiter=%2F&fetch-owner=false&list-type=2&max-keys=2&prefix=nonemptyprefix3%2F | |
amz-sdk-invocation-id:8a628fa8-dfe2-7297-4131-dccb2bcfea2b | |
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-type:application/octet-stream | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
x-amz-date:20220812T135931Z | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date | |
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135931Z | |
20220812/us-east-1/s3/aws4_request | |
40bc67e92e274728bbfda6e656fcb034174130c3bbf5e5f991a07daf22e5a874" | |
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG MainClientExec: Executing request GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F&fetch-owner=false HTTP/1.1 | |
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F&fetch-owner=false HTTP/1.1 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 8a628fa8-dfe2-7297-4131-dccb2bcfea2b | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=e526c3d6bc9ffbe3a5bbb64f762784462b467e1df57bf96082588d3464aab155 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Length: 0 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "GET /master.rando2/?list-type=2&delimiter=%2F&max-keys=2&prefix=nonemptyprefix3%2F&fetch-owner=false HTTP/1.1[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 8a628fa8-dfe2-7297-4131-dccb2bcfea2b[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=e526c3d6bc9ffbe3a5bbb64f762784462b467e1df57bf96082588d3464aab155[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Length: 0[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: 6da9605d-5c59-4c96-a94a-b123d79b7243[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: 6da9605d-5c59-4c96-a94a-b123d79b7243[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 276[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<?xml version="1.0" encoding="UTF-8"?>[\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><Marker></Marker><MaxKeys>2</MaxKeys><Name>master.rando2</Name><Prefix>nonemptyprefix3/</Prefix></ListBucketResult>" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: 6da9605d-5c59-4c96-a94a-b123d79b7243 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: 6da9605d-5c59-4c96-a94a-b123d79b7243 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 276 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Sanitizing XML document destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler | |
22/08/12 14:59:31 DEBUG XmlResponsesSaxParser: Examining listing for bucket: master.rando2 | |
22/08/12 14:59:31 DEBUG request: Received successful response: 200, AWS Request ID: 6da9605d-5c59-4c96-a94a-b123d79b7243 | |
22/08/12 14:59:31 DEBUG requestId: x-amzn-RequestId: not available | |
22/08/12 14:59:31 DEBUG requestId: AWS Request ID: 6da9605d-5c59-4c96-a94a-b123d79b7243 | |
22/08/12 14:59:31 DEBUG requestId: AWS Extended Request ID: 6da9605d-5c59-4c96-a94a-b123d79b7243 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 7 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[ListObjectsV2Request], AWSRequestID=[6da9605d-5c59-4c96-a94a-b123d79b7243], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[2.02], ClientExecuteTime=[17.459], HttpClientSendRequestTime=[0.979], HttpRequestTime=[14.005], ApiCallLatency=[17.244], RequestSigningTime=[0.435], CredentialsRequestTime=[0.006, 0.004], HttpClientReceiveResponseTime=[12.122], | |
22/08/12 14:59:31 DEBUG S3AFileSystem: LIST: duration 0:00.019s | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Getting path status for s3a://master.rando2/nonemptyprefix3 (nonemptyprefix3); needEmptyDirectory=false | |
22/08/12 14:59:31 DEBUG S3AFileSystem: S3GetFileStatus s3a://master.rando2/nonemptyprefix3 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_metadata_request by 1 with final value 4 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter action_http_head_request by 1 with final value 4 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: HEAD nonemptyprefix3 with change tracker null | |
22/08/12 14:59:31 DEBUG request: Sending Request: HEAD http://localhost:30600 /master.rando2/nonemptyprefix3 Headers: (amz-sdk-invocation-id: 856c157f-20e6-caa3-561f-4f42229f4674, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"HEAD | |
/master.rando2/nonemptyprefix3 | |
amz-sdk-invocation-id:856c157f-20e6-caa3-561f-4f42229f4674 | |
amz-sdk-request:ttl=20220812T140251Z;attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-type:application/octet-stream | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
x-amz-date:20220812T135931Z | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date | |
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135931Z | |
20220812/us-east-1/s3/aws4_request | |
a0e7131dad71f413b83246d1915dc4617f6898547aa1fe9fd9e3bbad5b59e125" | |
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG MainClientExec: Executing request HEAD /master.rando2/nonemptyprefix3 HTTP/1.1 | |
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> HEAD /master.rando2/nonemptyprefix3 HTTP/1.1 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 856c157f-20e6-caa3-561f-4f42229f4674 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=6a5f2a9db5939c990b7f88ba1d82ff0bdb96ae5a8f9404f6a3e76eef5a8ffbf7 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/octet-stream | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "HEAD /master.rando2/nonemptyprefix3 HTTP/1.1[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 856c157f-20e6-caa3-561f-4f42229f4674[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: ttl=20220812T140251Z;attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=6a5f2a9db5939c990b7f88ba1d82ff0bdb96ae5a8f9404f6a3e76eef5a8ffbf7[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 404 Not Found[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Type: application/xml[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Id-2: e15cf03f-a7ec-49e1-bf42-42899212beeb[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Request-Id: e15cf03f-a7ec-49e1-bf42-42899212beeb[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 238[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 404 Not Found | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Type: application/xml | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Id-2: e15cf03f-a7ec-49e1-bf42-42899212beeb | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Request-Id: e15cf03f-a7ec-49e1-bf42-42899212beeb | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 238 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG request: Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: e15cf03f-a7ec-49e1-bf42-42899212beeb; S3 Extended Request ID: e15cf03f-a7ec-49e1-bf42-42899212beeb; Proxy: null), S3 Extended Request ID: e15cf03f-a7ec-49e1-bf42-42899212beeb | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 8 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], StatusCode=[404], ServiceEndpoint=[http://localhost:30600], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: e15cf03f-a7ec-49e1-bf42-42899212beeb; S3 Extended Request ID: e15cf03f-a7ec-49e1-bf42-42899212beeb; Proxy: null), S3 Extended Request ID: e15cf03f-a7ec-49e1-bf42-42899212beeb], RequestType=[GetObjectMetadataRequest], AWSRequestID=[e15cf03f-a7ec-49e1-bf42-42899212beeb], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[15.281], HttpClientSendRequestTime=[0.775], HttpRequestTime=[13.506], ApiCallLatency=[15.05], RequestSigningTime=[0.318], CredentialsRequestTime=[0.006, 0.004], HttpClientReceiveResponseTime=[11.799], | |
22/08/12 14:59:31 DEBUG S3AFileSystem: Not Found: s3a://master.rando2/nonemptyprefix3 | |
22/08/12 14:59:31 DEBUG Invoker: Starting: PUT 0-byte object | |
22/08/12 14:59:31 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/ | |
22/08/12 14:59:31 DEBUG S3AFileSystem: PUT start 0 bytes | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 1 | |
22/08/12 14:59:31 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: a2b8a521-8731-4b9a-1e7b-a37fe8438e15, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT | |
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:a2b8a521-8731-4b9a-1e7b-a37fe8438e15 | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T135931Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 14:59:31 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135931Z | |
20220812/us-east-1/s3/aws4_request | |
28171b9304a36e8463db510bd7db7c7fd616264b540fd4272edfcaffd5bfa3b7" | |
22/08/12 14:59:31 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:31 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:31 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:31 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: a2b8a521-8731-4b9a-1e7b-a37fe8438e15 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-request: attempt=1;max=21 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=bdd0e0eee322eeb48b2b455e09e5976eef7774f7a00e458776faebc438bc7029 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Type: application/x-directory | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135931Z | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> x-amz-decoded-content-length: 0 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Content-Length: 86 | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 >> Expect: 100-continue | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: a2b8a521-8731-4b9a-1e7b-a37fe8438e15[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=bdd0e0eee322eeb48b2b455e09e5976eef7774f7a00e458776faebc438bc7029[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Type: application/x-directory[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135931Z[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "x-amz-decoded-content-length: 0[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Content-Length: 86[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "Expect: 100-continue[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 100 Continue[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 100 Continue | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "0;chunk-signature=ffbeff62be55998ca87d94892ff8a91d11320680bf618484eff19e41583f7d01[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "X-Amz-Version-Id: a3566e4308a04d53af761a59fcd2474b[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:31 GMT[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "Content-Length: 0[\r][\n]" | |
22/08/12 14:59:31 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8" | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << X-Amz-Version-Id: a3566e4308a04d53af761a59fcd2474b | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG headers: http-outgoing-0 << Content-Length: 0 | |
22/08/12 14:59:31 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:31 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:31 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:31 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:31 GMT | |
22/08/12 14:59:31 DEBUG request: Received successful response: 200, AWS Request ID: null | |
22/08/12 14:59:31 DEBUG requestId: x-amzn-RequestId: not available | |
22/08/12 14:59:31 DEBUG requestId: AWS Request ID: not available | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 9 | |
22/08/12 14:59:31 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.268], ClientExecuteTime=[98.078], HttpClientSendRequestTime=[3.751], HttpRequestTime=[93.759], ApiCallLatency=[97.703], RequestSigningTime=[0.772], CredentialsRequestTime=[0.008, 0.004], HttpClientReceiveResponseTime=[88.968], | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 1 | |
22/08/12 14:59:31 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 1 | |
22/08/12 14:59:31 DEBUG Invoker: PUT 0-byte object : duration 0:00.162s | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 1 | |
22/08/12 14:59:31 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 1 | |
22/08/12 14:59:32 DEBUG Invoker: retry #1 | |
22/08/12 14:59:32 DEBUG Invoker: Starting: PUT 0-byte object | |
22/08/12 14:59:32 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/ | |
22/08/12 14:59:32 DEBUG S3AFileSystem: PUT start 0 bytes | |
22/08/12 14:59:32 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 2 | |
22/08/12 14:59:32 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: 50ef29fc-74ab-8469-bdf5-0adc44752d7d, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:32 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT | |
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:50ef29fc-74ab-8469-bdf5-0adc44752d7d | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T135932Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 14:59:32 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135932Z | |
20220812/us-east-1/s3/aws4_request | |
41bb594b6bb09199e680446ea29834d0144f83a734831da20b143cc4e3ee8e63" | |
22/08/12 14:59:32 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:32 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:32 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:32 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:32 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:32 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:32 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:32 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 50ef29fc-74ab-8469-bdf5-0adc44752d7d | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> amz-sdk-request: attempt=1;max=21 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=d884298e79184b27fb4625e0299c59b6145b59d159d3a9f93511bb2bd3d92a2d | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> Content-Type: application/x-directory | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135932Z | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> x-amz-decoded-content-length: 0 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> Content-Length: 86 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 >> Expect: 100-continue | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 50ef29fc-74ab-8469-bdf5-0adc44752d7d[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=d884298e79184b27fb4625e0299c59b6145b59d159d3a9f93511bb2bd3d92a2d[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "Content-Type: application/x-directory[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135932Z[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "x-amz-decoded-content-length: 0[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "Content-Length: 86[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "Expect: 100-continue[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "HTTP/1.1 100 Continue[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 << HTTP/1.1 100 Continue | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "0;chunk-signature=430a5eb9afb31b39a278e36866bd852d4638352fb195ae350781f0ef6ad82adf[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "X-Amz-Version-Id: 1aca2572a1ac4c7bb888a208e96b7c66[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:32 GMT[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "Content-Length: 0[\r][\n]" | |
22/08/12 14:59:32 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8" | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 << X-Amz-Version-Id: 1aca2572a1ac4c7bb888a208e96b7c66 | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:32 GMT | |
22/08/12 14:59:32 DEBUG headers: http-outgoing-0 << Content-Length: 0 | |
22/08/12 14:59:32 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:32 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:32 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:32 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:32 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:32 GMT | |
22/08/12 14:59:32 DEBUG request: Received successful response: 200, AWS Request ID: null | |
22/08/12 14:59:32 DEBUG requestId: x-amzn-RequestId: not available | |
22/08/12 14:59:32 DEBUG requestId: AWS Request ID: not available | |
22/08/12 14:59:32 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 10 | |
22/08/12 14:59:32 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.259], ClientExecuteTime=[203.961], HttpClientSendRequestTime=[4.354], HttpRequestTime=[201.852], ApiCallLatency=[203.585], RequestSigningTime=[0.662], CredentialsRequestTime=[0.011, 0.006], HttpClientReceiveResponseTime=[195.939], | |
22/08/12 14:59:32 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 2 | |
22/08/12 14:59:32 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes | |
22/08/12 14:59:32 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 2 | |
22/08/12 14:59:32 DEBUG Invoker: PUT 0-byte object : duration 0:00.206s | |
22/08/12 14:59:32 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 2 | |
22/08/12 14:59:32 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 2 | |
22/08/12 14:59:35 DEBUG Invoker: retry #2 | |
22/08/12 14:59:35 DEBUG Invoker: Starting: PUT 0-byte object | |
22/08/12 14:59:35 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/ | |
22/08/12 14:59:35 DEBUG S3AFileSystem: PUT start 0 bytes | |
22/08/12 14:59:35 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 3 | |
22/08/12 14:59:35 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: d5b0c1b0-ae19-bc4d-15a6-970e83cd2d4a, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:35 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT | |
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:d5b0c1b0-ae19-bc4d-15a6-970e83cd2d4a | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T135935Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 14:59:35 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135935Z | |
20220812/us-east-1/s3/aws4_request | |
ef649a84f2b6179400163eb655eb828f0949c75fc2caf47575e04b2abd8637da" | |
22/08/12 14:59:35 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:35 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:35 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:35 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:35 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:35 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:35 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:35 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: d5b0c1b0-ae19-bc4d-15a6-970e83cd2d4a | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> amz-sdk-request: attempt=1;max=21 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=49c0195fc7a60dbe4e0552d9c142a181612268974e1e64f823f654017fc3c4c7 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> Content-Type: application/x-directory | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135935Z | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> x-amz-decoded-content-length: 0 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> Content-Length: 86 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 >> Expect: 100-continue | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: d5b0c1b0-ae19-bc4d-15a6-970e83cd2d4a[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=49c0195fc7a60dbe4e0552d9c142a181612268974e1e64f823f654017fc3c4c7[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "Content-Type: application/x-directory[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135935Z[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "x-amz-decoded-content-length: 0[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "Content-Length: 86[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "Expect: 100-continue[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "HTTP/1.1 100 Continue[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 << HTTP/1.1 100 Continue | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "0;chunk-signature=fe2a77b5acf46367c12e5febedc82fd910f68c91605e8daf801367265575322b[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "X-Amz-Version-Id: f288c8872c754b5d8deba184ba8eb653[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:35 GMT[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "Content-Length: 0[\r][\n]" | |
22/08/12 14:59:35 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8" | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 << X-Amz-Version-Id: f288c8872c754b5d8deba184ba8eb653 | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:35 GMT | |
22/08/12 14:59:35 DEBUG headers: http-outgoing-0 << Content-Length: 0 | |
22/08/12 14:59:35 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS | |
22/08/12 14:59:35 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds | |
22/08/12 14:59:35 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0 | |
22/08/12 14:59:35 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:35 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:35 GMT | |
22/08/12 14:59:35 DEBUG request: Received successful response: 200, AWS Request ID: null | |
22/08/12 14:59:35 DEBUG requestId: x-amzn-RequestId: not available | |
22/08/12 14:59:35 DEBUG requestId: AWS Request ID: not available | |
22/08/12 14:59:35 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 11 | |
22/08/12 14:59:35 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.224], ClientExecuteTime=[107.307], HttpClientSendRequestTime=[3.362], HttpRequestTime=[105.272], ApiCallLatency=[106.923], RequestSigningTime=[0.637], CredentialsRequestTime=[0.009, 0.009], HttpClientReceiveResponseTime=[100.603], | |
22/08/12 14:59:35 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 3 | |
22/08/12 14:59:35 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes | |
22/08/12 14:59:35 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 3 | |
22/08/12 14:59:35 DEBUG Invoker: PUT 0-byte object : duration 0:00.108s | |
22/08/12 14:59:35 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 3 | |
22/08/12 14:59:35 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 3 | |
22/08/12 14:59:40 DEBUG ExecutorMetricsPoller: removing (1, 0) from stageTCMP | |
22/08/12 14:59:40 DEBUG ExecutorMetricsPoller: removing (2, 0) from stageTCMP | |
22/08/12 14:59:40 DEBUG ExecutorMetricsPoller: removing (0, 0) from stageTCMP | |
22/08/12 14:59:41 DEBUG Invoker: retry #3 | |
22/08/12 14:59:41 DEBUG Invoker: Starting: PUT 0-byte object | |
22/08/12 14:59:41 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/ | |
22/08/12 14:59:41 DEBUG S3AFileSystem: PUT start 0 bytes | |
22/08/12 14:59:41 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 4 | |
22/08/12 14:59:41 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: 0ba27d84-0932-2261-ed26-743caea37dfc, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, ) | |
22/08/12 14:59:41 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT | |
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:0ba27d84-0932-2261-ed26-743caea37dfc | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T135941Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 14:59:41 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T135941Z | |
20220812/us-east-1/s3/aws4_request | |
8ce93b1ad5dfae3e1c5d1b82f7b052812c9c6b8a45519b2d8f92afdd8d92299b" | |
22/08/12 14:59:41 DEBUG RequestAddCookies: CookieSpec selected: default | |
22/08/12 14:59:41 DEBUG RequestAuthCache: Auth cache not set in the context | |
22/08/12 14:59:41 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "[read] I/O error: Read timed out" | |
22/08/12 14:59:41 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96] | |
22/08/12 14:59:41 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:41 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000 | |
22/08/12 14:59:41 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:41 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> Host: localhost:30600 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 0ba27d84-0932-2261-ed26-743caea37dfc | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> amz-sdk-request: attempt=1;max=21 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=7fda7b80bd4445fae9ed130fb6ed4ba013d8bd23370982fef657746ee5309a5e | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> Content-Type: application/x-directory | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135941Z | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> x-amz-decoded-content-length: 0 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> Content-Length: 86 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 >> Expect: 100-continue | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 0ba27d84-0932-2261-ed26-743caea37dfc[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: attempt=1;max=21[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=7fda7b80bd4445fae9ed130fb6ed4ba013d8bd23370982fef657746ee5309a5e[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "Content-Type: application/x-directory[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135941Z[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "x-amz-decoded-content-length: 0[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "Content-Length: 86[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "Expect: 100-continue[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "HTTP/1.1 100 Continue[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 << HTTP/1.1 100 Continue | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "0;chunk-signature=a7944f1794fdb454328de34aad3f15ea46d66eae8cad3f7c7917f2a87c3161c4[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 >> "[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "X-Amz-Version-Id: cd27d298c00d4f22876bdb566d8a2704[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:41 GMT[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "Content-Length: 0[\r][\n]" | |
22/08/12 14:59:41 DEBUG wire: http-outgoing-0 << "[\r][\n]" | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8" | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 << X-Amz-Version-Id: cd27d298c00d4f22876bdb566d8a2704 | |
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:41 GMT
22/08/12 14:59:41 DEBUG headers: http-outgoing-0 << Content-Length: 0
22/08/12 14:59:41 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 14:59:41 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 14:59:41 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0
22/08/12 14:59:41 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:41 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:41 GMT
22/08/12 14:59:41 DEBUG request: Received successful response: 200, AWS Request ID: null
22/08/12 14:59:41 DEBUG requestId: x-amzn-RequestId: not available
22/08/12 14:59:41 DEBUG requestId: AWS Request ID: not available
22/08/12 14:59:41 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 12
22/08/12 14:59:41 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.173], ClientExecuteTime=[154.248], HttpClientSendRequestTime=[3.369], HttpRequestTime=[152.356], ApiCallLatency=[153.868], RequestSigningTime=[0.646], CredentialsRequestTime=[0.01, 0.006], HttpClientReceiveResponseTime=[145.988],
22/08/12 14:59:41 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 4
22/08/12 14:59:41 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes
22/08/12 14:59:41 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 4
22/08/12 14:59:41 DEBUG Invoker: PUT 0-byte object : duration 0:00.156s
22/08/12 14:59:41 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 4
22/08/12 14:59:41 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 4
22/08/12 14:59:50 DEBUG Invoker: retry #4
22/08/12 14:59:50 DEBUG Invoker: Starting: PUT 0-byte object
22/08/12 14:59:50 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/
22/08/12 14:59:50 DEBUG S3AFileSystem: PUT start 0 bytes
22/08/12 14:59:50 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 5
22/08/12 14:59:50 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: 590755f8-903e-e278-c8af-3f365ad9caeb, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 14:59:50 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:590755f8-903e-e278-c8af-3f365ad9caeb | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T135950Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 14:59:50 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256
20220812T135950Z | |
20220812/us-east-1/s3/aws4_request | |
aea5dd9ddaaceeaa0781f3db88f4689aad84908d6112ef3231e9e616d39e8555" | |
22/08/12 14:59:50 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 14:59:50 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 14:59:50 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "[read] I/O error: Read timed out"
22/08/12 14:59:50 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 0][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:50 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:50 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 200000
22/08/12 14:59:50 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 14:59:50 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> Host: localhost:30600
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> amz-sdk-invocation-id: 590755f8-903e-e278-c8af-3f365ad9caeb
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> amz-sdk-request: attempt=1;max=21
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> amz-sdk-retry: 0/0/500
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=a13add7cbb3335b65200fc7c1ebd031d861f301481a9e5e704d7a5395aa15463
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> Content-Type: application/x-directory
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> X-Amz-Date: 20220812T135950Z
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> x-amz-decoded-content-length: 0
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> Content-Length: 86
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> Connection: Keep-Alive
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 >> Expect: 100-continue
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "Host: localhost:30600[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "amz-sdk-invocation-id: 590755f8-903e-e278-c8af-3f365ad9caeb[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "amz-sdk-request: attempt=1;max=21[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=a13add7cbb3335b65200fc7c1ebd031d861f301481a9e5e704d7a5395aa15463[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "Content-Type: application/x-directory[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "X-Amz-Date: 20220812T135950Z[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "x-amz-decoded-content-length: 0[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "Content-Length: 86[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "Expect: 100-continue[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "HTTP/1.1 100 Continue[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "[\r][\n]"
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 << HTTP/1.1 100 Continue
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "0;chunk-signature=a606f08d990e2e8ceb59010637b7e91cc1490711aeb1ec082cc90519044385d6[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 >> "[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "X-Amz-Version-Id: 180094cde4514e47b16190a7d0c5e4dd[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "Date: Fri, 12 Aug 2022 13:59:50 GMT[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "Content-Length: 0[\r][\n]"
22/08/12 14:59:50 DEBUG wire: http-outgoing-0 << "[\r][\n]"
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 << HTTP/1.1 200 OK
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 << X-Amz-Version-Id: 180094cde4514e47b16190a7d0c5e4dd
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 << Date: Fri, 12 Aug 2022 13:59:50 GMT
22/08/12 14:59:50 DEBUG headers: http-outgoing-0 << Content-Length: 0
22/08/12 14:59:50 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 14:59:50 DEBUG PoolingHttpClientConnectionManager: Connection [id: 0][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 14:59:50 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: set socket timeout to 0
22/08/12 14:59:50 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 0][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 14:59:50 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 13:59:50 GMT
22/08/12 14:59:50 DEBUG request: Received successful response: 200, AWS Request ID: null
22/08/12 14:59:50 DEBUG requestId: x-amzn-RequestId: not available
22/08/12 14:59:50 DEBUG requestId: AWS Request ID: not available
22/08/12 14:59:50 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 13
22/08/12 14:59:50 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.089], ClientExecuteTime=[116.181], HttpClientSendRequestTime=[3.363], HttpRequestTime=[114.247], ApiCallLatency=[115.529], RequestSigningTime=[0.587], CredentialsRequestTime=[0.011, 0.007], HttpClientReceiveResponseTime=[108.238],
22/08/12 14:59:50 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 5
22/08/12 14:59:50 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes
22/08/12 14:59:50 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 5
22/08/12 14:59:50 DEBUG Invoker: PUT 0-byte object : duration 0:00.117s
22/08/12 14:59:50 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 5
22/08/12 14:59:50 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 5
22/08/12 15:00:02 DEBUG Invoker: retry #5
22/08/12 15:00:02 DEBUG Invoker: Starting: PUT 0-byte object
22/08/12 15:00:02 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/
22/08/12 15:00:02 DEBUG S3AFileSystem: PUT start 0 bytes
22/08/12 15:00:02 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 6
22/08/12 15:00:02 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: 4d9334bc-260c-332d-e891-fbf78c8c087a, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 15:00:02 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:4d9334bc-260c-332d-e891-fbf78c8c087a | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T140002Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 15:00:02 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256
20220812T140002Z | |
20220812/us-east-1/s3/aws4_request | |
0e66d32618069eb076d51687beaf0a0cae44661f6ae55f2d56e23ccbdfc0ef37" | |
22/08/12 15:00:02 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 15:00:02 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 15:00:02 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:00:02 DEBUG wire: http-outgoing-0 << "end of stream"
22/08/12 15:00:02 DEBUG DefaultManagedHttpClientConnection: http-outgoing-0: Close connection
22/08/12 15:00:02 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 1][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:00:02 DEBUG MainClientExec: Opening connection {}->http://localhost:30600
22/08/12 15:00:02 DEBUG DefaultHttpClientConnectionOperator: Connecting to localhost/127.0.0.1:30600
22/08/12 15:00:02 DEBUG DefaultHttpClientConnectionOperator: Connection established 127.0.0.1:59612<->127.0.0.1:30600
22/08/12 15:00:02 DEBUG DefaultManagedHttpClientConnection: http-outgoing-1: set socket timeout to 200000
22/08/12 15:00:02 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 15:00:02 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> Host: localhost:30600
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> amz-sdk-invocation-id: 4d9334bc-260c-332d-e891-fbf78c8c087a
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> amz-sdk-request: attempt=1;max=21
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> amz-sdk-retry: 0/0/500
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=4b30434490f1219342f831e6a688b41d419f2be9519a8826b7d7a8c4eae81688
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> Content-Type: application/x-directory
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> X-Amz-Date: 20220812T140002Z
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> x-amz-decoded-content-length: 0
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> Content-Length: 86
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> Connection: Keep-Alive
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 >> Expect: 100-continue
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "Host: localhost:30600[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "amz-sdk-invocation-id: 4d9334bc-260c-332d-e891-fbf78c8c087a[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "amz-sdk-request: attempt=1;max=21[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=4b30434490f1219342f831e6a688b41d419f2be9519a8826b7d7a8c4eae81688[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "Content-Type: application/x-directory[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "X-Amz-Date: 20220812T140002Z[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "x-amz-decoded-content-length: 0[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "Content-Length: 86[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "Expect: 100-continue[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "HTTP/1.1 100 Continue[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "[\r][\n]"
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 << HTTP/1.1 100 Continue
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "0;chunk-signature=7c9947dc8d87ae1c9e4fe0d93f806bb563787d173faf8b30219b4cc7310a1bf0[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 >> "[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "HTTP/1.1 200 OK[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "X-Amz-Version-Id: a375f8bd03ac4a779164f7ad3000091c[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "Date: Fri, 12 Aug 2022 14:00:02 GMT[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "Content-Length: 0[\r][\n]"
22/08/12 15:00:02 DEBUG wire: http-outgoing-1 << "[\r][\n]"
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 << HTTP/1.1 200 OK
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 << X-Amz-Version-Id: a375f8bd03ac4a779164f7ad3000091c
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 << Date: Fri, 12 Aug 2022 14:00:02 GMT
22/08/12 15:00:02 DEBUG headers: http-outgoing-1 << Content-Length: 0
22/08/12 15:00:02 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 15:00:02 DEBUG PoolingHttpClientConnectionManager: Connection [id: 1][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 15:00:02 DEBUG DefaultManagedHttpClientConnection: http-outgoing-1: set socket timeout to 0
22/08/12 15:00:02 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 1][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:00:02 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 14:00:02 GMT
22/08/12 15:00:02 DEBUG request: Received successful response: 200, AWS Request ID: null
22/08/12 15:00:02 DEBUG requestId: x-amzn-RequestId: not available
22/08/12 15:00:02 DEBUG requestId: AWS Request ID: not available
22/08/12 15:00:02 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 14
22/08/12 15:00:02 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.17], ClientExecuteTime=[163.858], HttpClientSendRequestTime=[7.188], HttpRequestTime=[162.09], ApiCallLatency=[163.488], RequestSigningTime=[0.502], CredentialsRequestTime=[0.02, 0.006], HttpClientReceiveResponseTime=[152.602],
22/08/12 15:00:02 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 6
22/08/12 15:00:02 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes
22/08/12 15:00:02 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 6
22/08/12 15:00:02 DEBUG Invoker: PUT 0-byte object : duration 0:00.165s
22/08/12 15:00:02 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 6
22/08/12 15:00:02 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 6
22/08/12 15:00:27 DEBUG Invoker: retry #6
22/08/12 15:00:27 DEBUG Invoker: Starting: PUT 0-byte object
22/08/12 15:00:27 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/
22/08/12 15:00:27 DEBUG S3AFileSystem: PUT start 0 bytes
22/08/12 15:00:27 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 7
22/08/12 15:00:27 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: 1d4a38c1-7760-b137-44f2-7077c37f318d, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 15:00:27 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:1d4a38c1-7760-b137-44f2-7077c37f318d | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T140027Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 15:00:27 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256
20220812T140027Z | |
20220812/us-east-1/s3/aws4_request | |
4a9866d48fb6d7c13975134d44c2840ec9a60859c8cc6d372f536590c7f1eca5" | |
22/08/12 15:00:27 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 15:00:27 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 15:00:27 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:00:27 DEBUG wire: http-outgoing-1 << "end of stream"
22/08/12 15:00:27 DEBUG DefaultManagedHttpClientConnection: http-outgoing-1: Close connection
22/08/12 15:00:27 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 2][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:00:27 DEBUG MainClientExec: Opening connection {}->http://localhost:30600
22/08/12 15:00:27 DEBUG DefaultHttpClientConnectionOperator: Connecting to localhost/127.0.0.1:30600
22/08/12 15:00:27 DEBUG DefaultHttpClientConnectionOperator: Connection established 127.0.0.1:59614<->127.0.0.1:30600
22/08/12 15:00:27 DEBUG DefaultManagedHttpClientConnection: http-outgoing-2: set socket timeout to 200000
22/08/12 15:00:27 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 15:00:27 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> Host: localhost:30600
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> amz-sdk-invocation-id: 1d4a38c1-7760-b137-44f2-7077c37f318d
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> amz-sdk-request: attempt=1;max=21
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> amz-sdk-retry: 0/0/500
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=b1d926687bee23a4b2c97efb04305eb80c5add7879ff377898f8e90d96ad2e94
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> Content-Type: application/x-directory
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> X-Amz-Date: 20220812T140027Z
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> x-amz-decoded-content-length: 0
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> Content-Length: 86
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> Connection: Keep-Alive
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 >> Expect: 100-continue
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "Host: localhost:30600[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "amz-sdk-invocation-id: 1d4a38c1-7760-b137-44f2-7077c37f318d[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "amz-sdk-request: attempt=1;max=21[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=b1d926687bee23a4b2c97efb04305eb80c5add7879ff377898f8e90d96ad2e94[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "Content-Type: application/x-directory[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "X-Amz-Date: 20220812T140027Z[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "x-amz-decoded-content-length: 0[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "Content-Length: 86[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "Expect: 100-continue[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "HTTP/1.1 100 Continue[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "[\r][\n]"
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 << HTTP/1.1 100 Continue
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "0;chunk-signature=c0147f3a6c77df9d4d6f696e3faad6f43dd6e3c1ece3cbc7167696958942d3a4[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 >> "[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "HTTP/1.1 200 OK[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "X-Amz-Version-Id: 373d53149d204a579377f799ed7c07e9[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "Date: Fri, 12 Aug 2022 14:00:27 GMT[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "Content-Length: 0[\r][\n]"
22/08/12 15:00:27 DEBUG wire: http-outgoing-2 << "[\r][\n]"
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 << HTTP/1.1 200 OK
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 << X-Amz-Version-Id: 373d53149d204a579377f799ed7c07e9
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 << Date: Fri, 12 Aug 2022 14:00:27 GMT
22/08/12 15:00:27 DEBUG headers: http-outgoing-2 << Content-Length: 0
22/08/12 15:00:27 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 15:00:27 DEBUG PoolingHttpClientConnectionManager: Connection [id: 2][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 15:00:27 DEBUG DefaultManagedHttpClientConnection: http-outgoing-2: set socket timeout to 0
22/08/12 15:00:27 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 2][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:00:27 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 14:00:27 GMT
22/08/12 15:00:27 DEBUG request: Received successful response: 200, AWS Request ID: null
22/08/12 15:00:27 DEBUG requestId: x-amzn-RequestId: not available
22/08/12 15:00:27 DEBUG requestId: AWS Request ID: not available
22/08/12 15:00:27 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 15
22/08/12 15:00:27 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.177], ClientExecuteTime=[104.727], HttpClientSendRequestTime=[6.901], HttpRequestTime=[102.951], ApiCallLatency=[104.431], RequestSigningTime=[0.584], CredentialsRequestTime=[0.009, 0.007], HttpClientReceiveResponseTime=[93.884],
22/08/12 15:00:27 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 7
22/08/12 15:00:27 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes
22/08/12 15:00:27 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 7
22/08/12 15:00:27 DEBUG Invoker: PUT 0-byte object : duration 0:00.106s
22/08/12 15:00:27 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_retry by 1 with final value 7
22/08/12 15:00:27 DEBUG IOStatisticsStoreImpl: Incrementing counter ignored_errors by 1 with final value 7
22/08/12 15:00:30 DEBUG PoolingHttpClientConnectionManager: Closing connections idle longer than 60000 MILLISECONDS
22/08/12 15:01:30 DEBUG PoolingHttpClientConnectionManager: Closing connections idle longer than 60000 MILLISECONDS
22/08/12 15:01:30 DEBUG DefaultManagedHttpClientConnection: http-outgoing-2: Close connection
22/08/12 15:01:41 DEBUG Invoker: retry #7
22/08/12 15:01:41 DEBUG Invoker: Starting: PUT 0-byte object
22/08/12 15:01:41 DEBUG S3AFileSystem: PUT 0 bytes to nonemptyprefix3/_temporary/0/
22/08/12 15:01:41 DEBUG S3AFileSystem: PUT start 0 bytes
22/08/12 15:01:41 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request by 1 with final value 8
22/08/12 15:01:41 DEBUG request: Sending Request: PUT http://localhost:30600 /master.rando2/nonemptyprefix3/_temporary/0/ Headers: (amz-sdk-invocation-id: 15cd98ab-6c4f-9c01-e0a7-e7a173b9e838, Content-Length: 0, Content-Type: application/x-directory, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy, )
22/08/12 15:01:41 DEBUG AWS4Signer: AWS4 Canonical Request: '"PUT
/master.rando2/nonemptyprefix3/_temporary/0/ | |
amz-sdk-invocation-id:15cd98ab-6c4f-9c01-e0a7-e7a173b9e838 | |
amz-sdk-request:attempt=1;max=21 | |
amz-sdk-retry:0/0/500 | |
content-length:86 | |
content-type:application/x-directory | |
host:localhost:30600 | |
user-agent:Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy | |
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD | |
x-amz-date:20220812T140141Z | |
x-amz-decoded-content-length:0 | |
amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length | |
STREAMING-AWS4-HMAC-SHA256-PAYLOAD" | |
22/08/12 15:01:41 DEBUG AWS4Signer: AWS4 String to Sign: '"AWS4-HMAC-SHA256 | |
20220812T140141Z | |
20220812/us-east-1/s3/aws4_request | |
5af3dd5167735907ac374f12dc3a9b4828f98a922933c43b1d15bf8e05d5dcc0" | |
22/08/12 15:01:41 DEBUG RequestAddCookies: CookieSpec selected: default
22/08/12 15:01:41 DEBUG RequestAuthCache: Auth cache not set in the context
22/08/12 15:01:41 DEBUG PoolingHttpClientConnectionManager: Connection request: [route: {}->http://localhost:30600][total available: 0; route allocated: 0 of 96; total allocated: 0 of 96]
22/08/12 15:01:41 DEBUG PoolingHttpClientConnectionManager: Connection leased: [id: 3][route: {}->http://localhost:30600][total available: 0; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:01:41 DEBUG MainClientExec: Opening connection {}->http://localhost:30600
22/08/12 15:01:41 DEBUG DefaultHttpClientConnectionOperator: Connecting to localhost/127.0.0.1:30600
22/08/12 15:01:41 DEBUG DefaultHttpClientConnectionOperator: Connection established 127.0.0.1:59616<->127.0.0.1:30600
22/08/12 15:01:41 DEBUG DefaultManagedHttpClientConnection: http-outgoing-3: set socket timeout to 200000
22/08/12 15:01:41 DEBUG MainClientExec: Executing request PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 15:01:41 DEBUG MainClientExec: Proxy auth state: UNCHALLENGED
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> Host: localhost:30600
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> amz-sdk-invocation-id: 15cd98ab-6c4f-9c01-e0a7-e7a173b9e838
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> amz-sdk-request: attempt=1;max=21
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> amz-sdk-retry: 0/0/500
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=4bb7306ccd2eb40c7a848515bd411ba3ba24bfe3501752346966e00347ffee77
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> Content-Type: application/x-directory
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> X-Amz-Date: 20220812T140141Z
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> x-amz-decoded-content-length: 0
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> Content-Length: 86
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> Connection: Keep-Alive
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 >> Expect: 100-continue
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "PUT /master.rando2/nonemptyprefix3/_temporary/0/ HTTP/1.1[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "Host: localhost:30600[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "amz-sdk-invocation-id: 15cd98ab-6c4f-9c01-e0a7-e7a173b9e838[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "amz-sdk-request: attempt=1;max=21[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "amz-sdk-retry: 0/0/500[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "Authorization: AWS4-HMAC-SHA256 Credential=lemon/20220812/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=4bb7306ccd2eb40c7a848515bd411ba3ba24bfe3501752346966e00347ffee77[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "Content-Type: application/x-directory[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "User-Agent: Hadoop 3.3.2, aws-sdk-java/1.12.264 Linux/5.15.0-40-generic OpenJDK_64-Bit_Server_VM/11.0.16+8-post-Ubuntu-0ubuntu122.04 java/11.0.16 scala/2.12.15 vendor/Ubuntu cfg/retry-mode/legacy[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "X-Amz-Date: 20220812T140141Z[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "x-amz-decoded-content-length: 0[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "Content-Length: 86[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "Expect: 100-continue[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "HTTP/1.1 100 Continue[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "[\r][\n]"
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 << HTTP/1.1 100 Continue
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "0;chunk-signature=340c1f2b0da383d8dd4207ce75bc5d7a6d81a2eceb1d680e3a68f9914f2f66f7[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 >> "[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "HTTP/1.1 200 OK[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "X-Amz-Version-Id: 5865ad75b8274fadb8b8d1db161a2a87[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "Date: Fri, 12 Aug 2022 14:01:41 GMT[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "Content-Length: 0[\r][\n]"
22/08/12 15:01:41 DEBUG wire: http-outgoing-3 << "[\r][\n]"
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 << HTTP/1.1 200 OK
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 << Etag: "0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8"
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 << X-Amz-Version-Id: 5865ad75b8274fadb8b8d1db161a2a87
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 << Date: Fri, 12 Aug 2022 14:01:41 GMT
22/08/12 15:01:41 DEBUG headers: http-outgoing-3 << Content-Length: 0
22/08/12 15:01:41 DEBUG MainClientExec: Connection can be kept alive for 60000 MILLISECONDS
22/08/12 15:01:41 DEBUG PoolingHttpClientConnectionManager: Connection [id: 3][route: {}->http://localhost:30600] can be kept alive for 60.0 seconds
22/08/12 15:01:41 DEBUG DefaultManagedHttpClientConnection: http-outgoing-3: set socket timeout to 0
22/08/12 15:01:41 DEBUG PoolingHttpClientConnectionManager: Connection released: [id: 3][route: {}->http://localhost:30600][total available: 1; route allocated: 1 of 96; total allocated: 1 of 96]
22/08/12 15:01:41 DEBUG ClockSkewAdjuster: Reported server date (from 'Date' header): Fri, 12 Aug 2022 14:01:41 GMT
22/08/12 15:01:41 DEBUG request: Received successful response: 200, AWS Request ID: null
22/08/12 15:01:41 DEBUG requestId: x-amzn-RequestId: not available
22/08/12 15:01:41 DEBUG requestId: AWS Request ID: not available
22/08/12 15:01:41 DEBUG IOStatisticsStoreImpl: Incrementing counter store_io_request by 1 with final value 16
22/08/12 15:01:41 DEBUG latency: ServiceName=[Amazon S3], StatusCode=[200], ServiceEndpoint=[http://localhost:30600], RequestType=[PutObjectRequest], AWSRequestID=[null], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=0, RequestCount=1, HttpClientPoolLeasedCount=0, ResponseProcessingTime=[0.107], ClientExecuteTime=[198.661], HttpClientSendRequestTime=[5.938], HttpRequestTime=[197.39], ApiCallLatency=[198.399], RequestSigningTime=[0.407], CredentialsRequestTime=[0.007, 0.005], HttpClientReceiveResponseTime=[188.935],
22/08/12 15:01:41 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request.failures by 1 with final value 8
22/08/12 15:01:41 DEBUG S3AFileSystem: PUT completed success=false; 0 bytes
22/08/12 15:01:41 DEBUG IOStatisticsStoreImpl: Incrementing counter object_put_request_completed by 1 with final value 8
22/08/12 15:01:41 DEBUG Invoker: PUT 0-byte object : duration 0:00.200s
22/08/12 15:01:41 DEBUG IOStatisticsStoreImpl: Incrementing counter op_mkdirs.failures by 1 with final value 1
Traceback (most recent call last):
  File "/home/luke/pp/pachyderm/spark/spark.py", line 45, in <module>
    df.write.parquet('s3a://master.rando2/nonemptyprefix3', mode="overwrite")
  File "/home/luke/pp/pachyderm/venv/lib/python3.10/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1140, in parquet
  File "/home/luke/pp/pachyderm/venv/lib/python3.10/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
  File "/home/luke/pp/pachyderm/venv/lib/python3.10/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 190, in deco
  File "/home/luke/pp/pachyderm/venv/lib/python3.10/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o241.parquet.
: org.apache.hadoop.fs.s3a.AWSClientIOException: PUT 0-byte object on nonemptyprefix3/_temporary/0: com.amazonaws.SdkClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8 in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: null, md5DigestStream: com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@2663c2b4, bucketName: master.rando2, key: nonemptyprefix3/_temporary/0/): Unable to verify integrity of data upload. Client calculated content hash (contentMD5: 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8 in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: null, md5DigestStream: com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@2663c2b4, bucketName: master.rando2, key: nonemptyprefix3/_temporary/0/)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:214)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:119)
	at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:322)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:318)
	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:293)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:4532)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.access$1900(S3AFileSystem.java:259)
	at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.createFakeDirectory(S3AFileSystem.java:3461)
	at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:121)
	at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:45)
	at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3428)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2388)
	at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:356)
	at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:188)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:209)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:116)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:860)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:363)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:793)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.amazonaws.SdkClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8 in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: null, md5DigestStream: com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@2663c2b4, bucketName: master.rando2, key: nonemptyprefix3/_temporary/0/)
	at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1887)
	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1821)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$putObjectDirect$17(S3AFileSystem.java:2877)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfSupplier(IOStatisticsBinding.java:604)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:2874)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$32(S3AFileSystem.java:4534)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
	... 61 more
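Note on the failure above (editor's analysis, not part of the original log): the two hashes in the exception can be checked by hand. The client-side `contentMD5` is the MD5 of a zero-byte payload (the PUT is S3A's empty directory marker for `_temporary/0/`), while the `Etag` returned by the endpoint on `localhost:30600` matches the BLAKE2b-256 digest of the same empty payload. That suggests the S3-compatible backend returns a non-MD5 etag, which trips the AWS SDK's PutObject integrity check on every retry. The sketch below only verifies the two digest values; it does not touch S3:

```python
import base64
import hashlib

empty = b""  # the 0-byte directory-marker payload

# Client side: the SDK base64-encodes the MD5 of the payload.
content_md5 = base64.b64encode(hashlib.md5(empty).digest()).decode()
print(content_md5)  # 1B2M2Y8AsgTpgAmY7PhCfg==  (matches the exception)

# Server side: the returned etag is the BLAKE2b-256 of the payload,
# not an MD5, so the SDK's comparison can never succeed.
etag = hashlib.blake2b(empty, digest_size=32).hexdigest()
print(etag)  # 0e5751c026e543b2e8ab2eb06099daa1d1e5df47778f7787faab45cdf12fe3a8
```

Since the digests disagree by construction, all 8 retries fail identically. A possible workaround (an assumption, not verified against this setup) is the AWS SDK for Java v1 system property `-Dcom.amazonaws.services.s3.disablePutObjectMD5Validation=true`, or having the backend return MD5-shaped etags for non-multipart PUTs.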
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(2)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 2
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 2
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(13)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 13
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 13
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(26)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 26
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 26
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(82)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 82
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 82
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(48)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 48
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 48
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(28)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 28
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 28
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(50)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 50
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 50
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(57)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 57
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 57
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanBroadcast(1)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning broadcast 1
22/08/12 15:01:41 DEBUG TorrentBroadcast: Unpersisting TorrentBroadcast 1
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: removing broadcast 1
22/08/12 15:01:41 DEBUG BlockManager: Removing broadcast 1
22/08/12 15:01:41 DEBUG BlockManager: Removing block broadcast_1
22/08/12 15:01:41 DEBUG MemoryStore: Block broadcast_1 of size 12720 dropped from memory (free 455455830)
22/08/12 15:01:41 DEBUG BlockManager: Removing block broadcast_1_piece0
22/08/12 15:01:41 DEBUG MemoryStore: Block broadcast_1_piece0 of size 6716 dropped from memory (free 455462546)
22/08/12 15:01:41 DEBUG BlockManagerMasterEndpoint: Updating block info on master broadcast_1_piece0 for BlockManagerId(driver, 10.1.255.235, 44159, None)
22/08/12 15:01:41 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.1.255.235:44159 in memory (size: 6.6 KiB, free: 434.4 MiB)
22/08/12 15:01:41 DEBUG BlockManagerMaster: Updated info of block broadcast_1_piece0
22/08/12 15:01:41 DEBUG BlockManager: Told master about block broadcast_1_piece0
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: Done removing broadcast 1, response is 0
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: Sent response: 0 to 10.1.255.235:37009
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned broadcast 1
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(68)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 68
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 68
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(43)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 43
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 43
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(64)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 64
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 64
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(33)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 33
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 33
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(85)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 85
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 85
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(35)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 35
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 35
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(12)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 12
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 12
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(69)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 69
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 69
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(6)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 6
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 6
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(16)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 16
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 16
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(15)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 15
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 15
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(30)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 30
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 30
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(60)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 60
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 60
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(70)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 70
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 70
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(67)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 67
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 67
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(75)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 75
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 75
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(39)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 39
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 39
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(34)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 34
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 34
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(86)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 86
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 86
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(36)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 36
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 36
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(76)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 76
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 76
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(72)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 72
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 72
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(77)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 77
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 77
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(38)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 38
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 38
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(63)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 63
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 63
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(87)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 87
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 87
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(40)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 40
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 40
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(17)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 17
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 17
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(29)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 29
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 29
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(42)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 42
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 42
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(7)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 7
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 7
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(8)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 8
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 8
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(49)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 49
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 49
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(81)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 81
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 81
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(59)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 59
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 59
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(24)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 24
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 24
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(54)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 54
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 54
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(55)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 55
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 55
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(66)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 66
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 66
22/08/12 15:01:41 INFO SparkContext: Invoking stop() from shutdown hook
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(51)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 51
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 51
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(65)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 65
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 65
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(21)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 21
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 21
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(19)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 19
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 19
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(41)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 41
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 41
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(1)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 1
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 1
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(47)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 47
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 47
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(74)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 74
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 74
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(73)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 73
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 73
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(45)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 45
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 45
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(61)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 61
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 61
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(62)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 62
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 62
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(71)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 71
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 71
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(32)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 32
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 32
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(56)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 56
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 56
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(27)
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 27
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 27 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(18) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 18 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 18 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(14) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 14 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 14 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(5) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 5 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 5 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(37) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 37 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 37 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(9) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 9 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 9 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(31) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 31 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 31 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanBroadcast(0) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning broadcast 0 | |
22/08/12 15:01:41 DEBUG TorrentBroadcast: Unpersisting TorrentBroadcast 0 | |
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: removing broadcast 0 | |
22/08/12 15:01:41 DEBUG BlockManager: Removing broadcast 0 | |
22/08/12 15:01:41 DEBUG BlockManager: Removing block broadcast_0_piece0 | |
22/08/12 15:01:41 DEBUG MemoryStore: Block broadcast_0_piece0 of size 6710 dropped from memory (free 455469256) | |
22/08/12 15:01:41 DEBUG BlockManagerMasterEndpoint: Updating block info on master broadcast_0_piece0 for BlockManagerId(driver, 10.1.255.235, 44159, None) | |
22/08/12 15:01:41 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.1.255.235:44159 in memory (size: 6.6 KiB, free: 434.4 MiB) | |
22/08/12 15:01:41 DEBUG BlockManagerMaster: Updated info of block broadcast_0_piece0 | |
22/08/12 15:01:41 DEBUG BlockManager: Told master about block broadcast_0_piece0 | |
22/08/12 15:01:41 DEBUG BlockManager: Removing block broadcast_0 | |
22/08/12 15:01:41 DEBUG MemoryStore: Block broadcast_0 of size 12720 dropped from memory (free 455481976) | |
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: Done removing broadcast 0, response is 0 | |
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: Sent response: 0 to 10.1.255.235:37009 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned broadcast 0 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(52) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 52 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 52 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(22) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 22 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 22 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(44) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 44 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 44 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(58) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 58 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 58 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(80) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 80 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 80 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(23) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 23 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 23 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanBroadcast(2) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning broadcast 2 | |
22/08/12 15:01:41 DEBUG TorrentBroadcast: Unpersisting TorrentBroadcast 2 | |
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: removing broadcast 2 | |
22/08/12 15:01:41 DEBUG BlockManager: Removing broadcast 2 | |
22/08/12 15:01:41 DEBUG BlockManager: Removing block broadcast_2_piece0 | |
22/08/12 15:01:41 DEBUG MemoryStore: Block broadcast_2_piece0 of size 6718 dropped from memory (free 455488694) | |
22/08/12 15:01:41 DEBUG BlockManagerMasterEndpoint: Updating block info on master broadcast_2_piece0 for BlockManagerId(driver, 10.1.255.235, 44159, None) | |
22/08/12 15:01:41 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 10.1.255.235:44159 in memory (size: 6.6 KiB, free: 434.4 MiB) | |
22/08/12 15:01:41 DEBUG BlockManagerMaster: Updated info of block broadcast_2_piece0 | |
22/08/12 15:01:41 DEBUG BlockManager: Told master about block broadcast_2_piece0 | |
22/08/12 15:01:41 DEBUG BlockManager: Removing block broadcast_2 | |
22/08/12 15:01:41 DEBUG MemoryStore: Block broadcast_2 of size 12720 dropped from memory (free 455501414) | |
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: Done removing broadcast 2, response is 0 | |
22/08/12 15:01:41 DEBUG BlockManagerStorageEndpoint: Sent response: 0 to 10.1.255.235:37009 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned broadcast 2 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(78) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 78 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 78 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(10) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 10 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 10 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(20) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 20 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 20 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(84) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 84 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 84 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(46) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 46 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 46 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(4) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 4 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 4 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(53) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 53 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 53 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(79) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 79 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 79 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(25) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 25 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 25 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(11) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 11 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 11 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(83) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 83 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 83 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Got cleaning task CleanAccum(3) | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaning accumulator 3 | |
22/08/12 15:01:41 DEBUG ContextCleaner: Cleaned accumulator 3 | |
22/08/12 15:01:41 INFO SparkUI: Stopped Spark web UI at http://10.1.255.235:4040 | |
22/08/12 15:01:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! | |
22/08/12 15:01:41 INFO MemoryStore: MemoryStore cleared | |
22/08/12 15:01:41 INFO BlockManager: BlockManager stopped | |
22/08/12 15:01:41 INFO BlockManagerMaster: BlockManagerMaster stopped | |
22/08/12 15:01:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! | |
22/08/12 15:01:41 DEBUG PoolThreadCache: Freed 4 thread-local buffer(s) from thread: rpc-server-4-1 | |
22/08/12 15:01:41 INFO SparkContext: Successfully stopped SparkContext | |
22/08/12 15:01:41 INFO ShutdownHookManager: Shutdown hook called | |
22/08/12 15:01:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-d42dc402-653f-4de3-a1e4-071d125a358d | |
22/08/12 15:01:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-1d457905-e782-4bdc-99aa-a824cf000a41 | |
22/08/12 15:01:42 INFO ShutdownHookManager: Deleting directory /tmp/spark-1d457905-e782-4bdc-99aa-a824cf000a41/pyspark-1ee43d5b-cad4-4540-814c-a33a24d3a788 | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Filesystem s3a://master.rando2 is closed | |
22/08/12 15:01:42 DEBUG S3AFileSystem: IOStatistics: counters=((committer_commits_created=0) (object_continue_list_request=0) (object_delete_request=0) (stream_read_bytes_backwards_on_seek=0) (op_delete=0) (store_io_request=16) (multipart_upload_completed=0) (committer_commits_aborted=0) (op_exists.failures=1) (op_mkdirs=1) (s3guard_metadatastore_put_path_request=0) (stream_read_operations=0) (stream_read_bytes_discarded_in_close=0) (stream_read_close_operations=0) (stream_read_seek_policy_changed=0) (stream_write_block_uploads=0) (committer_commits.failures=0) (op_copy_from_local_file.failures=0) (store_exists_probe=0) (multipart_upload_part_put_bytes=0) (op_list_files.failures=0) (files_created=0) (op_rename.failures=0) (stream_write_block_uploads_aborted=0) (action_executor_acquired=0) (op_get_file_status.failures=0) (committer_stage_file_upload=0) (s3guard_metadatastore_authoritative_directories_updated=0) (stream_write_bytes=0) (op_xattr_get_map.failures=0) (object_metadata_request=4) (delegation_tokens_issued=0) (audit_span_creation=0) (stream_read_fully_operations=0) (action_http_head_request.failures=0) (op_xattr_get_map=0) (fake_directories_created=0) (files_deleted=0) (files_copied_bytes=0) (object_bulk_delete_request=0) (committer_magic_files_created=0) (op_open=0) (op_list_status.failures=0) (op_get_file_checksum.failures=0) (committer_commits_completed=0) (op_is_file.failures=0) (op_hsync=0) (stream_read_bytes=0) (op_access.failures=0) (s3guard_metadatastore_retry=0) (s3guard_metadatastore_throttled=0) (object_put_request=8) (multipart_upload_list.failures=0) (op_create_non_recursive=0) (op_rename=0) (s3guard_metadatastore_record_deletes=0) (committer_jobs_completed=0) (op_get_delegation_token=0) (op_list_files=0) (committer_jobs_failed=0) (op_glob_status.failures=0) (multipart_instantiated=0) (committer_commits_reverted=0) (op_is_directory=0) (op_create.failures=0) (stream_read_exceptions=0) (op_copy_from_local_file=0) (stream_read_seek_operations=0) 
(multipart_upload_list=0) (op_get_file_status=0) (files_delete_rejected=0) (op_glob_status=0) (op_xattr_get_named_map.failures=0) (object_put_request.failures=8) (op_xattr_list=0) (stream_read_seek_forward_operations=0) (op_exists=1) (multipart_upload_part_put=0) (stream_write_total_time=0) (store_io_retry=7) (stream_write_block_uploads_committed=0) (stream_read_seek_backward_operations=0) (committer_bytes_uploaded=0) (s3guard_metadatastore_record_reads=0) (object_multipart_initiated=0) (committer_commit_job.failures=0) (action_http_get_request=0) (action_http_head_request=4) (committer_tasks_failed=0) (op_is_file=0) (op_access=0) (stream_read_version_mismatches=0) (committer_tasks_completed=0) (committer_materialize_file.failures=0) (op_abort.failures=0) (stream_write_exceptions=0) (s3guard_metadatastore_record_writes=0) (object_delete_request.failures=0) (object_list_request.failures=0) (op_list_located_status=0) (stream_write_queue_duration=0) (op_abort=0) (object_multipart_aborted=0) (stream_read_bytes_discarded_in_abort=0) (audit_access_check_failure=0) (stream_write_total_data=0) (store_exists_probe.failures=0) (object_bulk_delete_request.failures=0) (stream_write_exceptions_completing_upload=0) (op_get_content_summary=0) (object_list_request=4) (files_copied=0) (op_xattr_list.failures=0) (stream_read_seek_bytes_discarded=0) (stream_write_queue_duration.failures=0) (s3guard_metadatastore_initialization=0) (audit_failure=0) (action_executor_acquired.failures=0) (op_create=0) (object_multipart_aborted.failures=0) (op_xattr_get_named.failures=0) (op_hflush=0) (op_is_directory.failures=0) (fake_directories_deleted=0) (object_multipart_initiated.failures=0) (object_put_request_completed=8) (object_delete_objects=0) (op_mkdirs.failures=1) (stream_read_total_bytes=0) (stream_read_closed=0) (op_delete.failures=0) (multipart_upload_aborted=0) (multipart_upload_abort_under_path_invoked=0) (stream_read_operations_incomplete=0) (directories_deleted=0) 
(object_select_requests=0) (committer_commit_job=0) (op_xattr_get_named_map=0) (audit_request_execution=0) (object_continue_list_request.failures=0) (stream_read_opened=0) (action_http_get_request.failures=0) (stream_aborted=0) (committer_bytes_committed=0) (stream_read_seek_bytes_skipped=0) (multipart_upload_started=0) (delegation_tokens_issued.failures=0) (op_get_delegation_token.failures=0) (object_put_bytes=0) (ignored_errors=7) (op_xattr_get_named=0) (store_io_throttled=0) (object_copy_requests=0) (op_list_status=0) (committer_stage_file_upload.failures=0) (op_get_content_summary.failures=0) (op_get_file_checksum=0) (committer_materialize_file=0) (directories_created=0)); | |
gauges=((client_side_encryption_enabled=0) (object_put_bytes_pending=0) (stream_write_block_uploads_pending=0) (stream_write_block_uploads_data_pending=0) (stream_write_block_uploads_active=0) (object_put_request_active=0)); | |
minimums=((op_get_delegation_token.failures.min=-1) (op_xattr_get_named_map.min=-1) (op_list_files.min=-1) (committer_commit_job.min=-1) (op_get_file_status.min=-1) (op_is_file.min=-1) (action_http_get_request.failures.min=-1) (op_xattr_list.min=-1) (committer_materialize_file.min=-1) (committer_commit_job.failures.min=-1) (op_glob_status.min=-1) (delegation_tokens_issued.min=-1) (object_continue_list_request.min=-1) (op_abort.failures.min=-1) (action_http_head_request.failures.min=-1) (op_get_delegation_token.min=-1) (op_xattr_get_map.failures.min=-1) (op_list_files.failures.min=-1) (op_exists.failures.min=245) (multipart_upload_list.failures.min=-1) (object_put_request.min=-1) (object_multipart_aborted.min=-1) (op_is_file.failures.min=-1) (op_get_file_checksum.failures.min=-1) (stream_write_queue_duration.min=-1) (op_copy_from_local_file.failures.min=-1) (object_put_request.failures.min=106) (action_http_get_request.min=-1) (object_continue_list_request.failures.min=-1) (object_bulk_delete_request.failures.min=-1) (object_list_request.failures.min=-1) (committer_stage_file_upload.failures.min=-1) (object_bulk_delete_request.min=-1) (op_create.min=-1) (committer_materialize_file.failures.min=-1) (op_list_status.min=-1) (op_get_file_status.failures.min=-1) (op_exists.min=-1) (op_xattr_get_named.failures.min=-1) (stream_write_queue_duration.failures.min=-1) (op_is_directory.failures.min=-1) (op_create.failures.min=-1) (store_exists_probe.failures.min=-1) (object_list_request.min=18) (op_access.failures.min=-1) (op_xattr_get_named.min=-1) (object_delete_request.min=-1) (op_glob_status.failures.min=-1) (op_is_directory.min=-1) (op_copy_from_local_file.min=-1) (store_exists_probe.min=-1) (op_list_status.failures.min=-1) (op_rename.min=-1) (op_mkdirs.failures.min=130712) (object_multipart_initiated.min=-1) (object_delete_request.failures.min=-1) (object_multipart_initiated.failures.min=-1) (op_get_content_summary.failures.min=-1) (op_xattr_list.failures.min=-1) 
(op_xattr_get_named_map.failures.min=-1) (op_abort.min=-1) (op_get_file_checksum.min=-1) (op_access.min=-1) (action_executor_acquired.failures.min=-1) (op_delete.min=-1) (op_rename.failures.min=-1) (action_executor_acquired.min=-1) (committer_stage_file_upload.min=-1) (op_xattr_get_map.min=-1) (object_multipart_aborted.failures.min=-1) (delegation_tokens_issued.failures.min=-1) (op_delete.failures.min=-1) (op_get_content_summary.min=-1) (action_http_head_request.min=16) (multipart_upload_list.min=-1) (op_mkdirs.min=-1)); | |
maximums=((multipart_upload_list.max=-1) (op_xattr_list.max=-1) (committer_materialize_file.max=-1) (op_glob_status.failures.max=-1) (committer_commit_job.max=-1) (object_continue_list_request.failures.max=-1) (op_exists.failures.max=245) (op_xattr_get_named.max=-1) (op_glob_status.max=-1) (op_get_file_checksum.failures.max=-1) (object_delete_request.max=-1) (delegation_tokens_issued.failures.max=-1) (object_multipart_initiated.max=-1) (op_is_directory.max=-1) (op_rename.max=-1) (op_get_delegation_token.max=-1) (action_executor_acquired.failures.max=-1) (action_http_get_request.failures.max=-1) (op_delete.max=-1) (op_access.max=-1) (action_http_get_request.max=-1) (action_http_head_request.max=160) (object_put_request.failures.max=205) (op_get_file_status.max=-1) (op_list_files.max=-1) (op_copy_from_local_file.max=-1) (op_list_status.failures.max=-1) (committer_stage_file_upload.failures.max=-1) (stream_write_queue_duration.max=-1) (op_get_content_summary.max=-1) (op_copy_from_local_file.failures.max=-1) (store_exists_probe.max=-1) (op_mkdirs.max=-1) (op_is_file.max=-1) (op_xattr_get_map.failures.max=-1) (op_xattr_get_named_map.max=-1) (op_is_directory.failures.max=-1) (action_executor_acquired.max=-1) (committer_commit_job.failures.max=-1) (op_create.failures.max=-1) (op_xattr_get_named.failures.max=-1) (op_abort.max=-1) (object_bulk_delete_request.failures.max=-1) (op_get_delegation_token.failures.max=-1) (stream_write_queue_duration.failures.max=-1) (op_list_status.max=-1) (multipart_upload_list.failures.max=-1) (op_abort.failures.max=-1) (op_delete.failures.max=-1) (op_is_file.failures.max=-1) (object_list_request.failures.max=-1) (object_multipart_aborted.failures.max=-1) (op_list_files.failures.max=-1) (committer_stage_file_upload.max=-1) (object_put_request.max=-1) (op_get_content_summary.failures.max=-1) (op_xattr_get_named_map.failures.max=-1) (committer_materialize_file.failures.max=-1) (op_xattr_get_map.max=-1) (object_list_request.max=74) 
(op_mkdirs.failures.max=130712) (op_rename.failures.max=-1) (object_multipart_aborted.max=-1) (object_multipart_initiated.failures.max=-1) (op_xattr_list.failures.max=-1) (op_exists.max=-1) (action_http_head_request.failures.max=-1) (op_access.failures.max=-1) (store_exists_probe.failures.max=-1) (op_get_file_checksum.max=-1) (op_create.max=-1) (object_continue_list_request.max=-1) (object_delete_request.failures.max=-1) (op_get_file_status.failures.max=-1) (object_bulk_delete_request.max=-1) (delegation_tokens_issued.max=-1)); | |
means=((op_mkdirs.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_materialize_file.mean=(samples=0, sum=0, mean=0.0000)) (op_create.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.mean=(samples=0, sum=0, mean=0.0000)) (op_is_file.mean=(samples=0, sum=0, mean=0.0000)) (op_glob_status.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_commit_job.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_directory.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_list.mean=(samples=0, sum=0, mean=0.0000)) (op_access.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_bulk_delete_request.mean=(samples=0, sum=0, mean=0.0000)) (op_list_files.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_copy_from_local_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.failures.mean=(samples=8, sum=1214, mean=151.7500)) (op_get_content_summary.mean=(samples=0, sum=0, mean=0.0000)) (committer_stage_file_upload.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named_map.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.mean=(samples=0, sum=0, mean=0.0000)) (op_rename.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_checksum.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_write_queue_duration.mean=(samples=0, sum=0, mean=0.0000)) (op_exists.failures.mean=(samples=1, sum=245, mean=245.0000)) (committer_materialize_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_delete_request.mean=(samples=0, sum=0, mean=0.0000)) (store_exists_probe.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.failures.mean=(samples=0, sum=0, mean=0.0000)) 
(object_bulk_delete_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (delegation_tokens_issued.mean=(samples=0, sum=0, mean=0.0000)) (delegation_tokens_issued.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_commit_job.mean=(samples=0, sum=0, mean=0.0000)) (op_create.mean=(samples=0, sum=0, mean=0.0000)) (op_list_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_map.failures.mean=(samples=0, sum=0, mean=0.0000)) (store_exists_probe.mean=(samples=0, sum=0, mean=0.0000)) (stream_write_queue_duration.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_list_files.mean=(samples=0, sum=0, mean=0.0000)) (action_http_head_request.mean=(samples=4, sum=211, mean=52.7500)) (op_delete.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.mean=(samples=0, sum=0, mean=0.0000)) (op_copy_from_local_file.mean=(samples=0, sum=0, mean=0.0000)) (op_is_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_mkdirs.failures.mean=(samples=1, sum=130712, mean=130712.0000)) (op_list_status.mean=(samples=0, sum=0, mean=0.0000)) (op_rename.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_stage_file_upload.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.mean=(samples=0, sum=0, mean=0.0000)) (op_exists.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.mean=(samples=0, sum=0, mean=0.0000)) (op_glob_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_directory.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_head_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_delete_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named_map.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named.mean=(samples=0, sum=0, mean=0.0000)) 
(op_get_file_checksum.mean=(samples=0, sum=0, mean=0.0000)) (op_get_content_summary.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_status.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_list.mean=(samples=0, sum=0, mean=0.0000)) (op_delete.mean=(samples=0, sum=0, mean=0.0000)) (op_get_delegation_token.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.mean=(samples=4, sum=134, mean=33.5000)) (op_xattr_get_map.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_access.mean=(samples=0, sum=0, mean=0.0000)) (op_get_delegation_token.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000))); | |
22/08/12 15:01:42 DEBUG FileSystem: FileSystem.close() by method: org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:4069)); Key: (luke (auth:SIMPLE))@s3a://master.rando2; URI: s3a://master.rando2; Object Identity Hash: 11376190 | |
22/08/12 15:01:42 DEBUG PoolingHttpClientConnectionManager: Connection manager is shutting down | |
22/08/12 15:01:42 DEBUG IdleConnectionReaper: Reaper thread: | |
java.lang.InterruptedException: sleep interrupted | |
at java.base/java.lang.Thread.sleep(Native Method) | |
at com.amazonaws.http.IdleConnectionReaper.run(IdleConnectionReaper.java:188) | |
22/08/12 15:01:42 DEBUG IdleConnectionReaper: Shutting down reaper thread. | |
22/08/12 15:01:42 DEBUG DefaultManagedHttpClientConnection: http-outgoing-3: Close connection | |
22/08/12 15:01:42 DEBUG PoolingHttpClientConnectionManager: Connection manager shut down | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Gracefully shutting down executor service. Waiting max 30 SECONDS | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Succesfully shutdown executor service | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Gracefully shutting down executor service. Waiting max 30 SECONDS | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Succesfully shutdown executor service | |
22/08/12 15:01:42 DEBUG MBeans: Unregistering Hadoop:service=s3a-file-system,name=S3AMetrics1-master.rando2 | |
22/08/12 15:01:42 DEBUG S3AInstrumentation: Shutting down metrics publisher | |
22/08/12 15:01:42 DEBUG MetricsSystemImpl: refCount=1 | |
22/08/12 15:01:42 INFO MetricsSystemImpl: Stopping s3a-file-system metrics system... | |
22/08/12 15:01:42 DEBUG MBeans: Unregistering Hadoop:service=s3a-file-system,name=MetricsSystem,sub=Stats | |
22/08/12 15:01:42 INFO MetricsSystemImpl: s3a-file-system metrics system stopped. | |
22/08/12 15:01:42 DEBUG MBeans: Unregistering Hadoop:service=s3a-file-system,name=MetricsSystem,sub=Control | |
22/08/12 15:01:42 INFO MetricsSystemImpl: s3a-file-system metrics system shutdown complete. | |
22/08/12 15:01:42 DEBUG SignerManager: Unregistering fs from 0 initializers | |
22/08/12 15:01:42 DEBUG AbstractService: Service: NoopAuditManagerS3A entered state STOPPED | |
22/08/12 15:01:42 DEBUG CompositeService: NoopAuditManagerS3A: stopping services, size=1 | |
22/08/12 15:01:42 DEBUG CompositeService: Stopping service #0: Service NoopAuditor in state NoopAuditor: STARTED | |
22/08/12 15:01:42 DEBUG AbstractService: Service: NoopAuditor entered state STOPPED | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Closing AWSCredentialProviderList[refcount= 1: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@4a29569e] last provider: SimpleAWSCredentialsProvider | |
22/08/12 15:01:42 DEBUG AWSCredentialProviderList: Closing AWSCredentialProviderList[refcount= 0: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@4a29569e] last provider: SimpleAWSCredentialsProvider | |
22/08/12 15:01:42 DEBUG S3AFileSystem: Statistics for s3a://master.rando2: counters=((action_http_head_request=4) | |
(ignored_errors=7) | |
(object_list_request=4) | |
(object_metadata_request=4) | |
(object_put_request=8) | |
(object_put_request.failures=8) | |
(object_put_request_completed=8) | |
(op_exists=1) | |
(op_exists.failures=1) | |
(op_mkdirs=1) | |
(op_mkdirs.failures=1) | |
(store_io_request=16) | |
(store_io_retry=7)); | |
gauges=(); | |
minimums=((action_http_head_request.min=16) | |
(object_list_request.min=18) | |
(object_put_request.failures.min=106) | |
(op_exists.failures.min=245) | |
(op_mkdirs.failures.min=130712)); | |
maximums=((action_http_head_request.max=160) | |
(object_list_request.max=74) | |
(object_put_request.failures.max=205) | |
(op_exists.failures.max=245) | |
(op_mkdirs.failures.max=130712)); | |
means=((action_http_head_request.mean=(samples=4, sum=211, mean=52.7500)) | |
(object_list_request.mean=(samples=4, sum=134, mean=33.5000)) | |
(object_put_request.failures.mean=(samples=8, sum=1214, mean=151.7500)) | |
(op_exists.failures.mean=(samples=1, sum=245, mean=245.0000)) | |
(op_mkdirs.failures.mean=(samples=1, sum=130712, mean=130712.0000))); | |
22/08/12 15:01:42 DEBUG FileSystem: FileSystem.close() by method: org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:529)); Key: (luke (auth:SIMPLE))@file://; URI: file:///; Object Identity Hash: 2822281f | |
22/08/12 15:01:42 DEBUG FileSystem: FileSystem.close() by method: org.apache.hadoop.fs.RawLocalFileSystem.close(RawLocalFileSystem.java:759)); Key: null; URI: file:///; Object Identity Hash: 27c5fb55 | |
22/08/12 15:01:42 DEBUG ShutdownHookManager: Completed shutdown in 0.108 seconds; Timeouts: 0 | |
22/08/12 15:01:42 DEBUG ShutdownHookManager: ShutdownHookManager completed shutdown. |
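A note on the statistics summary above: all 8 `object_put_request` attempts failed (`object_put_request.failures=8`), `op_exists` and `op_mkdirs` each failed once, and the single `mkdirs` failure took ~130s (`op_mkdirs.failures.min=130712` ms), consistent with repeated retries (`store_io_retry=7`). The log does not state the root cause, but one plausible hypothesis (an assumption, not confirmed here) is that the dot in the bucket name `master.rando2` breaks virtual-hosted-style TLS certificate matching; forcing S3A path-style access and capping retries are common mitigations. A minimal sketch of those overrides, using real Hadoop S3A property names but illustrative values:

```python
# Hypothetical mitigation sketch. Assumptions: the PUT failures stem from the
# dotted bucket name; the chosen values are illustrative, not prescriptive.
s3a_overrides = {
    # Dotted bucket names ("master.rando2") can break virtual-hosted-style
    # TLS wildcard certificates; path-style requests sidestep the mismatch.
    "spark.hadoop.fs.s3a.path.style.access": "true",
    # Cap S3A retry attempts so a failing mkdirs does not block for ~130s,
    # as seen in op_mkdirs.failures.min=130712 above.
    "spark.hadoop.fs.s3a.attempts.maximum": "3",
}

def spark_submit_conf_args(overrides):
    """Render a dict of overrides as spark-submit --conf arguments."""
    return [arg for key, value in sorted(overrides.items())
            for arg in ("--conf", f"{key}={value}")]
```

For example, `spark_submit_conf_args(s3a_overrides)` yields the `--conf key=value` pairs to append to a `spark-submit spark.py` invocation; the same keys can be set via `SparkSession.builder.config(...)`.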