Created May 7, 2020 19:29
hadoop-hdfs-namenode-ip-10-0-4-12.log
This file has been truncated.
2020-05-07 09:31:41,054 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.8.2.10-SNAPSHOT
STARTUP_MSG: classpath = /srv/hops/hadoop/etc/hadoop:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar::.:/srv/hops/hadoop/share/hadoop/yarn/test/*:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/test/*:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-datajoin-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-distcp-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archive-logs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jmespath-java-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-databind-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archives-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-aws-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-ant-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-kms-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-annotations-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-sls-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-rumen-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-core-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-gridmix-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/tools/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/joda-time-2.9.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-extras-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-dataformat-cbor-2.6.7.jar:/srv/hops/hadoop/share/hadoop/tools/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-io-2.4.jar:/srv/hops
/hadoop/share/hadoop/tools/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/ion-java-1.0.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/metrics-core-3.0.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-s3-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share
/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar | |
STARTUP_MSG: build = git@github.com:hopshadoop/hops.git -r 5bb94b87c4e62d91d17f97533ed018e07cf3f8bc; compiled by 'jenkins' on 2020-04-27T06:57Z
STARTUP_MSG: java = 1.8.0_252
************************************************************/
2020-05-07 09:31:41,062 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-05-07 09:31:41,065 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-05-07 09:31:41,223 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2020-05-07 09:31:41,261 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-05-07 09:31:41,262 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-05-07 09:31:41,312 WARN org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2020-05-07 09:31:41,386 INFO io.hops.resolvingcache.Cache: starting Resolving Cache [InMemoryCache]
2020-05-07 09:31:41,418 INFO io.hops.metadata.ndb.ClusterjConnector: Database connect string: 10.0.4.12:1186
2020-05-07 09:31:41,418 INFO io.hops.metadata.ndb.ClusterjConnector: Database name: hops
2020-05-07 09:31:41,418 INFO io.hops.metadata.ndb.ClusterjConnector: Max Transactions: 1024
2020-05-07 09:31:42,477 INFO io.hops.security.UsersGroups: UsersGroups Initialized.
2020-05-07 09:31:42,632 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-05-07 09:31:42,688 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-05-07 09:31:42,694 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-05-07 09:31:42,700 INFO org.apache.hadoop.http.HttpServer3: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer3$QuotingInputFilter)
2020-05-07 09:31:42,702 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-05-07 09:31:42,702 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-05-07 09:31:42,703 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-05-07 09:31:42,723 INFO org.apache.hadoop.http.HttpServer3: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-05-07 09:31:42,725 INFO org.apache.hadoop.http.HttpServer3: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-05-07 09:31:42,729 INFO org.apache.hadoop.http.HttpServer3: Jetty bound to port 50070
2020-05-07 09:31:42,729 INFO org.mortbay.log: jetty-6.1.26
2020-05-07 09:31:42,861 INFO org.mortbay.log: Started HttpServer3$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2020-05-07 09:31:42,886 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-05-07 09:31:42,988 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-05-07 09:31:42,988 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-05-07 09:31:42,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-05-07 09:31:42,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 May 07 09:31:42
2020-05-07 09:31:42,995 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 50
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerBatchSize = 500
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: misReplicatedNoOfBatchs = 20
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerNbOfBatchs = 20
2020-05-07 09:31:43,203 INFO com.zaxxer.hikari.HikariDataSource: HikariCP pool HikariPool-0 is starting.
2020-05-07 09:31:43,458 WARN io.hops.common.IDsGeneratorFactory: Called setConfiguration more than once.
2020-05-07 09:31:43,461 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2020-05-07 09:31:43,461 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: superGroup = hdfs
2020-05-07 09:31:43,461 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-05-07 09:31:43,462 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-05-07 09:31:43,510 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Added new root inode
2020-05-07 09:31:43,510 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 13755
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: The maximum number of xattrs per inode is set to 32
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2020-05-07 09:31:43,516 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-05-07 09:31:43,516 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-05-07 09:31:43,516 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-05-07 09:31:43,518 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2020-05-07 09:31:43,518 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-05-07 09:31:43,528 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-05-07 09:31:43,629 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to 0.0.0.0:8020
2020-05-07 09:31:43,634 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 12000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-07 09:31:43,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-05-07 09:31:43,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port 8020
2020-05-07 09:31:43,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port 8020
2020-05-07 09:31:43,755 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2020-05-07 09:31:43,766 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 I can be the leader but I have weak locks. Retry with stronger lock
2020-05-07 09:31:43,766 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 periodic update. Stronger locks requested in next round
2020-05-07 09:31:43,768 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 I am the new LEADER.
2020-05-07 09:31:43,866 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-05-07 09:31:44,888 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:31:44,893 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-05-07 09:31:44,893 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-05-07 09:31:44,893 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-05-07 09:31:44,900 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-05-07 09:31:44,908 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 2 secs
2020-05-07 09:31:44,910 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-05-07 09:31:44,911 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-05-07 09:31:44,911 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:31:44,918 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:31:44,948 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-05-07 09:31:44,948 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-05-07 09:31:44,981 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Leader Node RPC up at: ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12:8020
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Catching up to latest edits from old active before taking over writer role in edits logs
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Marking all datanodes as stale
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Reprocessing replication and invalidation queues
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-05-07 09:31:44,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-05-07 09:31:45,007 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: processMisReplicated read 0/10000 in the Ids range [0 - 10000] (max inodeId when the process started: 1)
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 32 msec
2020-05-07 09:31:45,497 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:31:45,498 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:31:48,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:31:58,688 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:08,698 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:18,680 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:28,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:38,690 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:58,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:02,470 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2020-05-07 09:33:02,475 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12
************************************************************/
2020-05-07 09:33:04,181 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.8.2.10-SNAPSHOT
STARTUP_MSG: classpath = /srv/hops/hadoop/etc/hadoop:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/h
adoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar
:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-
cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.1
0-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/
lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-cl
ient-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce
-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar::.:/srv/hops/hadoop/share/hadoop/yarn/test/*:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hado
op/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/
yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-ma
preduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/ha
doop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/test/*:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hado
op/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop
/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share
/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-datajoin-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-distcp-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jettison-1.1.jar:/srv/hop
s/hadoop/share/hadoop/tools/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archive-logs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jmespath-java-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-databind-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archives-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-aws-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-ant-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-kms-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-annotations-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-sls-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpclient-4.5.2.jar:/srv/hadoo
p/share/hadoop/tools/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-rumen-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-core-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-gridmix-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/tools/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/joda-time-2.9.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-extras-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-dataformat-cbor-2.6.7.jar:/srv/hops/hadoop/share/hadoop/tools/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-io-2.4.jar:/srv/hops
/hadoop/share/hadoop/tools/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/ion-java-1.0.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/metrics-core-3.0.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-s3-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share
/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar
STARTUP_MSG: build = git@github.com:hopshadoop/hops.git -r 5bb94b87c4e62d91d17f97533ed018e07cf3f8bc; compiled by 'jenkins' on 2020-04-27T06:57Z
STARTUP_MSG: java = 1.8.0_252
************************************************************/
2020-05-07 09:33:04,189 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-05-07 09:33:04,191 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-05-07 09:33:04,325 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2020-05-07 09:33:04,352 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-05-07 09:33:04,352 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-05-07 09:33:04,408 WARN org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2020-05-07 09:33:04,490 INFO io.hops.resolvingcache.Cache: starting Resolving Cache [InMemoryCache]
2020-05-07 09:33:04,523 INFO io.hops.metadata.ndb.ClusterjConnector: Database connect string: 10.0.4.12:1186
2020-05-07 09:33:04,523 INFO io.hops.metadata.ndb.ClusterjConnector: Database name: hops
2020-05-07 09:33:04,524 INFO io.hops.metadata.ndb.ClusterjConnector: Max Transactions: 1024
2020-05-07 09:33:05,589 INFO io.hops.security.UsersGroups: UsersGroups Initialized.
2020-05-07 09:33:05,685 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-05-07 09:33:05,732 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-05-07 09:33:05,737 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-05-07 09:33:05,741 INFO org.apache.hadoop.http.HttpServer3: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer3$QuotingInputFilter)
2020-05-07 09:33:05,743 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-05-07 09:33:05,743 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-05-07 09:33:05,743 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-05-07 09:33:05,758 INFO org.apache.hadoop.http.HttpServer3: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-05-07 09:33:05,759 INFO org.apache.hadoop.http.HttpServer3: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-05-07 09:33:05,762 INFO org.apache.hadoop.http.HttpServer3: Jetty bound to port 50070
2020-05-07 09:33:05,762 INFO org.mortbay.log: jetty-6.1.26
2020-05-07 09:33:05,863 INFO org.mortbay.log: Started HttpServer3$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2020-05-07 09:33:05,883 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-05-07 09:33:05,961 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-05-07 09:33:05,961 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-05-07 09:33:05,963 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-05-07 09:33:05,963 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 May 07 09:33:05
2020-05-07 09:33:05,967 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 50
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerBatchSize = 500
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: misReplicatedNoOfBatchs = 20
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerNbOfBatchs = 20
2020-05-07 09:33:06,135 INFO com.zaxxer.hikari.HikariDataSource: HikariCP pool HikariPool-0 is starting.
2020-05-07 09:33:06,371 WARN io.hops.common.IDsGeneratorFactory: Called setConfiguration more than once.
2020-05-07 09:33:06,374 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2020-05-07 09:33:06,374 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: superGroup = hdfs
2020-05-07 09:33:06,374 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-05-07 09:33:06,375 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-05-07 09:33:06,435 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2020-05-07 09:33:06,435 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-05-07 09:33:06,436 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 13755
2020-05-07 09:33:06,436 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: The maximum number of xattrs per inode is set to 32
2020-05-07 09:33:06,436 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2020-05-07 09:33:06,443 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-05-07 09:33:06,444 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-05-07 09:33:06,444 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-05-07 09:33:06,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2020-05-07 09:33:06,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-05-07 09:33:06,457 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-05-07 09:33:06,560 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to 0.0.0.0:8020
2020-05-07 09:33:06,564 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 12000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-07 09:33:06,573 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-05-07 09:33:06,573 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port 8020
2020-05-07 09:33:06,573 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port 8020
2020-05-07 09:33:06,687 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2020-05-07 09:33:06,701 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I am a NON_LEADER process
2020-05-07 09:33:08,715 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I can be the leader but I have weak locks. Retry with stronger lock
2020-05-07 09:33:08,716 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 periodic update. Stronger locks requested in next round
2020-05-07 09:33:08,718 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I am the new LEADER.
2020-05-07 09:33:08,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-05-07 09:33:09,825 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:33:09,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-05-07 09:33:09,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-05-07 09:33:09,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-05-07 09:33:09,837 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-05-07 09:33:09,845 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 3 secs
2020-05-07 09:33:09,847 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-05-07 09:33:09,848 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-05-07 09:33:09,848 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:33:09,855 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:09,895 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-05-07 09:33:09,895 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-05-07 09:33:10,027 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Leader Node RPC up at: ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12:8020
2020-05-07 09:33:10,028 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-05-07 09:33:10,028 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Catching up to latest edits from old active before taking over writer role in edits logs
2020-05-07 09:33:10,028 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Marking all datanodes as stale
2020-05-07 09:33:10,029 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Reprocessing replication and invalidation queues
2020-05-07 09:33:10,029 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-05-07 09:33:10,047 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-05-07 09:33:10,071 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: processMisReplicated read 0/10000 in the Ids range [0 - 10000] (max inodeId when the process started: 7)
2020-05-07 09:33:10,080 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 46 msec
2020-05-07 09:33:10,620 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:33:10,620 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:33:18,711 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:21,863 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage a7438e0b-c413-4d38-888d-ab4392b95d31
2020-05-07 09:33:21,864 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:21,864 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.0.4.12:50010
2020-05-07 09:33:21,918 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 0 blocks is assigned to NN [ID: 2, IP: 10.0.4.12]
2020-05-07 09:33:21,921 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:22,246 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 5000, hasStaleStorages: true, processing time: 219 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,0,5000,0,0,0,0,0,0)
2020-05-07 09:33:22,250 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed
2020-05-07 09:33:28,684 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:30,165 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage a7438e0b-c413-4d38-888d-ab4392b95d31
2020-05-07 09:33:30,165 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/10.0.4.12:50010
2020-05-07 09:33:30,165 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.0.4.12:50010
2020-05-07 09:33:30,194 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:30,205 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 0 blocks is assigned to NN [ID: 2, IP: 10.0.4.12]
2020-05-07 09:33:30,411 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 5000, hasStaleStorages: false, processing time: 147 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,0,5000,0,0,0,0,0,0)
2020-05-07 09:33:30,417 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed
2020-05-07 09:33:38,687 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:48,747 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:58,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:08,763 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:18,683 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:28,725 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:38,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:48,710 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:38,682 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:18,777 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10000 State = UNDER_CONSTRUCTION for /apps/tez/apache-tez-0.9.1.2.tar.gz._COPYING_
2020-05-07 09:36:18,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10000 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /apps/tez/apache-tez-0.9.1.2.tar.gz._COPYING_
2020-05-07 09:36:18,989 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10000 State = COMMITTED size 13935245 byte
2020-05-07 09:36:19,392 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/tez/apache-tez-0.9.1.2.tar.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1908209306_1
2020-05-07 09:36:24,632 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/apps/tez/apache-tez-0.9.1.2.tar.gz"
2020-05-07 09:36:28,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:48,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:16,669 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10001 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/census/adult.data._COPYING_
2020-05-07 09:37:16,788 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10001 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/census/adult.data._COPYING_
2020-05-07 09:37:16,793 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10001 State = COMMITTED size 3974305 byte
2020-05-07 09:37:17,195 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/census/adult.data._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:17,256 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10002 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/census/adult.test._COPYING_
2020-05-07 09:37:17,271 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10002 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/census/adult.test._COPYING_
2020-05-07 09:37:17,275 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10002 State = COMMITTED size 2003153 byte
2020-05-07 09:37:17,676 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/census/adult.test._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:17,718 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10003 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/iris/iris.csv._COPYING_
2020-05-07 09:37:17,727 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10003 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/iris/iris.csv._COPYING_
2020-05-07 09:37:17,731 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10003 State = COMMITTED size 3966 byte
2020-05-07 09:37:18,133 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/iris/iris.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:18,156 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10004 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/iris/iris_knn.pkl._COPYING_
2020-05-07 09:37:18,165 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10004 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/iris/iris_knn.pkl._COPYING_
2020-05-07 09:37:18,168 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10004 State = COMMITTED size 14121 byte
2020-05-07 09:37:18,570 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/iris/iris_knn.pkl._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:18,623 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10005 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/test.pt._COPYING_
2020-05-07 09:37:18,653 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10005 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/test.pt._COPYING_
2020-05-07 09:37:18,658 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10005 State = COMMITTED size 7920381 byte
2020-05-07 09:37:18,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:19,060 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/test.pt._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:19,087 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10006 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/training.pt._COPYING_
2020-05-07 09:37:19,199 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10006 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/training.pt._COPYING_
2020-05-07 09:37:19,203 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10006 State = COMMITTED size 47520385 byte
2020-05-07 09:37:19,604 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/training.pt._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:19,637 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10007 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:19,649 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10007 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:19,653 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10007 State = COMMITTED size 1648877 byte
2020-05-07 09:37:20,055 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-images-idx3-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:20,089 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10008 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10008 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,105 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10008 State = COMMITTED size 4542 byte
2020-05-07 09:37:20,506 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-labels-idx1-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:20,530 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10009 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:20,555 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10009 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:20,558 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10009 State = COMMITTED size 9912422 byte
2020-05-07 09:37:20,960 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-images-idx3-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:20,983 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10010 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,991 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10010 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,995 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10010 State = COMMITTED size 28881 byte
2020-05-07 09:37:21,396 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-labels-idx1-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:21,444 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10011 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/1/saved_model.pb._COPYING_
2020-05-07 09:37:21,454 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10011 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/1/saved_model.pb._COPYING_
2020-05-07 09:37:21,460 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10011 State = COMMITTED size 19060 byte
2020-05-07 09:37:21,859 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/1/saved_model.pb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:21,893 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10012 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:21,902 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10012 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:21,905 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10012 State = COMMITTED size 31400 byte
2020-05-07 09:37:22,307 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.data-00000-of-00001._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:22,332 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10013 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.index._COPYING_
2020-05-07 09:37:22,340 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10013 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.index._COPYING_
2020-05-07 09:37:22,344 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10013 State = COMMITTED size 159 byte
2020-05-07 09:37:22,745 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.index._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:22,780 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10014 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/2/saved_model.pb._COPYING_
2020-05-07 09:37:22,788 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10014 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/2/saved_model.pb._COPYING_
2020-05-07 09:37:22,793 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10014 State = COMMITTED size 19060 byte
2020-05-07 09:37:23,194 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/2/saved_model.pb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:23,230 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10015 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:23,239 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10015 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:23,244 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10015 State = COMMITTED size 31400 byte
2020-05-07 09:37:23,644 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.data-00000-of-00001._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:23,667 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10016 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.index._COPYING_
2020-05-07 09:37:23,676 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10016 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.index._COPYING_
2020-05-07 09:37:23,680 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10016 State = COMMITTED size 159 byte
2020-05-07 09:37:24,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.index._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:24,118 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10017 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/train/train.tfrecords._COPYING_
2020-05-07 09:37:24,211 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10017 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/train/train.tfrecords._COPYING_
2020-05-07 09:37:24,215 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10017 State = COMMITTED size 49005000 byte
2020-05-07 09:37:24,616 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/train/train.tfrecords._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:24,655 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10018 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/validation/validation.tfrecords._COPYING_
2020-05-07 09:37:24,671 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10018 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/validation/validation.tfrecords._COPYING_
2020-05-07 09:37:24,675 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10018 State = COMMITTED size 4455000 byte
2020-05-07 09:37:25,077 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/validation/validation.tfrecords._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:25,114 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10019 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/numpy/C_test.npy._COPYING_
2020-05-07 09:37:25,137 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10019 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/numpy/C_test.npy._COPYING_
2020-05-07 09:37:25,143 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10019 State = COMMITTED size 3072128 byte
2020-05-07 09:37:25,543 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/numpy/C_test.npy._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:25,575 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10020 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/visualization/Pokemon.csv._COPYING_
2020-05-07 09:37:25,583 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10020 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/visualization/Pokemon.csv._COPYING_
2020-05-07 09:37:25,587 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10020 State = COMMITTED size 44028 byte
2020-05-07 09:37:25,988 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/visualization/Pokemon.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:26,025 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10021 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Benchmarks/benchmark.ipynb._COPYING_
2020-05-07 09:37:26,032 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10021 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Benchmarks/benchmark.ipynb._COPYING_
2020-05-07 09:37:26,036 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10021 State = COMMITTED size 8181 byte
2020-05-07 09:37:26,437 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Benchmarks/benchmark.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:26,471 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10022 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/keras.ipynb._COPYING_
2020-05-07 09:37:26,478 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10022 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/keras.ipynb._COPYING_
2020-05-07 09:37:26,481 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10022 State = COMMITTED size 7802 byte
2020-05-07 09:37:26,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/keras.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:26,902 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10023 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/mnist.ipynb._COPYING_
2020-05-07 09:37:26,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10023 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/mnist.ipynb._COPYING_
2020-05-07 09:37:26,913 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10023 State = COMMITTED size 17026 byte
2020-05-07 09:37:27,313 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:27,340 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10024 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/keras.ipynb._COPYING_
2020-05-07 09:37:27,347 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10024 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/keras.ipynb._COPYING_ | |
2020-05-07 09:37:27,350 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10024 State = COMMITTED size 7388 byte | |
2020-05-07 09:37:27,751 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/keras.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:27,772 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10025 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:27,780 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10025 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:27,784 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10025 State = COMMITTED size 15542 byte | |
2020-05-07 09:37:28,184 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:28,213 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10026 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/keras.ipynb._COPYING_ | |
2020-05-07 09:37:28,220 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10026 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/keras.ipynb._COPYING_ | |
2020-05-07 09:37:28,223 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10026 State = COMMITTED size 7698 byte | |
2020-05-07 09:37:28,624 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/keras.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:28,643 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10027 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:28,650 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10027 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:28,654 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10027 State = COMMITTED size 16778 byte | |
2020-05-07 09:37:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 09:37:29,055 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:29,088 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10028 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/IrisClassification_And_Serving_SKLearn.ipynb._COPYING_ | |
2020-05-07 09:37:29,095 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10028 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/IrisClassification_And_Serving_SKLearn.ipynb._COPYING_ | |
2020-05-07 09:37:29,099 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10028 State = COMMITTED size 18622 byte | |
2020-05-07 09:37:29,499 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/IrisClassification_And_Serving_SKLearn.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:29,520 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10029 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/iris_flower_classifier.py._COPYING_ | |
2020-05-07 09:37:29,527 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10029 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/iris_flower_classifier.py._COPYING_ | |
2020-05-07 09:37:29,530 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10029 State = COMMITTED size 984 byte | |
2020-05-07 09:37:29,932 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/iris_flower_classifier.py._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:29,972 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10030 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/tensorflow/model_repo_and_serving.ipynb._COPYING_ | |
2020-05-07 09:37:29,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10030 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/tensorflow/model_repo_and_serving.ipynb._COPYING_ | |
2020-05-07 09:37:29,985 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10030 State = COMMITTED size 65776 byte | |
2020-05-07 09:37:30,386 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/tensorflow/model_repo_and_serving.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:30,422 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10031 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/Keras/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,430 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10031 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/Keras/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,433 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10031 State = COMMITTED size 9282 byte | |
2020-05-07 09:37:30,833 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/Keras/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:30,859 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10032 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/PyTorch/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,866 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10032 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/PyTorch/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,869 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10032 State = COMMITTED size 10702 byte | |
2020-05-07 09:37:31,270 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/PyTorch/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:31,297 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10033 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/minimal_mnist_classifier_on_hops.ipynb._COPYING_ | |
2020-05-07 09:37:31,305 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10033 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/minimal_mnist_classifier_on_hops.ipynb._COPYING_ | |
2020-05-07 09:37:31,308 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10033 State = COMMITTED size 7804 byte | |
2020-05-07 09:37:31,709 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/minimal_mnist_classifier_on_hops.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:31,729 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10034 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/tensorboard_debugger.ipynb._COPYING_ | |
2020-05-07 09:37:31,736 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10034 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/tensorboard_debugger.ipynb._COPYING_ | |
2020-05-07 09:37:31,740 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10034 State = COMMITTED size 11421 byte | |
2020-05-07 09:37:32,140 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/tensorboard_debugger.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:32,166 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10035 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Filesystem/HopsFSOperations.ipynb._COPYING_ | |
2020-05-07 09:37:32,172 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10035 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Filesystem/HopsFSOperations.ipynb._COPYING_ | |
2020-05-07 09:37:32,176 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10035 State = COMMITTED size 7723 byte | |
2020-05-07 09:37:32,577 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Filesystem/HopsFSOperations.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:32,604 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10036 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Inference/Batch_Inference_Imagenet_Spark.ipynb._COPYING_ | |
2020-05-07 09:37:32,612 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10036 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Inference/Batch_Inference_Imagenet_Spark.ipynb._COPYING_ | |
2020-05-07 09:37:32,616 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10036 State = COMMITTED size 141805 byte | |
2020-05-07 09:37:33,017 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Inference/Batch_Inference_Imagenet_Spark.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,035 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10037 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Inference/Inference_Hello_World.ipynb._COPYING_ | |
2020-05-07 09:37:33,042 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10037 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Inference/Inference_Hello_World.ipynb._COPYING_ | |
2020-05-07 09:37:33,046 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10037 State = COMMITTED size 110552 byte | |
2020-05-07 09:37:33,446 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Inference/Inference_Hello_World.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,488 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10038 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Keras/evolutionary_search/keras_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:33,495 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10038 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Keras/evolutionary_search/keras_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:33,498 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10038 State = COMMITTED size 9470 byte | |
2020-05-07 09:37:33,899 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Keras/evolutionary_search/keras_mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,927 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10039 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-ablation-titanic-example.ipynb._COPYING_ | |
2020-05-07 09:37:33,936 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10039 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 09:37:33,940 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-ablation-titanic-example.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,959 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10040 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-fashion-mnist-example.ipynb._COPYING_ | |
2020-05-07 09:37:33,967 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10040 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-fashion-mnist-example.ipynb._COPYING_ | |
2020-05-07 09:37:33,970 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10040 State = COMMITTED size 13914 byte | |
2020-05-07 09:37:34,371 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-fashion-mnist-example.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:34,403 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10041 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/PyTorch/differential_evolution/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,410 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10041 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/PyTorch/differential_evolution/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,413 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10041 State = COMMITTED size 12094 byte | |
2020-05-07 09:37:34,814 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/PyTorch/differential_evolution/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:34,853 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10042 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/evolutionary_search/automl_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,860 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10042 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/evolutionary_search/automl_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,865 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10042 State = COMMITTED size 16324 byte | |
2020-05-07 09:37:35,267 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/evolutionary_search/automl_fashion_mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:35,311 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10043 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/grid_search/grid_search_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:35,319 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10043 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/grid_search/grid_search_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:35,324 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10043 State = COMMITTED size 16127 byte | |
2020-05-07 09:37:35,724 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/grid_search/grid_search_fashion_mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:35,751 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10044 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/What_If_Tool_Notebook.ipynb._COPYING_ | |
2020-05-07 09:37:35,758 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10044 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/What_If_Tool_Notebook.ipynb._COPYING_ | |
2020-05-07 09:37:35,761 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10044 State = COMMITTED size 33925 byte | |
2020-05-07 09:37:36,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/What_If_Tool_Notebook.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:36,182 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10045 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/custom_scalar.ipynb._COPYING_ | |
2020-05-07 09:37:36,189 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10045 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/custom_scalar.ipynb._COPYING_ | |
2020-05-07 09:37:36,191 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10045 State = COMMITTED size 7691 byte | |
2020-05-07 09:37:36,592 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/custom_scalar.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:36,610 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10046 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/folium_heat_map.ipynb._COPYING_ | |
2020-05-07 09:37:36,616 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10046 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/folium_heat_map.ipynb._COPYING_ | |
2020-05-07 09:37:36,619 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10046 State = COMMITTED size 3078 byte | |
2020-05-07 09:37:37,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/folium_heat_map.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:37,039 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10047 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/ipyleaflet.ipynb._COPYING_ | |
2020-05-07 09:37:37,045 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10047 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/ipyleaflet.ipynb._COPYING_ | |
2020-05-07 09:37:37,048 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10047 State = COMMITTED size 12299 byte | |
2020-05-07 09:37:37,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/ipyleaflet.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:37,467 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10048 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/matplotlib_sparkmagic.ipynb._COPYING_ | |
2020-05-07 09:37:37,478 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10048 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/matplotlib_sparkmagic.ipynb._COPYING_ | |
2020-05-07 09:37:37,481 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10048 State = COMMITTED size 1810866 byte | |
2020-05-07 09:37:37,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/matplotlib_sparkmagic.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:37,907 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10049 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/ablation-feature-vs-model.png._COPYING_ | |
2020-05-07 09:37:37,913 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10049 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/ablation-feature-vs-model.png._COPYING_ | |
2020-05-07 09:37:37,916 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10049 State = COMMITTED size 148848 byte | |
2020-05-07 09:37:38,317 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/ablation-feature-vs-model.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:38,336 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10050 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/custom_scalar.png._COPYING_ | |
2020-05-07 09:37:38,343 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10050 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/custom_scalar.png._COPYING_ | |
2020-05-07 09:37:38,346 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10050 State = COMMITTED size 181880 byte | |
2020-05-07 09:37:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:38,747 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/custom_scalar.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:38,766 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10051 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/experiments.gif._COPYING_
2020-05-07 09:37:38,776 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10051 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/experiments.gif._COPYING_
2020-05-07 09:37:38,779 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10051 State = COMMITTED size 1414448 byte
2020-05-07 09:37:39,179 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/experiments.gif._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:39,197 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10052 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/hops.png._COPYING_
2020-05-07 09:37:39,203 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10052 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/hops.png._COPYING_
2020-05-07 09:37:39,206 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10052 State = COMMITTED size 5252 byte
2020-05-07 09:37:39,607 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/hops.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:39,623 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10053 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/models.gif._COPYING_
2020-05-07 09:37:39,630 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10053 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/models.gif._COPYING_
2020-05-07 09:37:39,633 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10053 State = COMMITTED size 515693 byte
2020-05-07 09:37:40,034 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/models.gif._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:40,057 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10054 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/servings.gif._COPYING_
2020-05-07 09:37:40,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10054 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/servings.gif._COPYING_
2020-05-07 09:37:40,070 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10054 State = COMMITTED size 793603 byte
2020-05-07 09:37:40,469 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/servings.gif._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:40,487 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10055 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving1.png._COPYING_
2020-05-07 09:37:40,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10055 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving1.png._COPYING_
2020-05-07 09:37:40,496 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10055 State = COMMITTED size 51670 byte
2020-05-07 09:37:40,897 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:40,915 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10056 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving2.png._COPYING_
2020-05-07 09:37:40,922 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10056 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving2.png._COPYING_
2020-05-07 09:37:40,925 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10056 State = COMMITTED size 23510 byte
2020-05-07 09:37:41,326 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving2.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:41,344 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10057 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving3.png._COPYING_
2020-05-07 09:37:41,353 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10057 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving3.png._COPYING_
2020-05-07 09:37:41,357 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10057 State = COMMITTED size 69308 byte
2020-05-07 09:37:41,758 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving3.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:41,777 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10058 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/tensorboard_debug.png._COPYING_
2020-05-07 09:37:41,785 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10058 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/tensorboard_debug.png._COPYING_
2020-05-07 09:37:41,787 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10058 State = COMMITTED size 156767 byte
2020-05-07 09:37:42,188 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/tensorboard_debug.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:42,211 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10059 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/numpy/numpy-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10059 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/numpy/numpy-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,221 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10059 State = COMMITTED size 1092 byte
2020-05-07 09:37:42,622 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/numpy/numpy-hdfs.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:42,649 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10060 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/pandas/pandas-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,657 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10060 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/pandas/pandas-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,661 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10060 State = COMMITTED size 1537 byte
2020-05-07 09:37:43,061 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/pandas/pandas-hdfs.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:05,723 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10061 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/attendances.csv._COPYING_
2020-05-07 09:38:05,821 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10061 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:38:05,831 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/attendances.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:05,857 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10062 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/games.csv._COPYING_
2020-05-07 09:38:05,867 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10062 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/games.csv._COPYING_
2020-05-07 09:38:05,871 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10062 State = COMMITTED size 76451 byte
2020-05-07 09:38:06,271 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/games.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:06,293 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10063 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/players.csv._COPYING_
2020-05-07 09:38:06,300 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10063 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/players.csv._COPYING_
2020-05-07 09:38:06,303 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10063 State = COMMITTED size 212910 byte
2020-05-07 09:38:06,704 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/players.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:06,723 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10064 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/season_scores.csv._COPYING_
2020-05-07 09:38:06,731 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10064 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/season_scores.csv._COPYING_
2020-05-07 09:38:06,734 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10064 State = COMMITTED size 8378 byte
2020-05-07 09:38:07,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/season_scores.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:07,158 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10065 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/teams.csv._COPYING_
2020-05-07 09:38:07,165 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10065 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/teams.csv._COPYING_
2020-05-07 09:38:07,167 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10065 State = COMMITTED size 2307 byte
2020-05-07 09:38:07,569 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/teams.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:07,595 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10066 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/FeatureStoreQuickStart.ipynb._COPYING_
2020-05-07 09:38:07,601 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10066 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/FeatureStoreQuickStart.ipynb._COPYING_
2020-05-07 09:38:07,603 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10066 State = COMMITTED size 24136 byte
2020-05-07 09:38:08,006 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/FeatureStoreQuickStart.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,062 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10067 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,071 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10067 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,074 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10067 State = COMMITTED size 747622 byte
2020-05-07 09:38:08,475 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,496 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10068 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourScala.ipynb._COPYING_
2020-05-07 09:38:08,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10068 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourScala.ipynb._COPYING_
2020-05-07 09:38:08,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10068 State = COMMITTED size 122995 byte
2020-05-07 09:38:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:08,908 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourScala.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,934 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10069 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/aws/S3-FeatureStore.ipynb._COPYING_
2020-05-07 09:38:08,942 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10069 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:38:08,950 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/aws/S3-FeatureStore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,969 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10070 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/aws/SageMakerFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10070 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/aws/SageMakerFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,987 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10070 State = COMMITTED size 462660 byte
2020-05-07 09:38:09,387 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/aws/SageMakerFeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:09,418 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10071 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/aws/data/Sacramentorealestatetransactions.csv._COPYING_
2020-05-07 09:38:09,427 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10071 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/aws/data/Sacramentorealestatetransactions.csv._COPYING_
2020-05-07 09:38:09,430 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10071 State = COMMITTED size 113183 byte
2020-05-07 09:38:09,832 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/aws/data/Sacramentorealestatetransactions.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:09,872 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10072 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FatureStore.ipynb._COPYING_
2020-05-07 09:38:09,881 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10072 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FatureStore.ipynb._COPYING_
2020-05-07 09:38:09,885 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10072 State = COMMITTED size 6860 byte
2020-05-07 09:38:10,286 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FatureStore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:10,306 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10073 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore-Setup.ipynb._COPYING_
2020-05-07 09:38:10,314 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10073 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore-Setup.ipynb._COPYING_
2020-05-07 09:38:10,317 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10073 State = COMMITTED size 3270 byte
2020-05-07 09:38:10,718 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore-Setup.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:10,736 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10074 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore.ipynb._COPYING_
2020-05-07 09:38:10,743 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10074 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore.ipynb._COPYING_
2020-05-07 09:38:10,746 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10074 State = COMMITTED size 6861 byte
2020-05-07 09:38:11,147 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:11,166 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10075 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/DatabricksFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:11,176 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10075 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/DatabricksFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:11,178 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10075 State = COMMITTED size 582544 byte
2020-05-07 09:38:11,579 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/DatabricksFeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:11,601 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10076 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/FeatureStoreQuickStartDatabricks.ipynb._COPYING_
2020-05-07 09:38:11,617 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10076 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/FeatureStoreQuickStartDatabricks.ipynb._COPYING_
2020-05-07 09:38:11,620 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10076 State = COMMITTED size 14556 byte
2020-05-07 09:38:12,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/FeatureStoreQuickStartDatabricks.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:12,045 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10077 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/datasets/TitanicTrainingDatasetPython.ipynb._COPYING_
2020-05-07 09:38:12,052 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10077 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/datasets/TitanicTrainingDatasetPython.ipynb._COPYING_
2020-05-07 09:38:12,055 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10077 State = COMMITTED size 11407 byte
2020-05-07 09:38:12,457 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/datasets/TitanicTrainingDatasetPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:12,484 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10078 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/delta/DeltaOnHops.ipynb._COPYING_
2020-05-07 09:38:12,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10078 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/delta/DeltaOnHops.ipynb._COPYING_
2020-05-07 09:38:12,496 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10078 State = COMMITTED size 19374 byte
2020-05-07 09:38:12,897 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/delta/DeltaOnHops.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:12,924 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10079 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/hudi/HudiOnHops.ipynb._COPYING_
2020-05-07 09:38:12,930 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10079 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/hudi/HudiOnHops.ipynb._COPYING_
2020-05-07 09:38:12,934 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10079 State = COMMITTED size 68350 byte
2020-05-07 09:38:13,334 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/hudi/HudiOnHops.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:13,358 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10080 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageDatasetFeaturestore.ipynb._COPYING_
2020-05-07 09:38:13,365 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10080 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageDatasetFeaturestore.ipynb._COPYING_
2020-05-07 09:38:13,368 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10080 State = COMMITTED size 11931 byte
2020-05-07 09:38:13,768 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageDatasetFeaturestore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:13,787 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10081 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageFeatureGroup.ipynb._COPYING_
2020-05-07 09:38:13,793 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10081 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageFeatureGroup.ipynb._COPYING_
2020-05-07 09:38:13,797 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10081 State = COMMITTED size 18615 byte
2020-05-07 09:38:14,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageFeatureGroup.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:14,223 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10082 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/admin_fs_tags.png._COPYING_
2020-05-07 09:38:14,230 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10082 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/admin_fs_tags.png._COPYING_
2020-05-07 09:38:14,233 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10082 State = COMMITTED size 417928 byte
2020-05-07 09:38:14,634 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/admin_fs_tags.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:14,653 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10083 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/concepts.png._COPYING_
2020-05-07 09:38:14,660 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10083 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/concepts.png._COPYING_
2020-05-07 09:38:14,663 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10083 State = COMMITTED size 50873 byte
2020-05-07 09:38:15,064 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/concepts.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:15,085 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10084 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/create_tags.png._COPYING_
2020-05-07 09:38:15,092 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10084 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/create_tags.png._COPYING_
2020-05-07 09:38:15,096 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10084 State = COMMITTED size 49985 byte
2020-05-07 09:38:15,496 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/create_tags.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:15,519 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10085 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/delta_dataset.png._COPYING_
2020-05-07 09:38:15,527 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10085 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/delta_dataset.png._COPYING_
2020-05-07 09:38:15,530 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10085 State = COMMITTED size 523229 byte
2020-05-07 09:38:15,931 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/delta_dataset.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:15,950 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10086 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/featurestore_incremental_pull.png._COPYING_
2020-05-07 09:38:15,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10086 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/featurestore_incremental_pull.png._COPYING_
2020-05-07 09:38:15,960 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10086 State = COMMITTED size 203952 byte
2020-05-07 09:38:16,361 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/featurestore_incremental_pull.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:16,380 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10087 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/fg_stats_1.png._COPYING_ | |
2020-05-07 09:38:16,386 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10087 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/fg_stats_1.png._COPYING_ | |
2020-05-07 09:38:16,389 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10087 State = COMMITTED size 440893 byte | |
2020-05-07 09:38:16,790 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/fg_stats_1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:16,809 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10088 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/hudi_dataset.png._COPYING_ | |
2020-05-07 09:38:16,816 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10088 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/hudi_dataset.png._COPYING_ | |
2020-05-07 09:38:16,819 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10088 State = COMMITTED size 354375 byte | |
2020-05-07 09:38:17,219 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/hudi_dataset.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:17,237 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10089 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_1.png._COPYING_ | |
2020-05-07 09:38:17,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10089 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_1.png._COPYING_ | |
2020-05-07 09:38:17,249 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10089 State = COMMITTED size 95014 byte | |
2020-05-07 09:38:17,648 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:17,669 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10090 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_2.png._COPYING_ | |
2020-05-07 09:38:17,675 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10090 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_2.png._COPYING_ | |
2020-05-07 09:38:17,678 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10090 State = COMMITTED size 30481 byte | |
2020-05-07 09:38:18,079 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_2.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:18,097 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10091 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_3.png._COPYING_ | |
2020-05-07 09:38:18,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10091 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_3.png._COPYING_ | |
2020-05-07 09:38:18,109 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10091 State = COMMITTED size 56907 byte | |
2020-05-07 09:38:18,510 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_3.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:18,528 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10092 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_4.png._COPYING_ | |
2020-05-07 09:38:18,535 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10092 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_4.png._COPYING_ | |
2020-05-07 09:38:18,538 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10092 State = COMMITTED size 49404 byte | |
2020-05-07 09:38:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 09:38:18,940 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_4.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:18,958 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10093 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/incr_load.png._COPYING_ | |
2020-05-07 09:38:18,965 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10093 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/incr_load.png._COPYING_ | |
2020-05-07 09:38:18,968 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10093 State = COMMITTED size 96959 byte | |
2020-05-07 09:38:19,369 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/incr_load.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:19,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10094 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/model.png._COPYING_ | |
2020-05-07 09:38:19,393 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10094 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/model.png._COPYING_ | |
2020-05-07 09:38:19,396 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10094 State = COMMITTED size 21281 byte | |
2020-05-07 09:38:19,796 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/model.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:19,815 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10095 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/near_real_time.jpg._COPYING_ | |
2020-05-07 09:38:19,822 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10095 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/near_real_time.jpg._COPYING_ | |
2020-05-07 09:38:19,825 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10095 State = COMMITTED size 26672 byte | |
2020-05-07 09:38:20,226 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/near_real_time.jpg._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:20,246 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10096 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/overview.png._COPYING_ | |
2020-05-07 09:38:20,254 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10096 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/overview.png._COPYING_ | |
2020-05-07 09:38:20,258 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10096 State = COMMITTED size 29440 byte | |
2020-05-07 09:38:20,658 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/overview.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:20,675 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10097 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm1.png._COPYING_ | |
2020-05-07 09:38:20,681 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10097 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm1.png._COPYING_ | |
2020-05-07 09:38:20,684 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10097 State = COMMITTED size 21284 byte | |
2020-05-07 09:38:21,085 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:21,104 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10098 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm2.png._COPYING_ | |
2020-05-07 09:38:21,110 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10098 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm2.png._COPYING_ | |
2020-05-07 09:38:21,113 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10098 State = COMMITTED size 22301 byte | |
2020-05-07 09:38:21,514 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm2.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:21,534 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10099 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm3.png._COPYING_ | |
2020-05-07 09:38:21,541 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10099 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm3.png._COPYING_ | |
2020-05-07 09:38:21,544 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10099 State = COMMITTED size 70895 byte | |
2020-05-07 09:38:21,945 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm3.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:21,963 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10100 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm4.png._COPYING_ | |
2020-05-07 09:38:21,970 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10100 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm4.png._COPYING_ | |
2020-05-07 09:38:21,973 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10100 State = COMMITTED size 37376 byte | |
2020-05-07 09:38:22,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm4.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:22,394 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10101 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm5.png._COPYING_ | |
2020-05-07 09:38:22,401 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10101 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm5.png._COPYING_ | |
2020-05-07 09:38:22,404 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10101 State = COMMITTED size 46393 byte | |
2020-05-07 09:38:22,805 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm5.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:22,823 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10102 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm6.png._COPYING_ | |
2020-05-07 09:38:22,830 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10102 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm6.png._COPYING_ | |
2020-05-07 09:38:22,833 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10102 State = COMMITTED size 23761 byte | |
2020-05-07 09:38:23,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm6.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:23,253 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10103 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm7.png._COPYING_ | |
2020-05-07 09:38:23,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10103 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm7.png._COPYING_ | |
2020-05-07 09:38:23,263 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10103 State = COMMITTED size 22384 byte | |
2020-05-07 09:38:23,664 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm7.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:23,680 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10104 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/query_optimizer.png._COPYING_ | |
2020-05-07 09:38:23,687 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10104 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/query_optimizer.png._COPYING_ | |
2020-05-07 09:38:23,690 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10104 State = COMMITTED size 94773 byte | |
2020-05-07 09:38:24,091 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/query_optimizer.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:24,107 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10105 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/select_fs.png._COPYING_ | |
2020-05-07 09:38:24,113 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10105 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/select_fs.png._COPYING_ | |
2020-05-07 09:38:24,116 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10105 State = COMMITTED size 11700 byte | |
2020-05-07 09:38:24,517 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/select_fs.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:24,533 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10106 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/share_featurestore.png._COPYING_ | |
2020-05-07 09:38:24,540 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10106 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/share_featurestore.png._COPYING_ | |
2020-05-07 09:38:24,542 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10106 State = COMMITTED size 72783 byte | |
2020-05-07 09:38:24,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/share_featurestore.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:24,964 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10107 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/to_admin.png._COPYING_ | |
2020-05-07 09:38:24,971 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10107 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/to_admin.png._COPYING_ | |
2020-05-07 09:38:24,974 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10107 State = COMMITTED size 48799 byte | |
2020-05-07 09:38:25,375 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/to_admin.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:25,393 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10108 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/upsert_illustration.png._COPYING_ | |
2020-05-07 09:38:25,400 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10108 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/upsert_illustration.png._COPYING_ | |
2020-05-07 09:38:25,403 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10108 State = COMMITTED size 425340 byte | |
2020-05-07 09:38:25,804 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/upsert_illustration.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:25,833 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10109 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourPython.ipynb._COPYING_ | |
2020-05-07 09:38:25,841 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10109 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourPython.ipynb._COPYING_ | |
2020-05-07 09:38:25,844 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10109 State = COMMITTED size 30666 byte | |
2020-05-07 09:38:26,245 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:26,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10110 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourScala.ipynb._COPYING_ | |
2020-05-07 09:38:26,269 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10110 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourScala.ipynb._COPYING_ | |
2020-05-07 09:38:26,272 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10110 State = COMMITTED size 26146 byte | |
2020-05-07 09:38:26,672 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourScala.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:26,696 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10111 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormHelloWorld.ipynb._COPYING_ | |
2020-05-07 09:38:26,702 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10111 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormHelloWorld.ipynb._COPYING_ | |
2020-05-07 09:38:26,705 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10111 State = COMMITTED size 36272 byte | |
2020-05-07 09:38:27,106 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormHelloWorld.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:27,124 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10112 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_CreateDataset.ipynb._COPYING_ | |
2020-05-07 09:38:27,130 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10112 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_CreateDataset.ipynb._COPYING_ | |
2020-05-07 09:38:27,133 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10112 State = COMMITTED size 43037 byte | |
2020-05-07 09:38:27,534 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_CreateDataset.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:27,551 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10113 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_PyTorch.ipynb._COPYING_ | |
2020-05-07 09:38:27,558 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10113 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_PyTorch.ipynb._COPYING_ | |
2020-05-07 09:38:27,560 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10113 State = COMMITTED size 79254 byte | |
2020-05-07 09:38:27,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_PyTorch.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:27,985 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10114 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_Tensorflow.ipynb._COPYING_ | |
2020-05-07 09:38:27,991 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10114 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_Tensorflow.ipynb._COPYING_ | |
2020-05-07 09:38:27,994 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10114 State = COMMITTED size 46561 byte
2020-05-07 09:38:28,394 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_Tensorflow.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:28,420 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10115 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/query_planner/FeaturestoreQueryPlanner.ipynb._COPYING_
2020-05-07 09:38:28,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10115 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/query_planner/FeaturestoreQueryPlanner.ipynb._COPYING_
2020-05-07 09:38:28,429 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10115 State = COMMITTED size 17540 byte
2020-05-07 09:38:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:28,829 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/query_planner/FeaturestoreQueryPlanner.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:28,852 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10116 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/visualizations/Feature_Visualizations.ipynb._COPYING_
2020-05-07 09:38:28,859 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10116 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/visualizations/Feature_Visualizations.ipynb._COPYING_
2020-05-07 09:38:28,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10116 State = COMMITTED size 669353 byte
2020-05-07 09:38:29,263 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/visualizations/Feature_Visualizations.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:38,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:58,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:48,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:08,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:38,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:58,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:48,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:28,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:37,922 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:46:37,923 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:46:37,923 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10117 State = UNDER_CONSTRUCTION for /user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar._COPYING_
2020-05-07 09:46:38,149 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10117 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:46:38,163 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_678724206_1
2020-05-07 09:46:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:39,833 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:43,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:48,521 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-examples-spark-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-453314954_1
2020-05-07 09:46:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:50,272 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-spark-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:53,618 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-spark-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:58,680 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:08,132 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:08,132 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:08,132 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10118 State = UNDER_CONSTRUCTION for /user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar._COPYING_
2020-05-07 09:47:08,224 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10118 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:47:08,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1193308009_1
2020-05-07 09:47:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:09,871 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:13,204 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:28,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:28,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:28,896 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10119 State = UNDER_CONSTRUCTION for /user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar._COPYING_
2020-05-07 09:47:29,029 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10119 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:47:29,042 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_572138777_1
2020-05-07 09:47:30,843 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:34,171 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:39,538 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/featurestore_util.py._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1819718216_1
2020-05-07 09:47:41,236 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/featurestore_util.py"
2020-05-07 09:47:44,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/featurestore_util.py"
2020-05-07 09:47:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:49,569 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/metrics.properties._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1271699741_1
2020-05-07 09:47:51,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/metrics.properties"
2020-05-07 09:47:54,595 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/metrics.properties"
2020-05-07 09:47:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:59,578 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10120 State = UNDER_CONSTRUCTION for /user/hdfs/metrics.properties._COPYING_
2020-05-07 09:47:59,660 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10120 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:47:59,673 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/metrics.properties._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_664392635_1
2020-05-07 09:48:01,318 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/metrics.properties"
2020-05-07 09:48:04,745 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/metrics.properties"
2020-05-07 09:48:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:09,785 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/log4j.properties._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1196503116_1
2020-05-07 09:48:11,490 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/log4j.properties"
2020-05-07 09:48:14,852 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/log4j.properties"
2020-05-07 09:48:18,628 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:18,628 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:18,628 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10121 State = UNDER_CONSTRUCTION for /user/spark/cacerts.jks._COPYING_
2020-05-07 09:48:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:18,716 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10121 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:48:18,730 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/cacerts.jks._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_848252496_1
2020-05-07 09:48:20,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.jks"
2020-05-07 09:48:23,846 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.jks"
2020-05-07 09:48:27,282 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:27,282 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:27,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10122 State = UNDER_CONSTRUCTION for /user/spark/cacerts.pem._COPYING_
2020-05-07 09:48:27,391 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10122 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:48:27,400 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/cacerts.pem._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1491068508_1
2020-05-07 09:48:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:29,076 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.pem"
2020-05-07 09:48:32,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.pem"
2020-05-07 09:48:37,512 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hive-site.xml._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1387422785_1
2020-05-07 09:48:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:39,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hive-site.xml"
2020-05-07 09:48:42,608 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hive-site.xml"
2020-05-07 09:48:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:48,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:08,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:28,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:48,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:38,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:00,049 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 21600000 minutes, Emptier interval = 3600000 minutes.
2020-05-07 10:00:00,049 INFO org.apache.hadoop.fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ip-10-0-4-12.us-west-2.compute.internal/user/hdfs/.Trash
2020-05-07 10:00:00,071 INFO org.apache.hadoop.fs.TrashPolicyDefault: Created trash checkpoint: /user/hdfs/.Trash/200507100000
2020-05-07 10:00:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:58,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:08,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:58,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:05:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:28,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:10:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:10:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:10:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:10:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:10:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:10:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:11:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:11:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:11:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:11:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:11:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:11:58,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:12:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:12:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:12:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:12:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:12:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:12:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:13:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:13:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:13:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:13:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:13:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:13:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:14:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:14:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:14:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:14:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:14:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:14:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:15:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:15:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:15:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:15:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:15:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:15:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:16:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:16:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:16:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:16:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:16:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:16:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:17:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:17:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:17:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:17:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:17:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:17:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:18:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:18:18,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:18:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:18:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:18:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:18:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:19:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:19:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:19:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:19:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:19:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:19:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:20:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:20:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:20:28,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:20:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:20:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:20:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:21:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:21:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:21:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:21:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:21:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:21:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:22:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:22:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:22:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:22:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:22:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:22:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:23:08,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:23:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:23:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:23:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:23:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:23:58,685 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:24:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:24:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:24:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:24:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:24:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:25:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:25:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:25:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:25:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:25:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:25:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:26:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:26:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:26:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:26:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:26:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:26:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:27:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:27:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:27:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:27:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:27:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:27:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:28:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:28:14,977 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 123 blocks is assigned to NN [ID: 2, IP: 10.0.4.12]
2020-05-07 10:28:14,990 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 2000, hasStaleStorages: false, processing time: 1 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,1000,2000,0,0,0,0,0,0)
2020-05-07 10:28:14,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed
2020-05-07 10:28:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:48,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:48,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:08,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:32,008 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/flax is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:33,075 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Logs/README.md"
2020-05-07 10:33:33,094 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10123 State = UNDER_CONSTRUCTION for /Projects/flax/Logs/README.md
2020-05-07 10:33:33,214 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10123 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/flax/Logs/README.md
2020-05-07 10:33:33,217 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10123 State = COMMITTED size 227 byte
2020-05-07 10:33:33,619 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:33,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Logs/README.md"
2020-05-07 10:33:33,741 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Resources/README.md"
2020-05-07 10:33:33,746 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:33,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Resources/README.md"
2020-05-07 10:33:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:39,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Experiments/README.md"
2020-05-07 10:33:39,183 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:39,187 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Experiments/README.md"
2020-05-07 10:33:42,735 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Jupyter/README.md"
2020-05-07 10:33:42,739 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Jupyter/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:42,743 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Jupyter/README.md"
2020-05-07 10:33:45,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Models/README.md"
2020-05-07 10:33:45,986 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Models/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:45,990 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Models/README.md"
2020-05-07 10:33:46,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/flax_Training_Datasets/README.md"
2020-05-07 10:33:46,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/flax_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:46,463 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/flax_Training_Datasets/README.md"
2020-05-07 10:33:47,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/DataValidation/README.md"
2020-05-07 10:33:47,035 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:47,040 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/DataValidation/README.md"
2020-05-07 10:33:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:18,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:53,853 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/demo_featurestore_harry001 is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:54,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Logs/README.md"
2020-05-07 10:34:54,174 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10124 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Logs/README.md
2020-05-07 10:34:54,183 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10124 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Logs/README.md
2020-05-07 10:34:54,185 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10124 State = COMMITTED size 227 byte
2020-05-07 10:34:54,587 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:54,589 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Logs/README.md"
2020-05-07 10:34:54,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Resources/README.md"
2020-05-07 10:34:54,696 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:54,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Resources/README.md"
2020-05-07 10:34:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:59,877 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Experiments/README.md"
2020-05-07 10:34:59,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:59,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Experiments/README.md"
2020-05-07 10:35:03,051 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md"
2020-05-07 10:35:03,055 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:35:03,059 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md"
2020-05-07 10:35:04,129 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/README.md"
2020-05-07 10:35:04,134 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:35:04,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/README.md"
2020-05-07 10:35:05,163 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/DataValidation/README.md"
2020-05-07 10:35:05,167 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:35:05,171 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/DataValidation/README.md"
2020-05-07 10:35:05,548 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:05,548 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:05,548 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10125 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar | |
2020-05-07 10:35:05,616 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10125 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar | |
2020-05-07 10:35:05,619 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10125 State = COMMITTED size 6522467 byte | |
2020-05-07 10:35:06,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,029 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar" | |
2020-05-07 10:35:06,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar" | |
2020-05-07 10:35:06,057 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/attendances.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,072 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:06,072 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:06,072 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10126 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/TestJob/data/games.csv | |
2020-05-07 10:35:06,080 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10126 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/TestJob/data/games.csv | |
2020-05-07 10:35:06,084 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10126 State = COMMITTED size 76451 byte | |
2020-05-07 10:35:06,485 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/games.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,498 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:06,498 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:06,498 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10127 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/TestJob/data/players.csv | |
2020-05-07 10:35:06,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10127 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/TestJob/data/players.csv | |
2020-05-07 10:35:06,510 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10127 State = COMMITTED size 212910 byte | |
2020-05-07 10:35:06,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/players.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,922 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/season_scores.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,935 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/teams.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,955 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar" | |
2020-05-07 10:35:06,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar" | |
2020-05-07 10:35:06,968 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/attendances.csv" | |
2020-05-07 10:35:06,971 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/attendances.csv" | |
2020-05-07 10:35:06,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/games.csv" | |
2020-05-07 10:35:06,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/games.csv" | |
2020-05-07 10:35:06,980 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/players.csv" | |
2020-05-07 10:35:06,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/players.csv" | |
2020-05-07 10:35:06,986 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/season_scores.csv" | |
2020-05-07 10:35:06,989 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/season_scores.csv" | |
2020-05-07 10:35:06,992 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/teams.csv" | |
2020-05-07 10:35:06,995 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/teams.csv" | |
2020-05-07 10:35:07,016 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/FeatureStoreQuickStart.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:07,036 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:07,036 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:07,036 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10128 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb | |
2020-05-07 10:35:07,045 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10128 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb | |
2020-05-07 10:35:07,047 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10128 State = COMMITTED size 747622 byte | |
2020-05-07 10:35:07,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:07,464 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:07,464 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:07,464 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10129 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb | |
2020-05-07 10:35:07,471 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10129 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb | |
2020-05-07 10:35:07,473 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10129 State = COMMITTED size 122995 byte | |
2020-05-07 10:35:07,875 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:07,896 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/aws/S3-FeatureStore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:07,907 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:07,907 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:07,908 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10130 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb | |
2020-05-07 10:35:07,914 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10130 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb | |
2020-05-07 10:35:07,919 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10130 State = COMMITTED size 462660 byte | |
2020-05-07 10:35:08,318 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:08,336 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:08,336 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:08,336 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10131 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv | |
2020-05-07 10:35:08,342 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10131 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv | |
2020-05-07 10:35:08,345 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10131 State = COMMITTED size 113183 byte | |
2020-05-07 10:35:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:35:08,746 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:08,766 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FatureStore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:08,777 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore-Setup.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:08,787 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:08,798 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:08,798 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:08,798 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10132 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb | |
2020-05-07 10:35:08,806 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10132 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb | |
2020-05-07 10:35:08,809 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10132 State = COMMITTED size 582544 byte | |
2020-05-07 10:35:09,210 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,221 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/FeatureStoreQuickStartDatabricks.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,243 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/datasets/TitanicTrainingDatasetPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,261 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/delta/DeltaOnHops.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,279 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:09,279 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:09,279 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10133 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb | |
2020-05-07 10:35:09,286 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10133 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb | |
2020-05-07 10:35:09,288 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10133 State = COMMITTED size 68350 byte | |
2020-05-07 10:35:09,690 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,710 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageDatasetFeaturestore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,720 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageFeatureGroup.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:09,743 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:09,743 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:09,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10134 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png | |
2020-05-07 10:35:09,750 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10134 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png | |
2020-05-07 10:35:09,752 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10134 State = COMMITTED size 417928 byte | |
2020-05-07 10:35:10,154 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:10,169 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/concepts.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:10,183 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/create_tags.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:10,201 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:10,201 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:10,201 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10135 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png | |
2020-05-07 10:35:10,215 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10135 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png | |
2020-05-07 10:35:10,220 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10135 State = COMMITTED size 523229 byte | |
2020-05-07 10:35:10,619 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:10,632 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:10,633 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:10,633 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10136 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png
2020-05-07 10:35:10,642 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10136 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png
2020-05-07 10:35:10,646 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10136 State = COMMITTED size 203952 byte
2020-05-07 10:35:11,046 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:11,058 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,058 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,059 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10137 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png
2020-05-07 10:35:11,066 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10137 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png
2020-05-07 10:35:11,069 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10137 State = COMMITTED size 440893 byte
2020-05-07 10:35:11,469 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:11,482 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,482 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,482 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10138 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png
2020-05-07 10:35:11,489 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10138 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png
2020-05-07 10:35:11,491 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10138 State = COMMITTED size 354375 byte
2020-05-07 10:35:11,893 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:11,904 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,904 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,904 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10139 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png
2020-05-07 10:35:11,911 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10139 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png
2020-05-07 10:35:11,914 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10139 State = COMMITTED size 95014 byte
2020-05-07 10:35:12,314 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,325 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_2.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,336 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_3.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,346 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_4.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,356 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:12,356 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:12,356 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10140 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png
2020-05-07 10:35:12,363 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10140 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png
2020-05-07 10:35:12,367 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10140 State = COMMITTED size 96959 byte
2020-05-07 10:35:12,768 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,781 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/model.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,796 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/near_real_time.jpg is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,808 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/overview.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,821 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm1.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,835 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm2.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:12,848 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:12,848 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:12,849 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10141 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png
2020-05-07 10:35:12,856 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10141 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png
2020-05-07 10:35:12,860 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10141 State = COMMITTED size 70895 byte
2020-05-07 10:35:13,260 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,272 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm4.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,283 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm5.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,293 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm6.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,303 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm7.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,314 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:13,314 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:13,315 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10142 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png
2020-05-07 10:35:13,322 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10142 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png
2020-05-07 10:35:13,324 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10142 State = COMMITTED size 94773 byte
2020-05-07 10:35:13,726 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,738 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/select_fs.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:13,749 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:13,749 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:13,750 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10143 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png
2020-05-07 10:35:13,759 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10143 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png
2020-05-07 10:35:13,762 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10143 State = COMMITTED size 72783 byte
2020-05-07 10:35:14,163 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,175 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/to_admin.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,186 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:14,186 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:14,186 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10144 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png
2020-05-07 10:35:14,193 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10144 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png
2020-05-07 10:35:14,196 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10144 State = COMMITTED size 425340 byte
2020-05-07 10:35:14,597 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,617 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,627 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourScala.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,646 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormHelloWorld.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,656 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_CreateDataset.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:14,667 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:14,667 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:14,667 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10145 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb
2020-05-07 10:35:14,673 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10145 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb
2020-05-07 10:35:14,676 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10145 State = COMMITTED size 79254 byte
2020-05-07 10:35:15,078 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:15,092 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_Tensorflow.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:15,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/query_planner/FeaturestoreQueryPlanner.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:15,148 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:15,148 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:15,148 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10146 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb
2020-05-07 10:35:15,161 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10146 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb
2020-05-07 10:35:15,165 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10146 State = COMMITTED size 669353 byte
2020-05-07 10:35:15,565 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:15,585 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md"
2020-05-07 10:35:15,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md"
2020-05-07 10:35:15,591 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeatureStoreQuickStart.ipynb"
2020-05-07 10:35:15,594 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeatureStoreQuickStart.ipynb"
2020-05-07 10:35:15,597 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb"
2020-05-07 10:35:15,601 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb"
2020-05-07 10:35:15,604 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb"
2020-05-07 10:35:15,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb"
2020-05-07 10:35:15,695 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb"
2020-05-07 10:35:15,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb"
2020-05-07 10:35:15,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/query_planner/FeaturestoreQueryPlanner.ipynb"
2020-05-07 10:35:15,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/query_planner/FeaturestoreQueryPlanner.ipynb"
2020-05-07 10:35:15,725 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormHelloWorld.ipynb"
2020-05-07 10:35:15,729 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormHelloWorld.ipynb"
2020-05-07 10:35:15,733 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_CreateDataset.ipynb"
2020-05-07 10:35:15,737 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_CreateDataset.ipynb"
2020-05-07 10:35:15,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb"
2020-05-07 10:35:15,743 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb"
2020-05-07 10:35:15,747 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_Tensorflow.ipynb"
2020-05-07 10:35:15,750 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_Tensorflow.ipynb"
2020-05-07 10:35:15,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourPython.ipynb"
2020-05-07 10:35:15,758 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourPython.ipynb"
2020-05-07 10:35:15,762 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourScala.ipynb"
2020-05-07 10:35:15,765 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourScala.ipynb"
2020-05-07 10:35:15,770 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png"
2020-05-07 10:35:15,774 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png"
2020-05-07 10:35:15,777 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/concepts.png"
2020-05-07 10:35:15,780 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/concepts.png"
2020-05-07 10:35:15,783 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/create_tags.png"
2020-05-07 10:35:15,786 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/create_tags.png"
2020-05-07 10:35:15,788 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png"
2020-05-07 10:35:15,792 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png"
2020-05-07 10:35:15,795 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png"
2020-05-07 10:35:15,798 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png"
2020-05-07 10:35:15,801 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png"
2020-05-07 10:35:15,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png"
2020-05-07 10:35:15,807 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png"
2020-05-07 10:35:15,811 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png"
2020-05-07 10:35:15,814 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png"
2020-05-07 10:35:15,817 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png"
2020-05-07 10:35:15,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_2.png"
2020-05-07 10:35:15,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_2.png"
2020-05-07 10:35:15,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_3.png"
2020-05-07 10:35:15,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_3.png"
2020-05-07 10:35:15,835 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_4.png"
2020-05-07 10:35:15,839 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_4.png"
2020-05-07 10:35:15,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png"
2020-05-07 10:35:15,846 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png"
2020-05-07 10:35:15,849 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/model.png"
2020-05-07 10:35:15,852 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/model.png"
2020-05-07 10:35:15,855 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/near_real_time.jpg"
2020-05-07 10:35:15,858 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/near_real_time.jpg"
2020-05-07 10:35:15,861 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/overview.png" | |
2020-05-07 10:35:15,864 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/overview.png" | |
2020-05-07 10:35:15,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm1.png" | |
2020-05-07 10:35:15,870 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm1.png" | |
2020-05-07 10:35:15,873 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm2.png" | |
2020-05-07 10:35:15,876 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm2.png" | |
2020-05-07 10:35:15,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png" | |
2020-05-07 10:35:15,883 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png" | |
2020-05-07 10:35:15,886 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm4.png" | |
2020-05-07 10:35:15,888 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm4.png" | |
2020-05-07 10:35:15,891 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm5.png" | |
2020-05-07 10:35:15,894 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm5.png" | |
2020-05-07 10:35:15,896 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm6.png" | |
2020-05-07 10:35:15,899 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm6.png" | |
2020-05-07 10:35:15,902 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm7.png" | |
2020-05-07 10:35:15,905 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm7.png" | |
2020-05-07 10:35:15,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png" | |
2020-05-07 10:35:15,910 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png" | |
2020-05-07 10:35:15,913 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/select_fs.png" | |
2020-05-07 10:35:15,916 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/select_fs.png" | |
2020-05-07 10:35:15,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png" | |
2020-05-07 10:35:15,923 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png" | |
2020-05-07 10:35:15,926 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/to_admin.png" | |
2020-05-07 10:35:15,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/to_admin.png" | |
2020-05-07 10:35:15,933 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png" | |
2020-05-07 10:35:15,936 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png" | |
2020-05-07 10:35:15,941 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageDatasetFeaturestore.ipynb" | |
2020-05-07 10:35:15,944 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageDatasetFeaturestore.ipynb" | |
2020-05-07 10:35:15,948 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageFeatureGroup.ipynb" | |
2020-05-07 10:35:15,951 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageFeatureGroup.ipynb" | |
2020-05-07 10:35:15,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb" | |
2020-05-07 10:35:15,959 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb" | |
2020-05-07 10:35:15,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/delta/DeltaOnHops.ipynb" | |
2020-05-07 10:35:15,966 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/delta/DeltaOnHops.ipynb" | |
2020-05-07 10:35:15,970 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/datasets/TitanicTrainingDatasetPython.ipynb" | |
2020-05-07 10:35:15,973 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/datasets/TitanicTrainingDatasetPython.ipynb" | |
2020-05-07 10:35:15,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FatureStore.ipynb" | |
2020-05-07 10:35:15,980 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FatureStore.ipynb" | |
2020-05-07 10:35:15,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore-Setup.ipynb" | |
2020-05-07 10:35:15,986 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore-Setup.ipynb" | |
2020-05-07 10:35:15,988 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore.ipynb" | |
2020-05-07 10:35:15,991 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore.ipynb" | |
2020-05-07 10:35:15,994 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:15,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:16,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/FeatureStoreQuickStartDatabricks.ipynb" | |
2020-05-07 10:35:16,003 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/FeatureStoreQuickStartDatabricks.ipynb" | |
2020-05-07 10:35:16,007 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/S3-FeatureStore.ipynb" | |
2020-05-07 10:35:16,010 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/S3-FeatureStore.ipynb" | |
2020-05-07 10:35:16,014 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:16,017 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:16,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv" | |
2020-05-07 10:35:16,034 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv" | |
2020-05-07 10:35:16,104 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/README.md" | |
2020-05-07 10:35:16,108 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-459498988_28 | |
2020-05-07 10:35:17,553 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10147 State = UNDER_CONSTRUCTION for /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks | |
2020-05-07 10:35:17,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10147 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks | |
2020-05-07 10:35:17,564 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10147 State = COMMITTED size 1494 byte | |
2020-05-07 10:35:17,965 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks is closed by HopsFS_DFSClient_NONMAPREDUCE_-756788838_56 | |
2020-05-07 10:35:17,967 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks" | |
2020-05-07 10:35:17,970 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks" | |
2020-05-07 10:35:17,986 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10148 State = UNDER_CONSTRUCTION for /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks | |
2020-05-07 10:35:17,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10148 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks | |
2020-05-07 10:35:17,995 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10148 State = COMMITTED size 3318 byte | |
2020-05-07 10:35:18,395 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks is closed by HopsFS_DFSClient_NONMAPREDUCE_-756788838_56 | |
2020-05-07 10:35:18,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks" | |
2020-05-07 10:35:18,400 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks" | |
2020-05-07 10:35:18,416 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10149 State = UNDER_CONSTRUCTION for /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key | |
2020-05-07 10:35:18,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10149 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key | |
2020-05-07 10:35:18,425 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10149 State = COMMITTED size 64 byte | |
2020-05-07 10:35:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:35:18,827 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key is closed by HopsFS_DFSClient_NONMAPREDUCE_-756788838_56 | |
2020-05-07 10:35:18,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key" | |
2020-05-07 10:35:18,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key" | |
2020-05-07 10:35:27,005 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress" | |
2020-05-07 10:35:27,059 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:27,059 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:27,059 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10150 State = UNDER_CONSTRUCTION for /user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress | |
2020-05-07 10:35:27,268 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress for HopsFS_DFSClient_NONMAPREDUCE_861260045_14 | |
2020-05-07 10:35:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:35:38,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:35:48,721 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:35:58,685 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:35:58,768 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/games_features_1/_temporary/0/_temporary/attempt_20200507103556_0066_m_000000_267/part-00000-75891584-a213-4cb9-b9bc-13eec19f6a9f-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:35:59,617 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/games_features_1/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-397473362_14 | |
2020-05-07 10:36:08,688 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:36:14,356 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/hoodie.properties is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14 | |
2020-05-07 10:36:14,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.inflight is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14 | |
2020-05-07 10:36:17,141 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:17,141 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:17,141 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10151 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_2 | |
2020-05-07 10:36:17,175 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:17,175 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:17,175 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10152 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 | |
2020-05-07 10:36:17,270 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_2 for HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:17,291 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10151 State = UNDER_CONSTRUCTION size 93 byte | |
2020-05-07 10:36:17,298 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_2 is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:17,437 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:17,488 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 for HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:17,505 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10152 State = UNDER_CONSTRUCTION size 93 byte | |
2020-05-07 10:36:17,519 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:17,526 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.unprotectedRenameTo: failed to rename /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 to /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata because destination exists | |
2020-05-07 10:36:17,541 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:17,544 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10152 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 | |
2020-05-07 10:36:17,544 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10152 State = COMPLETE 10.0.4.12:50010 | |
2020-05-07 10:36:17,565 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:18,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:18,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:18,244 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10153 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.parquet | |
2020-05-07 10:36:18,271 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:18,272 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,272 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10154 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.parquet
2020-05-07 10:36:18,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10153 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.parquet
2020-05-07 10:36:18,278 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,278 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,279 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10155 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.parquet
2020-05-07 10:36:18,282 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10153 State = COMMITTED size 434842 byte
2020-05-07 10:36:18,376 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10154 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.parquet
2020-05-07 10:36:18,380 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10154 State = COMMITTED size 434849 byte
2020-05-07 10:36:18,405 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10155 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.parquet
2020-05-07 10:36:18,410 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10155 State = COMMITTED size 434819 byte
2020-05-07 10:36:18,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:36:18,681 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:18,780 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:18,788 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:18,817 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:18,880 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,880 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,881 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10156 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.parquet
2020-05-07 10:36:18,965 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10156 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.parquet
2020-05-07 10:36:18,998 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10156 State = COMMITTED size 434862 byte
2020-05-07 10:36:19,016 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:19,142 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:19,191 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,191 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,192 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10157 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.parquet
2020-05-07 10:36:19,212 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10157 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.parquet
2020-05-07 10:36:19,215 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10157 State = COMMITTED size 434904 byte
2020-05-07 10:36:19,229 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,230 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,230 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10158 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.parquet
2020-05-07 10:36:19,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10158 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.parquet
2020-05-07 10:36:19,245 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10158 State = COMMITTED size 434895 byte
2020-05-07 10:36:19,378 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:19,431 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10152_1001]
2020-05-07 10:36:19,451 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:19,491 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,491 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,491 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10159 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.parquet
2020-05-07 10:36:19,514 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10159 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.parquet
2020-05-07 10:36:19,521 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10159 State = COMMITTED size 434925 byte
2020-05-07 10:36:19,616 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:19,647 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:19,701 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:19,783 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:19,853 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,854 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,854 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,854 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10160 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.parquet
2020-05-07 10:36:19,854 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:19,854 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10161 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.parquet
2020-05-07 10:36:19,872 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10160 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.parquet
2020-05-07 10:36:19,874 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10161 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.parquet
2020-05-07 10:36:19,878 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10160 State = COMMITTED size 434931 byte
2020-05-07 10:36:19,893 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10161 State = COMMITTED size 434912 byte
2020-05-07 10:36:19,920 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:19,986 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:20,026 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,026 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,026 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10162 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.parquet
2020-05-07 10:36:20,042 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10162 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.parquet
2020-05-07 10:36:20,047 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10162 State = COMMITTED size 434919 byte
2020-05-07 10:36:20,279 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:20,280 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:20,445 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/ed45a508-9e2e-47c6-a6f7-c7fe77bb9eb4-0_10-134-546_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:20,453 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:20,458 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:20,511 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,511 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,511 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10163 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed45a508-9e2e-47c6-a6f7-c7fe77bb9eb4-0_10-134-546_20200507103614.parquet
2020-05-07 10:36:20,532 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,532 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,532 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10164 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.parquet
2020-05-07 10:36:20,558 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10163 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:20,567 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed45a508-9e2e-47c6-a6f7-c7fe77bb9eb4-0_10-134-546_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:20,585 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10164 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.parquet
2020-05-07 10:36:20,616 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10164 State = COMMITTED size 435048 byte
2020-05-07 10:36:20,623 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:20,679 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,679 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,679 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10165 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.parquet
2020-05-07 10:36:20,680 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:20,712 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10165 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.parquet
2020-05-07 10:36:20,722 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,722 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:20,722 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10166 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.parquet
2020-05-07 10:36:20,723 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10165 State = COMMITTED size 435026 byte
2020-05-07 10:36:20,744 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10166 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.parquet
2020-05-07 10:36:20,750 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10166 State = COMMITTED size 435037 byte
2020-05-07 10:36:20,997 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:21,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:21,065 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,065 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,066 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10167 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.parquet
2020-05-07 10:36:21,074 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10167 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.parquet
2020-05-07 10:36:21,077 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10167 State = COMMITTED size 435028 byte
2020-05-07 10:36:21,118 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:21,149 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:21,204 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,204 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,205 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10168 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.parquet
2020-05-07 10:36:21,219 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10168 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.parquet
2020-05-07 10:36:21,221 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/c8bf45ae-0ef2-4ddf-ad77-f965a92a0733-0_16-134-552_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,223 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10168 State = COMMITTED size 435046 byte
2020-05-07 10:36:21,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,244 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10169 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/c8bf45ae-0ef2-4ddf-ad77-f965a92a0733-0_16-134-552_20200507103614.parquet | |
2020-05-07 10:36:21,253 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10169 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:21,266 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/c8bf45ae-0ef2-4ddf-ad77-f965a92a0733-0_16-134-552_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:21,329 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/6d065511-629e-4db2-a864-42198e1e2e35-0_17-134-553_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:21,354 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,354 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,354 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10170 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/6d065511-629e-4db2-a864-42198e1e2e35-0_17-134-553_20200507103614.parquet | |
2020-05-07 10:36:21,362 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10170 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:21,366 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/6d065511-629e-4db2-a864-42198e1e2e35-0_17-134-553_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:21,420 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:21,439 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,440 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,440 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10171 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.parquet | |
2020-05-07 10:36:21,448 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10171 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.parquet | |
2020-05-07 10:36:21,451 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10171 State = COMMITTED size 435106 byte | |
2020-05-07 10:36:21,478 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:21,535 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/7d723ee8-1ca5-4e91-86d8-a819dddf791e-0_19-134-555_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:21,559 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,559 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,559 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10172 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7d723ee8-1ca5-4e91-86d8-a819dddf791e-0_19-134-555_20200507103614.parquet | |
2020-05-07 10:36:21,568 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10172 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:21,572 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7d723ee8-1ca5-4e91-86d8-a819dddf791e-0_19-134-555_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:21,624 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:21,625 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:21,662 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,662 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,662 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10173 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.parquet | |
2020-05-07 10:36:21,680 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10173 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.parquet | |
2020-05-07 10:36:21,685 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10173 State = COMMITTED size 435089 byte | |
2020-05-07 10:36:21,698 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:21,732 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,732 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,733 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10174 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.parquet | |
2020-05-07 10:36:21,745 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10174 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.parquet | |
2020-05-07 10:36:21,749 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10174 State = COMMITTED size 435162 byte | |
2020-05-07 10:36:21,852 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:21,920 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:21,952 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,952 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,953 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10175 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.parquet | |
2020-05-07 10:36:21,965 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10175 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.parquet | |
2020-05-07 10:36:21,974 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10175 State = COMMITTED size 435114 byte | |
2020-05-07 10:36:22,086 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:22,148 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:22,150 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:22,174 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,174 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,174 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10176 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.parquet | |
2020-05-07 10:36:22,183 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10176 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.parquet | |
2020-05-07 10:36:22,188 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10176 State = COMMITTED size 435189 byte | |
2020-05-07 10:36:22,223 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:22,250 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,250 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,250 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10177 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.parquet | |
2020-05-07 10:36:22,258 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10177 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.parquet | |
2020-05-07 10:36:22,262 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10177 State = COMMITTED size 435190 byte | |
2020-05-07 10:36:22,373 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:22,453 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:22,485 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,485 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,485 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10178 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.parquet | |
2020-05-07 10:36:22,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10178 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.parquet | |
2020-05-07 10:36:22,505 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10178 State = COMMITTED size 435117 byte | |
2020-05-07 10:36:22,587 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:22,646 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/44665e12-befe-4dd1-ab53-af4e465bfadf-0_26-134-562_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:22,672 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:22,677 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,677 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,677 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10179 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44665e12-befe-4dd1-ab53-af4e465bfadf-0_26-134-562_20200507103614.parquet | |
2020-05-07 10:36:22,696 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10179 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:22,724 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44665e12-befe-4dd1-ab53-af4e465bfadf-0_26-134-562_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:22,836 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:22,869 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:22,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,896 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10180 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.parquet | |
2020-05-07 10:36:22,928 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:22,931 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:22,931 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,931 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10181 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.parquet
2020-05-07 10:36:22,933 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10180 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.parquet
2020-05-07 10:36:22,955 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10181 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.parquet
2020-05-07 10:36:22,965 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10180 State = COMMITTED size 435202 byte
2020-05-07 10:36:23,040 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10181 State = COMMITTED size 435299 byte
2020-05-07 10:36:23,048 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:23,067 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,067 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,067 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10182 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.parquet
2020-05-07 10:36:23,075 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10182 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.parquet
2020-05-07 10:36:23,078 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10182 State = COMMITTED size 435239 byte
2020-05-07 10:36:23,340 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:23,373 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:23,413 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/73df5707-0fd2-4567-80d7-7d8497864118-0_30-134-566_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:23,449 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,449 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,449 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10183 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/73df5707-0fd2-4567-80d7-7d8497864118-0_30-134-566_20200507103614.parquet
2020-05-07 10:36:23,476 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/30f5d719-194c-43d2-ad24-e64e1d281147-0_31-134-567_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:23,479 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10183 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/73df5707-0fd2-4567-80d7-7d8497864118-0_30-134-566_20200507103614.parquet
2020-05-07 10:36:23,482 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:23,484 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10183 State = COMMITTED size 435324 byte
2020-05-07 10:36:23,518 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,518 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,518 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10184 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/30f5d719-194c-43d2-ad24-e64e1d281147-0_31-134-567_20200507103614.parquet
2020-05-07 10:36:23,545 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10184 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/30f5d719-194c-43d2-ad24-e64e1d281147-0_31-134-567_20200507103614.parquet
2020-05-07 10:36:23,551 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10184 State = COMMITTED size 435273 byte
2020-05-07 10:36:23,575 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/0e371eee-262b-4672-bc9e-0d1332dff90c-0_32-134-568_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:23,603 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,604 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,604 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10185 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0e371eee-262b-4672-bc9e-0d1332dff90c-0_32-134-568_20200507103614.parquet
2020-05-07 10:36:23,614 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10185 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0e371eee-262b-4672-bc9e-0d1332dff90c-0_32-134-568_20200507103614.parquet
2020-05-07 10:36:23,617 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10185 State = COMMITTED size 435270 byte
2020-05-07 10:36:23,883 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/73df5707-0fd2-4567-80d7-7d8497864118-0_30-134-566_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:23,939 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/dbeb2aa4-4f0a-42bf-a96b-10e540054752-0_33-134-569_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:23,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/30f5d719-194c-43d2-ad24-e64e1d281147-0_31-134-567_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:23,971 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,971 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,971 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10186 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/dbeb2aa4-4f0a-42bf-a96b-10e540054752-0_33-134-569_20200507103614.parquet
2020-05-07 10:36:23,988 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10186 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/dbeb2aa4-4f0a-42bf-a96b-10e540054752-0_33-134-569_20200507103614.parquet
2020-05-07 10:36:23,992 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10186 State = COMMITTED size 435279 byte
2020-05-07 10:36:24,019 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0e371eee-262b-4672-bc9e-0d1332dff90c-0_32-134-568_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:24,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/685452a9-cc3b-432d-9653-f31bb1fc5f2b-0_34-134-570_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:24,120 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/3a59e89b-6805-4a14-a666-56cce9f57915-0_35-134-571_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:24,133 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,133 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,133 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10187 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/685452a9-cc3b-432d-9653-f31bb1fc5f2b-0_34-134-570_20200507103614.parquet
2020-05-07 10:36:24,165 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,165 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,165 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10188 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3a59e89b-6805-4a14-a666-56cce9f57915-0_35-134-571_20200507103614.parquet
2020-05-07 10:36:24,168 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10187 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:24,175 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/685452a9-cc3b-432d-9653-f31bb1fc5f2b-0_34-134-570_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:24,199 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10188 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3a59e89b-6805-4a14-a666-56cce9f57915-0_35-134-571_20200507103614.parquet
2020-05-07 10:36:24,206 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10188 State = COMMITTED size 435311 byte
2020-05-07 10:36:24,382 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/a834b508-029b-415f-a418-12896a461180-0_36-134-572_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:24,394 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/dbeb2aa4-4f0a-42bf-a96b-10e540054752-0_33-134-569_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:24,423 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,423 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,423 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10189 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a834b508-029b-415f-a418-12896a461180-0_36-134-572_20200507103614.parquet
2020-05-07 10:36:24,441 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10189 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a834b508-029b-415f-a418-12896a461180-0_36-134-572_20200507103614.parquet
2020-05-07 10:36:24,448 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10189 State = COMMITTED size 435406 byte
2020-05-07 10:36:24,467 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/bbc0d057-0466-4635-a948-4bf96b98fd54-0_37-134-573_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:24,497 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,497 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,498 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10190 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/bbc0d057-0466-4635-a948-4bf96b98fd54-0_37-134-573_20200507103614.parquet
2020-05-07 10:36:24,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10190 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/bbc0d057-0466-4635-a948-4bf96b98fd54-0_37-134-573_20200507103614.parquet
2020-05-07 10:36:24,510 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10190 State = COMMITTED size 435390 byte
2020-05-07 10:36:24,609 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3a59e89b-6805-4a14-a666-56cce9f57915-0_35-134-571_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:24,678 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/b7abfc91-d49c-40f0-b91d-3181cd5e378e-0_38-134-574_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:24,704 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,704 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,704 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10191 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/b7abfc91-d49c-40f0-b91d-3181cd5e378e-0_38-134-574_20200507103614.parquet
2020-05-07 10:36:24,712 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10191 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/b7abfc91-d49c-40f0-b91d-3181cd5e378e-0_38-134-574_20200507103614.parquet
2020-05-07 10:36:24,715 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10191 State = COMMITTED size 435424 byte
2020-05-07 10:36:24,845 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a834b508-029b-415f-a418-12896a461180-0_36-134-572_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:24,911 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/19cb8c99-849b-46cf-93a8-e0a4433ca8f8-0_39-134-575_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:24,913 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/bbc0d057-0466-4635-a948-4bf96b98fd54-0_37-134-573_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:24,935 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,935 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:24,935 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10192 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/19cb8c99-849b-46cf-93a8-e0a4433ca8f8-0_39-134-575_20200507103614.parquet
2020-05-07 10:36:24,944 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10192 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/19cb8c99-849b-46cf-93a8-e0a4433ca8f8-0_39-134-575_20200507103614.parquet
2020-05-07 10:36:24,949 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10192 State = COMMITTED size 435409 byte
2020-05-07 10:36:24,982 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/979c9fbf-4e89-40a4-a35a-48b1fadb0439-0_40-134-576_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:25,005 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,006 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,006 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10193 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/979c9fbf-4e89-40a4-a35a-48b1fadb0439-0_40-134-576_20200507103614.parquet
2020-05-07 10:36:25,016 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10193 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/979c9fbf-4e89-40a4-a35a-48b1fadb0439-0_40-134-576_20200507103614.parquet
2020-05-07 10:36:25,020 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10193 State = COMMITTED size 435445 byte
2020-05-07 10:36:25,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/b7abfc91-d49c-40f0-b91d-3181cd5e378e-0_38-134-574_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:25,167 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/5ae6aa81-2274-4626-8b8c-7641865d8d7b-0_41-134-577_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:25,189 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,189 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,189 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10194 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5ae6aa81-2274-4626-8b8c-7641865d8d7b-0_41-134-577_20200507103614.parquet
2020-05-07 10:36:25,202 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10194 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5ae6aa81-2274-4626-8b8c-7641865d8d7b-0_41-134-577_20200507103614.parquet
2020-05-07 10:36:25,207 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10194 State = COMMITTED size 434778 byte
2020-05-07 10:36:25,349 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/19cb8c99-849b-46cf-93a8-e0a4433ca8f8-0_39-134-575_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:25,406 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/44b6f8c0-c1b7-49a0-8d7e-5e2ac2af3bd3-0_42-134-578_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:25,421 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/979c9fbf-4e89-40a4-a35a-48b1fadb0439-0_40-134-576_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:25,430 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,430 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,430 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10195 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44b6f8c0-c1b7-49a0-8d7e-5e2ac2af3bd3-0_42-134-578_20200507103614.parquet
2020-05-07 10:36:25,439 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10195 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44b6f8c0-c1b7-49a0-8d7e-5e2ac2af3bd3-0_42-134-578_20200507103614.parquet
2020-05-07 10:36:25,442 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10195 State = COMMITTED size 434798 byte
2020-05-07 10:36:25,468 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/b3d540d8-0f06-458d-b64f-12a1a1556862-0_43-134-579_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:25,485 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:25,485 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,485 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10196 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/b3d540d8-0f06-458d-b64f-12a1a1556862-0_43-134-579_20200507103614.parquet | |
2020-05-07 10:36:25,492 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10196 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/b3d540d8-0f06-458d-b64f-12a1a1556862-0_43-134-579_20200507103614.parquet | |
2020-05-07 10:36:25,495 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10196 State = COMMITTED size 434830 byte | |
2020-05-07 10:36:25,606 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5ae6aa81-2274-4626-8b8c-7641865d8d7b-0_41-134-577_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:25,647 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/1f1fe8d4-2b85-4ac2-9a50-5db40b5f8482-0_44-134-580_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:25,666 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,666 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,666 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10197 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/1f1fe8d4-2b85-4ac2-9a50-5db40b5f8482-0_44-134-580_20200507103614.parquet | |
2020-05-07 10:36:25,673 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10197 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/1f1fe8d4-2b85-4ac2-9a50-5db40b5f8482-0_44-134-580_20200507103614.parquet | |
2020-05-07 10:36:25,676 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10197 State = COMMITTED size 434841 byte | |
2020-05-07 10:36:25,842 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44b6f8c0-c1b7-49a0-8d7e-5e2ac2af3bd3-0_42-134-578_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:25,882 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,882 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,883 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10198 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_45 | |
2020-05-07 10:36:25,891 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_45 for HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:25,896 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10198 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_45 | |
2020-05-07 10:36:25,896 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/b3d540d8-0f06-458d-b64f-12a1a1556862-0_43-134-579_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:25,899 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10198 State = COMMITTED size 93 byte | |
2020-05-07 10:36:25,936 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,936 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:25,936 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10199 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_46 | |
2020-05-07 10:36:25,944 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_46 for HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:25,949 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10199 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_46 | |
2020-05-07 10:36:25,952 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10199 State = COMMITTED size 93 byte | |
2020-05-07 10:36:26,077 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/1f1fe8d4-2b85-4ac2-9a50-5db40b5f8482-0_44-134-580_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:26,123 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,123 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,123 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10200 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_47 | |
2020-05-07 10:36:26,132 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_47 for HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:26,138 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10200 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_47 | |
2020-05-07 10:36:26,141 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10200 State = COMMITTED size 93 byte | |
2020-05-07 10:36:26,299 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_45 is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:26,345 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/04a8272c-ae80-404b-bebd-59284469c429-0_45-134-581_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:26,353 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_46 is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:26,357 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.unprotectedRenameTo: failed to rename /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_46 to /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata because destination exists | |
2020-05-07 10:36:26,363 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10199 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 | |
2020-05-07 10:36:26,363 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10199 State = COMPLETE 10.0.4.12:50010 | |
2020-05-07 10:36:26,373 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,373 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,373 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10201 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/04a8272c-ae80-404b-bebd-59284469c429-0_45-134-581_20200507103614.parquet | |
2020-05-07 10:36:26,375 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/47b3d6a4-bdc2-4049-aee3-2d18acee62e3-0_46-134-582_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:26,384 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10201 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/04a8272c-ae80-404b-bebd-59284469c429-0_45-134-581_20200507103614.parquet | |
2020-05-07 10:36:26,388 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10201 State = COMMITTED size 434812 byte | |
2020-05-07 10:36:26,395 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,395 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,395 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10202 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/47b3d6a4-bdc2-4049-aee3-2d18acee62e3-0_46-134-582_20200507103614.parquet | |
2020-05-07 10:36:26,403 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10202 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/47b3d6a4-bdc2-4049-aee3-2d18acee62e3-0_46-134-582_20200507103614.parquet | |
2020-05-07 10:36:26,407 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10202 State = COMMITTED size 434886 byte | |
2020-05-07 10:36:26,541 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_47 is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:26,546 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.unprotectedRenameTo: failed to rename /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata_47 to /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/.hoodie_partition_metadata because destination exists | |
2020-05-07 10:36:26,557 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10200 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 | |
2020-05-07 10:36:26,557 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10200 State = COMPLETE 10.0.4.12:50010 | |
2020-05-07 10:36:26,568 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/0af57621-ef64-48dd-9a84-a8d3788c3a49-0_47-134-583_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:26,586 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,586 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10203 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0af57621-ef64-48dd-9a84-a8d3788c3a49-0_47-134-583_20200507103614.parquet | |
2020-05-07 10:36:26,596 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10203 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0af57621-ef64-48dd-9a84-a8d3788c3a49-0_47-134-583_20200507103614.parquet | |
2020-05-07 10:36:26,599 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10203 State = COMMITTED size 434881 byte | |
2020-05-07 10:36:26,787 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/04a8272c-ae80-404b-bebd-59284469c429-0_45-134-581_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:26,810 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/47b3d6a4-bdc2-4049-aee3-2d18acee62e3-0_46-134-582_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:26,833 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/4804e3f5-1ed2-4a3a-a42c-ecd6757a9094-0_48-134-584_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:26,852 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,852 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,852 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10204 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4804e3f5-1ed2-4a3a-a42c-ecd6757a9094-0_48-134-584_20200507103614.parquet | |
2020-05-07 10:36:26,858 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/1a300d5a-e0fb-44cb-896c-ceb337a76928-0_49-134-585_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:26,861 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10204 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4804e3f5-1ed2-4a3a-a42c-ecd6757a9094-0_48-134-584_20200507103614.parquet | |
2020-05-07 10:36:26,864 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10204 State = COMMITTED size 434881 byte | |
2020-05-07 10:36:26,877 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,877 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:26,877 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10205 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1a300d5a-e0fb-44cb-896c-ceb337a76928-0_49-134-585_20200507103614.parquet | |
2020-05-07 10:36:26,885 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10205 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1a300d5a-e0fb-44cb-896c-ceb337a76928-0_49-134-585_20200507103614.parquet | |
2020-05-07 10:36:26,888 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10205 State = COMMITTED size 434907 byte | |
2020-05-07 10:36:26,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0af57621-ef64-48dd-9a84-a8d3788c3a49-0_47-134-583_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:27,041 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/dc95acb3-803d-41dc-b8d0-369e605d754a-0_50-134-586_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:27,057 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:27,057 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:27,057 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10206 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/dc95acb3-803d-41dc-b8d0-369e605d754a-0_50-134-586_20200507103614.parquet | |
2020-05-07 10:36:27,063 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10206 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/dc95acb3-803d-41dc-b8d0-369e605d754a-0_50-134-586_20200507103614.parquet | |
2020-05-07 10:36:27,066 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10206 State = COMMITTED size 434895 byte | |
2020-05-07 10:36:27,264 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4804e3f5-1ed2-4a3a-a42c-ecd6757a9094-0_48-134-584_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:27,296 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1a300d5a-e0fb-44cb-896c-ceb337a76928-0_49-134-585_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:27,373 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/c80f0bd5-577a-4359-8182-d7fddffeece0-0_51-134-587_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:27,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/e5408dc3-7f8c-40b5-8ecc-365215c5372a-0_52-134-588_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:27,401 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,401 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,401 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,401 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10207 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/c80f0bd5-577a-4359-8182-d7fddffeece0-0_51-134-587_20200507103614.parquet
2020-05-07 10:36:27,401 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,401 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10208 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e5408dc3-7f8c-40b5-8ecc-365215c5372a-0_52-134-588_20200507103614.parquet
2020-05-07 10:36:27,413 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10207 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/c80f0bd5-577a-4359-8182-d7fddffeece0-0_51-134-587_20200507103614.parquet
2020-05-07 10:36:27,415 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10208 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:27,420 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e5408dc3-7f8c-40b5-8ecc-365215c5372a-0_52-134-588_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:27,429 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10207 State = COMMITTED size 434876 byte
2020-05-07 10:36:27,471 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/dc95acb3-803d-41dc-b8d0-369e605d754a-0_50-134-586_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:27,483 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/db423b2e-f1fc-4439-a682-f1909464077b-0_53-134-589_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:27,508 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,508 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,508 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10209 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/db423b2e-f1fc-4439-a682-f1909464077b-0_53-134-589_20200507103614.parquet
2020-05-07 10:36:27,518 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10209 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/db423b2e-f1fc-4439-a682-f1909464077b-0_53-134-589_20200507103614.parquet
2020-05-07 10:36:27,522 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10209 State = COMMITTED size 434888 byte
2020-05-07 10:36:27,529 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/770128ba-c98e-428b-a9f8-57f2d6f7abdc-0_54-134-590_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:27,546 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,546 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,546 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10210 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/770128ba-c98e-428b-a9f8-57f2d6f7abdc-0_54-134-590_20200507103614.parquet
2020-05-07 10:36:27,556 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10210 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/770128ba-c98e-428b-a9f8-57f2d6f7abdc-0_54-134-590_20200507103614.parquet
2020-05-07 10:36:27,559 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10210 State = COMMITTED size 434894 byte
2020-05-07 10:36:27,818 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/c80f0bd5-577a-4359-8182-d7fddffeece0-0_51-134-587_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:27,860 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/9db94ed9-8cab-4053-8f98-5decc307e9e9-0_55-134-591_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:27,875 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,876 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,876 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10211 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/9db94ed9-8cab-4053-8f98-5decc307e9e9-0_55-134-591_20200507103614.parquet
2020-05-07 10:36:27,883 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10211 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/9db94ed9-8cab-4053-8f98-5decc307e9e9-0_55-134-591_20200507103614.parquet
2020-05-07 10:36:27,886 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10211 State = COMMITTED size 434886 byte
2020-05-07 10:36:27,923 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/db423b2e-f1fc-4439-a682-f1909464077b-0_53-134-589_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:27,960 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/770128ba-c98e-428b-a9f8-57f2d6f7abdc-0_54-134-590_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:27,964 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/efddf7e9-29cc-4ec5-93f5-9ddf0033dda5-0_56-134-592_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:27,986 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,986 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:27,986 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10212 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/efddf7e9-29cc-4ec5-93f5-9ddf0033dda5-0_56-134-592_20200507103614.parquet
2020-05-07 10:36:27,997 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10212 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/efddf7e9-29cc-4ec5-93f5-9ddf0033dda5-0_56-134-592_20200507103614.parquet
2020-05-07 10:36:28,002 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10212 State = COMMITTED size 434840 byte
2020-05-07 10:36:28,008 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/b2705644-f5e1-4dbc-8459-bd5127b91607-0_57-134-593_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:28,026 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,026 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,026 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10213 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/b2705644-f5e1-4dbc-8459-bd5127b91607-0_57-134-593_20200507103614.parquet
2020-05-07 10:36:28,033 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10213 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/b2705644-f5e1-4dbc-8459-bd5127b91607-0_57-134-593_20200507103614.parquet
2020-05-07 10:36:28,036 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10213 State = COMMITTED size 434882 byte
2020-05-07 10:36:28,287 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/9db94ed9-8cab-4053-8f98-5decc307e9e9-0_55-134-591_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:28,338 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/066e0a2d-b8b1-4159-979f-11176eb4d3c7-0_58-134-594_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:28,356 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,356 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,356 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10214 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/066e0a2d-b8b1-4159-979f-11176eb4d3c7-0_58-134-594_20200507103614.parquet
2020-05-07 10:36:28,365 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10214 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:28,378 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/066e0a2d-b8b1-4159-979f-11176eb4d3c7-0_58-134-594_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:28,402 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/efddf7e9-29cc-4ec5-93f5-9ddf0033dda5-0_56-134-592_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:28,426 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/eb4b1bc5-b5d3-42c8-9942-3b8079f4dce4-0_59-134-595_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:28,438 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/b2705644-f5e1-4dbc-8459-bd5127b91607-0_57-134-593_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:28,473 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,473 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,473 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10215 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/eb4b1bc5-b5d3-42c8-9942-3b8079f4dce4-0_59-134-595_20200507103614.parquet
2020-05-07 10:36:28,505 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10215 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/eb4b1bc5-b5d3-42c8-9942-3b8079f4dce4-0_59-134-595_20200507103614.parquet
2020-05-07 10:36:28,511 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10215 State = COMMITTED size 434883 byte
2020-05-07 10:36:28,516 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10199_1001, blk_10200_1001]
2020-05-07 10:36:28,519 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/33714399-cb23-4e78-a949-195149aa4d5c-0_61-134-597_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:28,525 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/4cc580f8-3fff-4dfe-ac53-746e8148ca09-0_60-134-596_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:28,551 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,551 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,552 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10216 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4cc580f8-3fff-4dfe-ac53-746e8148ca09-0_60-134-596_20200507103614.parquet
2020-05-07 10:36:28,560 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,560 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,560 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10217 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/33714399-cb23-4e78-a949-195149aa4d5c-0_61-134-597_20200507103614.parquet
2020-05-07 10:36:28,562 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10216 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4cc580f8-3fff-4dfe-ac53-746e8148ca09-0_60-134-596_20200507103614.parquet
2020-05-07 10:36:28,568 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10216 State = COMMITTED size 434874 byte
2020-05-07 10:36:28,569 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10217 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/33714399-cb23-4e78-a949-195149aa4d5c-0_61-134-597_20200507103614.parquet
2020-05-07 10:36:28,574 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10217 State = COMMITTED size 434886 byte
2020-05-07 10:36:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:36:28,909 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/eb4b1bc5-b5d3-42c8-9942-3b8079f4dce4-0_59-134-595_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:28,948 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/12a0041a-d035-4e09-a270-ef080a030ac1-0_62-134-598_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:28,965 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,965 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:28,965 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10218 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/12a0041a-d035-4e09-a270-ef080a030ac1-0_62-134-598_20200507103614.parquet
2020-05-07 10:36:28,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4cc580f8-3fff-4dfe-ac53-746e8148ca09-0_60-134-596_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:28,974 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/33714399-cb23-4e78-a949-195149aa4d5c-0_61-134-597_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:28,979 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10218 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:28,993 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/12a0041a-d035-4e09-a270-ef080a030ac1-0_62-134-598_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:29,018 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/1c1860c4-5988-493e-85f9-710462d27b27-0_63-134-599_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:29,036 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/d7ed3b24-4dba-41b4-a39b-b0191c5da845-0_64-134-600_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:29,038 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,038 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,038 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10219 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1c1860c4-5988-493e-85f9-710462d27b27-0_63-134-599_20200507103614.parquet
2020-05-07 10:36:29,042 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/e607aba9-d910-4a6a-932a-53a6c2000ae4-0_65-134-601_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:29,049 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10219 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1c1860c4-5988-493e-85f9-710462d27b27-0_63-134-599_20200507103614.parquet
2020-05-07 10:36:29,053 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10219 State = COMMITTED size 434875 byte
2020-05-07 10:36:29,062 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,062 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,062 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10220 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d7ed3b24-4dba-41b4-a39b-b0191c5da845-0_64-134-600_20200507103614.parquet
2020-05-07 10:36:29,068 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,068 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,068 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10221 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e607aba9-d910-4a6a-932a-53a6c2000ae4-0_65-134-601_20200507103614.parquet
2020-05-07 10:36:29,070 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10220 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d7ed3b24-4dba-41b4-a39b-b0191c5da845-0_64-134-600_20200507103614.parquet
2020-05-07 10:36:29,075 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10220 State = COMMITTED size 434902 byte
2020-05-07 10:36:29,079 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10221 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e607aba9-d910-4a6a-932a-53a6c2000ae4-0_65-134-601_20200507103614.parquet
2020-05-07 10:36:29,082 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10221 State = COMMITTED size 434894 byte
2020-05-07 10:36:29,453 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1c1860c4-5988-493e-85f9-710462d27b27-0_63-134-599_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:29,476 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d7ed3b24-4dba-41b4-a39b-b0191c5da845-0_64-134-600_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:29,484 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e607aba9-d910-4a6a-932a-53a6c2000ae4-0_65-134-601_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:29,498 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/f7778940-7192-4186-8fdb-e9fdebabafbc-0_66-134-602_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:29,529 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,529 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:29,529 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10222 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/f7778940-7192-4186-8fdb-e9fdebabafbc-0_66-134-602_20200507103614.parquet | |
2020-05-07 10:36:29,544 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/42f7b92b-2ffe-49b2-a806-33d24d5903ce-0_68-134-604_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:29,548 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/7ebfc7bd-6b78-48d7-8c36-0e19434fe694-0_67-134-603_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:29,548 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10222 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/f7778940-7192-4186-8fdb-e9fdebabafbc-0_66-134-602_20200507103614.parquet | |
2020-05-07 10:36:29,558 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10222 State = COMMITTED size 434889 byte | |
2020-05-07 10:36:29,578 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:29,578 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:29,578 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10223 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/42f7b92b-2ffe-49b2-a806-33d24d5903ce-0_68-134-604_20200507103614.parquet | |
2020-05-07 10:36:29,588 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10223 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/42f7b92b-2ffe-49b2-a806-33d24d5903ce-0_68-134-604_20200507103614.parquet | |
2020-05-07 10:36:29,593 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10223 State = COMMITTED size 434894 byte | |
2020-05-07 10:36:29,609 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:29,609 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:29,609 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10224 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/7ebfc7bd-6b78-48d7-8c36-0e19434fe694-0_67-134-603_20200507103614.parquet | |
2020-05-07 10:36:29,618 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10224 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/7ebfc7bd-6b78-48d7-8c36-0e19434fe694-0_67-134-603_20200507103614.parquet | |
2020-05-07 10:36:29,622 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10224 State = COMMITTED size 434847 byte | |
2020-05-07 10:36:29,955 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/f7778940-7192-4186-8fdb-e9fdebabafbc-0_66-134-602_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:30,000 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/42f7b92b-2ffe-49b2-a806-33d24d5903ce-0_68-134-604_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:30,021 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/d4729767-8403-43b6-8f57-633f8cda3901-0_69-134-605_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:30,028 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/7ebfc7bd-6b78-48d7-8c36-0e19434fe694-0_67-134-603_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:30,050 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,050 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10225 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d4729767-8403-43b6-8f57-633f8cda3901-0_69-134-605_20200507103614.parquet | |
2020-05-07 10:36:30,069 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10225 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d4729767-8403-43b6-8f57-633f8cda3901-0_69-134-605_20200507103614.parquet | |
2020-05-07 10:36:30,078 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/71a8e8d4-8b23-4f8d-9b48-1c6e5cde1f1f-0_70-134-606_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:30,084 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10225 State = COMMITTED size 434920 byte | |
2020-05-07 10:36:30,104 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,104 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,104 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10226 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/71a8e8d4-8b23-4f8d-9b48-1c6e5cde1f1f-0_70-134-606_20200507103614.parquet | |
2020-05-07 10:36:30,115 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/d9400975-49e9-4163-929a-5cb91693565c-0_71-134-607_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:30,117 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10226 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/71a8e8d4-8b23-4f8d-9b48-1c6e5cde1f1f-0_70-134-606_20200507103614.parquet | |
2020-05-07 10:36:30,123 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10226 State = COMMITTED size 434900 byte | |
2020-05-07 10:36:30,138 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,138 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,138 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10227 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d9400975-49e9-4163-929a-5cb91693565c-0_71-134-607_20200507103614.parquet | |
2020-05-07 10:36:30,147 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10227 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d9400975-49e9-4163-929a-5cb91693565c-0_71-134-607_20200507103614.parquet | |
2020-05-07 10:36:30,151 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10227 State = COMMITTED size 434913 byte | |
2020-05-07 10:36:30,475 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d4729767-8403-43b6-8f57-633f8cda3901-0_69-134-605_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:30,522 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/59e16938-4d1e-401c-bd57-671ef5793a59-0_72-134-608_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:30,524 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/71a8e8d4-8b23-4f8d-9b48-1c6e5cde1f1f-0_70-134-606_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:30,552 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,552 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,552 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10228 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/59e16938-4d1e-401c-bd57-671ef5793a59-0_72-134-608_20200507103614.parquet | |
2020-05-07 10:36:30,553 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d9400975-49e9-4163-929a-5cb91693565c-0_71-134-607_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:30,572 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10228 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/59e16938-4d1e-401c-bd57-671ef5793a59-0_72-134-608_20200507103614.parquet | |
2020-05-07 10:36:30,583 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10228 State = COMMITTED size 434876 byte | |
2020-05-07 10:36:30,597 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/d96e5c14-8aaf-4297-972d-4a0113ced7cf-0_73-134-609_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:30,611 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/0811ef32-56d0-438b-a05d-8b312dde1dd1-0_74-134-610_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:30,619 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,619 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,619 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10229 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d96e5c14-8aaf-4297-972d-4a0113ced7cf-0_73-134-609_20200507103614.parquet | |
2020-05-07 10:36:30,635 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,635 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:30,635 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10230 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0811ef32-56d0-438b-a05d-8b312dde1dd1-0_74-134-610_20200507103614.parquet | |
2020-05-07 10:36:30,635 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10229 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d96e5c14-8aaf-4297-972d-4a0113ced7cf-0_73-134-609_20200507103614.parquet | |
2020-05-07 10:36:30,644 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10229 State = COMMITTED size 434879 byte | |
2020-05-07 10:36:30,646 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10230 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0811ef32-56d0-438b-a05d-8b312dde1dd1-0_74-134-610_20200507103614.parquet | |
2020-05-07 10:36:30,651 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10230 State = COMMITTED size 434906 byte | |
2020-05-07 10:36:30,978 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/59e16938-4d1e-401c-bd57-671ef5793a59-0_72-134-608_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:31,027 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/a8456a4a-f0dd-4ded-9992-128d57c763ac-0_75-134-611_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:31,040 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/d96e5c14-8aaf-4297-972d-4a0113ced7cf-0_73-134-609_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:31,047 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:31,047 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:31,048 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10231 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/a8456a4a-f0dd-4ded-9992-128d57c763ac-0_75-134-611_20200507103614.parquet | |
2020-05-07 10:36:31,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0811ef32-56d0-438b-a05d-8b312dde1dd1-0_74-134-610_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:31,058 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10231 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/a8456a4a-f0dd-4ded-9992-128d57c763ac-0_75-134-611_20200507103614.parquet | |
2020-05-07 10:36:31,063 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10231 State = COMMITTED size 434901 byte | |
2020-05-07 10:36:31,092 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/a43e5589-32f5-414b-b122-5e2fc3d85065-0_76-134-612_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:31,107 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/de48f1ad-9c33-4c2e-b4c5-a77baf99b561-0_77-134-613_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:31,119 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:31,119 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:31,119 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10232 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/a43e5589-32f5-414b-b122-5e2fc3d85065-0_76-134-612_20200507103614.parquet | |
2020-05-07 10:36:31,129 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10232 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/a43e5589-32f5-414b-b122-5e2fc3d85065-0_76-134-612_20200507103614.parquet | |
2020-05-07 10:36:31,131 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:31,132 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:31,132 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10233 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/de48f1ad-9c33-4c2e-b4c5-a77baf99b561-0_77-134-613_20200507103614.parquet | |
2020-05-07 10:36:31,133 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10232 State = COMMITTED size 434879 byte | |
2020-05-07 10:36:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10233 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/de48f1ad-9c33-4c2e-b4c5-a77baf99b561-0_77-134-613_20200507103614.parquet | |
2020-05-07 10:36:31,146 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10233 State = COMMITTED size 434888 byte | |
2020-05-07 10:36:31,462 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/a8456a4a-f0dd-4ded-9992-128d57c763ac-0_75-134-611_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:31,507 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/8d028b35-bc7e-487c-bec1-f7d96517ac70-0_78-134-614_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:31,528 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:31,528 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:31,528 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10234 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/8d028b35-bc7e-487c-bec1-f7d96517ac70-0_78-134-614_20200507103614.parquet
2020-05-07 10:36:31,533 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/a43e5589-32f5-414b-b122-5e2fc3d85065-0_76-134-612_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:31,548 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10234 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/8d028b35-bc7e-487c-bec1-f7d96517ac70-0_78-134-614_20200507103614.parquet
2020-05-07 10:36:31,549 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/de48f1ad-9c33-4c2e-b4c5-a77baf99b561-0_77-134-613_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:31,554 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10234 State = COMMITTED size 434875 byte
2020-05-07 10:36:31,586 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/e73ffc9e-8f8e-4ff2-ba7d-b53b0d9e2bf5-0_79-134-615_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:31,599 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/4410e653-6872-4632-9f12-c5a7857363fe-0_80-134-616_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:31,613 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:31,613 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:31,613 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10235 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e73ffc9e-8f8e-4ff2-ba7d-b53b0d9e2bf5-0_79-134-615_20200507103614.parquet
2020-05-07 10:36:31,618 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:31,619 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:31,619 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10236 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4410e653-6872-4632-9f12-c5a7857363fe-0_80-134-616_20200507103614.parquet
2020-05-07 10:36:31,624 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10235 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e73ffc9e-8f8e-4ff2-ba7d-b53b0d9e2bf5-0_79-134-615_20200507103614.parquet
2020-05-07 10:36:31,627 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10236 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4410e653-6872-4632-9f12-c5a7857363fe-0_80-134-616_20200507103614.parquet
2020-05-07 10:36:31,628 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10235 State = COMMITTED size 434902 byte
2020-05-07 10:36:31,634 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10236 State = COMMITTED size 434926 byte
2020-05-07 10:36:31,953 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/8d028b35-bc7e-487c-bec1-f7d96517ac70-0_78-134-614_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:31,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/08146247-6923-4d36-a92d-9bc7e6ab29af-0_81-134-617_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:32,018 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,018 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,018 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10237 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/08146247-6923-4d36-a92d-9bc7e6ab29af-0_81-134-617_20200507103614.parquet
2020-05-07 10:36:32,028 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10237 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/08146247-6923-4d36-a92d-9bc7e6ab29af-0_81-134-617_20200507103614.parquet
2020-05-07 10:36:32,028 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e73ffc9e-8f8e-4ff2-ba7d-b53b0d9e2bf5-0_79-134-615_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:32,032 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/4410e653-6872-4632-9f12-c5a7857363fe-0_80-134-616_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:32,032 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10237 State = COMMITTED size 434906 byte
2020-05-07 10:36:32,084 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/e562df69-bc60-432d-8e9c-3afb635d51fb-0_83-134-619_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:32,111 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,111 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,111 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10238 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e562df69-bc60-432d-8e9c-3afb635d51fb-0_83-134-619_20200507103614.parquet
2020-05-07 10:36:32,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/1d638f0e-e58f-48b7-b7ef-b42e039a79b9-0_82-134-618_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:32,122 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10238 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e562df69-bc60-432d-8e9c-3afb635d51fb-0_83-134-619_20200507103614.parquet
2020-05-07 10:36:32,125 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10238 State = COMMITTED size 434900 byte
2020-05-07 10:36:32,130 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,130 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,130 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10239 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1d638f0e-e58f-48b7-b7ef-b42e039a79b9-0_82-134-618_20200507103614.parquet
2020-05-07 10:36:32,137 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10239 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1d638f0e-e58f-48b7-b7ef-b42e039a79b9-0_82-134-618_20200507103614.parquet
2020-05-07 10:36:32,139 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10239 State = COMMITTED size 434913 byte
2020-05-07 10:36:32,432 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/08146247-6923-4d36-a92d-9bc7e6ab29af-0_81-134-617_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:32,471 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/7c804c03-dbdd-479b-8f86-f22033d24a32-0_84-134-620_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:32,487 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,488 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,488 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10240 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/7c804c03-dbdd-479b-8f86-f22033d24a32-0_84-134-620_20200507103614.parquet
2020-05-07 10:36:32,495 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10240 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/7c804c03-dbdd-479b-8f86-f22033d24a32-0_84-134-620_20200507103614.parquet
2020-05-07 10:36:32,498 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10240 State = COMMITTED size 434909 byte
2020-05-07 10:36:32,526 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e562df69-bc60-432d-8e9c-3afb635d51fb-0_83-134-619_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:32,540 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/1d638f0e-e58f-48b7-b7ef-b42e039a79b9-0_82-134-618_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:32,591 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/8b4a14a6-2bed-498d-8941-ba0465225caf-0_85-134-621_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:32,610 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/dafcc803-60ab-4afd-b595-f84a2e994a9c-0_86-134-622_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:32,620 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,620 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,620 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10241 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/8b4a14a6-2bed-498d-8941-ba0465225caf-0_85-134-621_20200507103614.parquet
2020-05-07 10:36:32,629 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,629 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,629 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10242 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/dafcc803-60ab-4afd-b595-f84a2e994a9c-0_86-134-622_20200507103614.parquet
2020-05-07 10:36:32,632 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10241 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:32,639 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/8b4a14a6-2bed-498d-8941-ba0465225caf-0_85-134-621_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:32,655 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10242 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/dafcc803-60ab-4afd-b595-f84a2e994a9c-0_86-134-622_20200507103614.parquet
2020-05-07 10:36:32,658 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10242 State = COMMITTED size 434873 byte
2020-05-07 10:36:32,694 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/e75b91c2-75fa-45af-b43b-302fba36c096-0_87-134-623_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:32,724 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,725 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,725 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10243 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e75b91c2-75fa-45af-b43b-302fba36c096-0_87-134-623_20200507103614.parquet
2020-05-07 10:36:32,734 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10243 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e75b91c2-75fa-45af-b43b-302fba36c096-0_87-134-623_20200507103614.parquet
2020-05-07 10:36:32,739 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10243 State = COMMITTED size 434861 byte
2020-05-07 10:36:32,900 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/7c804c03-dbdd-479b-8f86-f22033d24a32-0_84-134-620_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:32,939 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/bcdb2519-c117-493c-987e-18aa729144f8-0_88-134-624_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:32,956 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,956 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:32,956 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10244 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/bcdb2519-c117-493c-987e-18aa729144f8-0_88-134-624_20200507103614.parquet
2020-05-07 10:36:32,964 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10244 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/bcdb2519-c117-493c-987e-18aa729144f8-0_88-134-624_20200507103614.parquet
2020-05-07 10:36:32,967 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10244 State = COMMITTED size 434858 byte
2020-05-07 10:36:33,058 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/dafcc803-60ab-4afd-b595-f84a2e994a9c-0_86-134-622_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:33,098 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/3e6421f8-9894-4425-a2a6-3a7682ca8825-0_89-134-625_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:33,119 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,119 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,119 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10245 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/3e6421f8-9894-4425-a2a6-3a7682ca8825-0_89-134-625_20200507103614.parquet
2020-05-07 10:36:33,127 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10245 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/3e6421f8-9894-4425-a2a6-3a7682ca8825-0_89-134-625_20200507103614.parquet
2020-05-07 10:36:33,130 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10245 State = COMMITTED size 434897 byte
2020-05-07 10:36:33,137 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/e75b91c2-75fa-45af-b43b-302fba36c096-0_87-134-623_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:33,176 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/51f0ced9-438b-4393-9b28-54bea6889f3f-0_90-134-626_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:33,191 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,191 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,191 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10246 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/51f0ced9-438b-4393-9b28-54bea6889f3f-0_90-134-626_20200507103614.parquet
2020-05-07 10:36:33,197 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10246 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/51f0ced9-438b-4393-9b28-54bea6889f3f-0_90-134-626_20200507103614.parquet
2020-05-07 10:36:33,200 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10246 State = COMMITTED size 434818 byte
2020-05-07 10:36:33,368 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/bcdb2519-c117-493c-987e-18aa729144f8-0_88-134-624_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:33,408 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/10f063b2-004b-47ee-ac54-15356924ad32-0_91-134-627_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:33,429 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,429 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,429 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10247 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/10f063b2-004b-47ee-ac54-15356924ad32-0_91-134-627_20200507103614.parquet
2020-05-07 10:36:33,437 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10247 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/10f063b2-004b-47ee-ac54-15356924ad32-0_91-134-627_20200507103614.parquet
2020-05-07 10:36:33,440 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10247 State = COMMITTED size 434873 byte
2020-05-07 10:36:33,530 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/3e6421f8-9894-4425-a2a6-3a7682ca8825-0_89-134-625_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:33,570 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/38d29986-0a24-497e-8647-512ab735053b-0_92-134-628_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:33,593 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,593 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,594 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10248 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/38d29986-0a24-497e-8647-512ab735053b-0_92-134-628_20200507103614.parquet
2020-05-07 10:36:33,602 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/51f0ced9-438b-4393-9b28-54bea6889f3f-0_90-134-626_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:33,606 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10248 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/38d29986-0a24-497e-8647-512ab735053b-0_92-134-628_20200507103614.parquet
2020-05-07 10:36:33,610 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10248 State = COMMITTED size 434881 byte
2020-05-07 10:36:33,648 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/0b7426f8-d679-4428-ab62-4198a581d1c8-0_93-134-629_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:33,664 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:33,664 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:33,664 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10249 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0b7426f8-d679-4428-ab62-4198a581d1c8-0_93-134-629_20200507103614.parquet | |
2020-05-07 10:36:33,671 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10249 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0b7426f8-d679-4428-ab62-4198a581d1c8-0_93-134-629_20200507103614.parquet | |
2020-05-07 10:36:33,674 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10249 State = COMMITTED size 434921 byte | |
2020-05-07 10:36:33,844 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/10f063b2-004b-47ee-ac54-15356924ad32-0_91-134-627_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:34,011 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/38d29986-0a24-497e-8647-512ab735053b-0_92-134-628_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:34,022 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/2/5b44d16c-2d9c-41d2-95c1-0913736883c1-0_94-134-630_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:34,043 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,043 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10250 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/5b44d16c-2d9c-41d2-95c1-0913736883c1-0_94-134-630_20200507103614.parquet | |
2020-05-07 10:36:34,054 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10250 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/5b44d16c-2d9c-41d2-95c1-0913736883c1-0_94-134-630_20200507103614.parquet | |
2020-05-07 10:36:34,061 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10250 State = COMMITTED size 434886 byte | |
2020-05-07 10:36:34,068 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,068 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,068 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10251 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_95 | |
2020-05-07 10:36:34,075 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/0b7426f8-d679-4428-ab62-4198a581d1c8-0_93-134-629_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:34,077 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_95 for HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:34,084 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10251 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_95 | |
2020-05-07 10:36:34,090 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10251 State = COMMITTED size 93 byte | |
2020-05-07 10:36:34,125 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,126 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,126 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10252 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_96 | |
2020-05-07 10:36:34,133 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_96 for HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:34,139 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10252 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_96 | |
2020-05-07 10:36:34,142 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10252 State = COMMITTED size 93 byte | |
2020-05-07 10:36:34,459 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/2/5b44d16c-2d9c-41d2-95c1-0913736883c1-0_94-134-630_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:34,488 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_95 is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:34,507 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/0f42f835-9e50-4b85-b7e1-ebd34a8dac4d-0_95-134-631_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:34,526 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/0c43f19b-6116-41f3-b602-576080859229-0_97-134-633_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:34,537 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,537 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,537 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10253 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0f42f835-9e50-4b85-b7e1-ebd34a8dac4d-0_95-134-631_20200507103614.parquet | |
2020-05-07 10:36:34,544 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_96 is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:34,546 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,546 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,547 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10254 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0c43f19b-6116-41f3-b602-576080859229-0_97-134-633_20200507103614.parquet | |
2020-05-07 10:36:34,548 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10253 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0f42f835-9e50-4b85-b7e1-ebd34a8dac4d-0_95-134-631_20200507103614.parquet | |
2020-05-07 10:36:34,549 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.unprotectedRenameTo: failed to rename /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata_96 to /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/.hoodie_partition_metadata because destination exists | |
2020-05-07 10:36:34,555 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10253 State = COMMITTED size 435365 byte | |
2020-05-07 10:36:34,555 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10254 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0c43f19b-6116-41f3-b602-576080859229-0_97-134-633_20200507103614.parquet | |
2020-05-07 10:36:34,557 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10252 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 | |
2020-05-07 10:36:34,557 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10252 State = COMPLETE 10.0.4.12:50010 | |
2020-05-07 10:36:34,561 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10254 State = COMMITTED size 435322 byte | |
2020-05-07 10:36:34,567 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/669aafd8-fcf1-4dc5-9115-08ef00318bc2-0_96-134-632_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:34,588 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,588 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:34,588 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10255 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/669aafd8-fcf1-4dc5-9115-08ef00318bc2-0_96-134-632_20200507103614.parquet | |
2020-05-07 10:36:34,597 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10255 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/669aafd8-fcf1-4dc5-9115-08ef00318bc2-0_96-134-632_20200507103614.parquet | |
2020-05-07 10:36:34,601 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10255 State = COMMITTED size 435240 byte | |
2020-05-07 10:36:34,953 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0f42f835-9e50-4b85-b7e1-ebd34a8dac4d-0_95-134-631_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:34,960 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0c43f19b-6116-41f3-b602-576080859229-0_97-134-633_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:35,003 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/669aafd8-fcf1-4dc5-9115-08ef00318bc2-0_96-134-632_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:35,013 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/42bad8fe-0ca5-4373-be1b-c2dfdd178592-0_98-134-634_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:35,019 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/417451e9-01d8-45de-9630-7c8ac115a8e1-0_99-134-635_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:35,044 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,044 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,044 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,044 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,044 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10256 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/417451e9-01d8-45de-9630-7c8ac115a8e1-0_99-134-635_20200507103614.parquet | |
2020-05-07 10:36:35,044 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10257 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/42bad8fe-0ca5-4373-be1b-c2dfdd178592-0_98-134-634_20200507103614.parquet | |
2020-05-07 10:36:35,060 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10256 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:35,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10257 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/42bad8fe-0ca5-4373-be1b-c2dfdd178592-0_98-134-634_20200507103614.parquet | |
2020-05-07 10:36:35,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/417451e9-01d8-45de-9630-7c8ac115a8e1-0_99-134-635_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:35,073 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/7ed9c1cc-94be-481f-9a91-0a5201b1b5f1-0_100-134-636_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:35,076 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10257 State = COMMITTED size 435314 byte | |
2020-05-07 10:36:35,104 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,106 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,106 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10258 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/7ed9c1cc-94be-481f-9a91-0a5201b1b5f1-0_100-134-636_20200507103614.parquet | |
2020-05-07 10:36:35,115 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10258 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/7ed9c1cc-94be-481f-9a91-0a5201b1b5f1-0_100-134-636_20200507103614.parquet | |
2020-05-07 10:36:35,119 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10258 State = COMMITTED size 435279 byte | |
2020-05-07 10:36:35,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/e4527eea-47a6-401a-9da1-82a5f7821d04-0_101-134-637_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:35,144 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,144 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,144 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10259 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/e4527eea-47a6-401a-9da1-82a5f7821d04-0_101-134-637_20200507103614.parquet | |
2020-05-07 10:36:35,151 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10259 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/e4527eea-47a6-401a-9da1-82a5f7821d04-0_101-134-637_20200507103614.parquet | |
2020-05-07 10:36:35,154 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10259 State = COMMITTED size 435300 byte | |
2020-05-07 10:36:35,464 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/42bad8fe-0ca5-4373-be1b-c2dfdd178592-0_98-134-634_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:35,507 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/43285138-ae20-44c0-8c69-c547c863c10d-0_102-134-638_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:35,518 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/7ed9c1cc-94be-481f-9a91-0a5201b1b5f1-0_100-134-636_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:35,522 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:35,523 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,523 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10260 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/43285138-ae20-44c0-8c69-c547c863c10d-0_102-134-638_20200507103614.parquet
2020-05-07 10:36:35,533 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10260 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/43285138-ae20-44c0-8c69-c547c863c10d-0_102-134-638_20200507103614.parquet
2020-05-07 10:36:35,536 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10260 State = COMMITTED size 435191 byte
2020-05-07 10:36:35,554 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/e4527eea-47a6-401a-9da1-82a5f7821d04-0_101-134-637_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:35,569 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/8afa9c11-4a93-4bcf-b866-b929c76c2cb6-0_103-134-639_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:35,586 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,586 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10261 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8afa9c11-4a93-4bcf-b866-b929c76c2cb6-0_103-134-639_20200507103614.parquet
2020-05-07 10:36:35,594 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10261 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8afa9c11-4a93-4bcf-b866-b929c76c2cb6-0_103-134-639_20200507103614.parquet
2020-05-07 10:36:35,597 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10261 State = COMMITTED size 435288 byte
2020-05-07 10:36:35,599 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/4518ab1f-3b70-412d-9dea-47565802b416-0_104-134-640_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:35,615 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,615 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10262 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/4518ab1f-3b70-412d-9dea-47565802b416-0_104-134-640_20200507103614.parquet
2020-05-07 10:36:35,621 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10262 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/4518ab1f-3b70-412d-9dea-47565802b416-0_104-134-640_20200507103614.parquet
2020-05-07 10:36:35,624 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10262 State = COMMITTED size 435182 byte
2020-05-07 10:36:35,936 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/43285138-ae20-44c0-8c69-c547c863c10d-0_102-134-638_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:35,977 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/38f5c786-2599-48a7-91d7-e4c1a7b90340-0_105-134-641_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:35,992 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,992 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:35,992 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10263 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/38f5c786-2599-48a7-91d7-e4c1a7b90340-0_105-134-641_20200507103614.parquet
2020-05-07 10:36:35,998 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8afa9c11-4a93-4bcf-b866-b929c76c2cb6-0_103-134-639_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:36,001 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10263 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/38f5c786-2599-48a7-91d7-e4c1a7b90340-0_105-134-641_20200507103614.parquet
2020-05-07 10:36:36,005 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10263 State = COMMITTED size 435147 byte
2020-05-07 10:36:36,025 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/4518ab1f-3b70-412d-9dea-47565802b416-0_104-134-640_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:36,047 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/4feef456-551e-4605-a269-0314b65bae16-0_106-134-642_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:36,067 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,067 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,067 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10264 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/4feef456-551e-4605-a269-0314b65bae16-0_106-134-642_20200507103614.parquet
2020-05-07 10:36:36,068 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/9de70e12-f1a6-43a3-88dd-a0992dab8edd-0_107-134-643_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:36,075 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10264 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/4feef456-551e-4605-a269-0314b65bae16-0_106-134-642_20200507103614.parquet
2020-05-07 10:36:36,078 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10264 State = COMMITTED size 435370 byte
2020-05-07 10:36:36,085 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,085 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,085 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10265 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/9de70e12-f1a6-43a3-88dd-a0992dab8edd-0_107-134-643_20200507103614.parquet
2020-05-07 10:36:36,093 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10265 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:36,106 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/9de70e12-f1a6-43a3-88dd-a0992dab8edd-0_107-134-643_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:36,147 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/106112d8-e8b8-4a38-89f3-213dfcb598f7-0_108-134-644_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:36,165 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,165 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,165 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10266 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/106112d8-e8b8-4a38-89f3-213dfcb598f7-0_108-134-644_20200507103614.parquet
2020-05-07 10:36:36,173 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10266 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/106112d8-e8b8-4a38-89f3-213dfcb598f7-0_108-134-644_20200507103614.parquet
2020-05-07 10:36:36,176 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10266 State = COMMITTED size 435179 byte
2020-05-07 10:36:36,405 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/38f5c786-2599-48a7-91d7-e4c1a7b90340-0_105-134-641_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:36,447 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/096b03aa-eb34-4161-9152-7c314477e799-0_109-134-645_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:36,463 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,463 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,464 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10267 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/096b03aa-eb34-4161-9152-7c314477e799-0_109-134-645_20200507103614.parquet
2020-05-07 10:36:36,470 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10267 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/096b03aa-eb34-4161-9152-7c314477e799-0_109-134-645_20200507103614.parquet
2020-05-07 10:36:36,473 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10267 State = COMMITTED size 435103 byte
2020-05-07 10:36:36,478 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/4feef456-551e-4605-a269-0314b65bae16-0_106-134-642_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:36,523 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/5d14be0c-8a58-496f-bffa-e904f4c71323-0_110-134-646_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:36,540 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,540 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,540 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10268 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5d14be0c-8a58-496f-bffa-e904f4c71323-0_110-134-646_20200507103614.parquet
2020-05-07 10:36:36,548 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10268 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5d14be0c-8a58-496f-bffa-e904f4c71323-0_110-134-646_20200507103614.parquet
2020-05-07 10:36:36,552 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10268 State = COMMITTED size 435137 byte
2020-05-07 10:36:36,577 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/106112d8-e8b8-4a38-89f3-213dfcb598f7-0_108-134-644_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:36,620 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/9045e2f7-ed1d-48e9-bc9f-44ebbd9a87d0-0_111-134-647_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:36,636 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,636 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,636 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10269 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/9045e2f7-ed1d-48e9-bc9f-44ebbd9a87d0-0_111-134-647_20200507103614.parquet
2020-05-07 10:36:36,643 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10269 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/9045e2f7-ed1d-48e9-bc9f-44ebbd9a87d0-0_111-134-647_20200507103614.parquet
2020-05-07 10:36:36,646 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10269 State = COMMITTED size 435093 byte
2020-05-07 10:36:36,874 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/096b03aa-eb34-4161-9152-7c314477e799-0_109-134-645_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:36,917 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/18f31c0b-76bc-4089-a059-45f01bc9e670-0_112-134-648_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:36,932 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,932 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:36,932 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10270 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/18f31c0b-76bc-4089-a059-45f01bc9e670-0_112-134-648_20200507103614.parquet
2020-05-07 10:36:36,939 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10270 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/18f31c0b-76bc-4089-a059-45f01bc9e670-0_112-134-648_20200507103614.parquet
2020-05-07 10:36:36,941 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10270 State = COMMITTED size 435149 byte
2020-05-07 10:36:36,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5d14be0c-8a58-496f-bffa-e904f4c71323-0_110-134-646_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:36,996 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/5b6cb5ae-c841-48b5-a9f8-9ce5d6c0e405-0_113-134-649_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:37,012 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,012 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,012 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10271 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5b6cb5ae-c841-48b5-a9f8-9ce5d6c0e405-0_113-134-649_20200507103614.parquet
2020-05-07 10:36:37,020 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10271 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:37,033 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5b6cb5ae-c841-48b5-a9f8-9ce5d6c0e405-0_113-134-649_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:37,047 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/9045e2f7-ed1d-48e9-bc9f-44ebbd9a87d0-0_111-134-647_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:37,078 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/14159c47-c18c-4a11-86b4-51a328c2ccd3-0_114-134-650_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:37,089 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/8e7e422c-ccfd-4168-ac9e-88ecfc983904-0_115-134-651_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:37,098 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,098 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,098 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10272 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/14159c47-c18c-4a11-86b4-51a328c2ccd3-0_114-134-650_20200507103614.parquet
2020-05-07 10:36:37,105 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10272 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/14159c47-c18c-4a11-86b4-51a328c2ccd3-0_114-134-650_20200507103614.parquet
2020-05-07 10:36:37,108 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10272 State = COMMITTED size 435050 byte
2020-05-07 10:36:37,113 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,113 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,113 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10273 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8e7e422c-ccfd-4168-ac9e-88ecfc983904-0_115-134-651_20200507103614.parquet
2020-05-07 10:36:37,119 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10273 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8e7e422c-ccfd-4168-ac9e-88ecfc983904-0_115-134-651_20200507103614.parquet
2020-05-07 10:36:37,122 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10273 State = COMMITTED size 435044 byte
2020-05-07 10:36:37,343 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/18f31c0b-76bc-4089-a059-45f01bc9e670-0_112-134-648_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:37,383 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/84b79f7a-5acc-4ff6-b00e-c01824503332-0_116-134-652_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:37,404 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,404 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,405 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10274 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/84b79f7a-5acc-4ff6-b00e-c01824503332-0_116-134-652_20200507103614.parquet
2020-05-07 10:36:37,412 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10274 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/84b79f7a-5acc-4ff6-b00e-c01824503332-0_116-134-652_20200507103614.parquet
2020-05-07 10:36:37,415 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10274 State = COMMITTED size 435049 byte
2020-05-07 10:36:37,508 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/14159c47-c18c-4a11-86b4-51a328c2ccd3-0_114-134-650_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:37,522 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8e7e422c-ccfd-4168-ac9e-88ecfc983904-0_115-134-651_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:37,561 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/0b1dc474-04d6-4a0e-b051-a5556ed7de38-0_117-134-653_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:37,566 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/8f41362e-2dc6-4d2e-8570-d08a26c0ba20-0_118-134-654_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:37,568 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10252_1001]
2020-05-07 10:36:37,579 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,579 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:37,580 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10275 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0b1dc474-04d6-4a0e-b051-a5556ed7de38-0_117-134-653_20200507103614.parquet | |
2020-05-07 10:36:37,587 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10275 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0b1dc474-04d6-4a0e-b051-a5556ed7de38-0_117-134-653_20200507103614.parquet | |
2020-05-07 10:36:37,590 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10275 State = COMMITTED size 435392 byte | |
2020-05-07 10:36:37,591 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:37,591 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:37,592 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10276 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8f41362e-2dc6-4d2e-8570-d08a26c0ba20-0_118-134-654_20200507103614.parquet | |
2020-05-07 10:36:37,600 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10276 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8f41362e-2dc6-4d2e-8570-d08a26c0ba20-0_118-134-654_20200507103614.parquet | |
2020-05-07 10:36:37,603 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10276 State = COMMITTED size 435036 byte | |
2020-05-07 10:36:37,816 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/84b79f7a-5acc-4ff6-b00e-c01824503332-0_116-134-652_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:37,857 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/adaa9ed3-f560-42fa-88ca-2de68f403534-0_119-134-655_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:37,873 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:37,873 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:37,873 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10277 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/adaa9ed3-f560-42fa-88ca-2de68f403534-0_119-134-655_20200507103614.parquet | |
2020-05-07 10:36:37,880 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10277 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/adaa9ed3-f560-42fa-88ca-2de68f403534-0_119-134-655_20200507103614.parquet | |
2020-05-07 10:36:37,883 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10277 State = COMMITTED size 434969 byte | |
2020-05-07 10:36:37,990 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0b1dc474-04d6-4a0e-b051-a5556ed7de38-0_117-134-653_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:38,004 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/8f41362e-2dc6-4d2e-8570-d08a26c0ba20-0_118-134-654_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:38,034 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/0f9c8d2d-a1c6-4260-a6c7-c0cc17972cac-0_120-134-656_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:38,048 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/f32c1ae7-793a-488c-aa79-c8b91c56d65b-0_121-134-657_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:38,053 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,053 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,054 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10278 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0f9c8d2d-a1c6-4260-a6c7-c0cc17972cac-0_120-134-656_20200507103614.parquet | |
2020-05-07 10:36:38,061 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10278 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0f9c8d2d-a1c6-4260-a6c7-c0cc17972cac-0_120-134-656_20200507103614.parquet | |
2020-05-07 10:36:38,064 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10278 State = COMMITTED size 435017 byte | |
2020-05-07 10:36:38,065 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,065 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,065 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10279 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/f32c1ae7-793a-488c-aa79-c8b91c56d65b-0_121-134-657_20200507103614.parquet | |
2020-05-07 10:36:38,071 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10279 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/f32c1ae7-793a-488c-aa79-c8b91c56d65b-0_121-134-657_20200507103614.parquet | |
2020-05-07 10:36:38,074 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10279 State = COMMITTED size 434990 byte | |
2020-05-07 10:36:38,283 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/adaa9ed3-f560-42fa-88ca-2de68f403534-0_119-134-655_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:38,341 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/3b205e38-d708-4284-82a9-d57bf2958d7e-0_122-134-658_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:38,359 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,359 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,359 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10280 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/3b205e38-d708-4284-82a9-d57bf2958d7e-0_122-134-658_20200507103614.parquet | |
2020-05-07 10:36:38,368 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10280 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/3b205e38-d708-4284-82a9-d57bf2958d7e-0_122-134-658_20200507103614.parquet | |
2020-05-07 10:36:38,371 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10280 State = COMMITTED size 434972 byte | |
2020-05-07 10:36:38,464 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0f9c8d2d-a1c6-4260-a6c7-c0cc17972cac-0_120-134-656_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:38,475 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/f32c1ae7-793a-488c-aa79-c8b91c56d65b-0_121-134-657_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:38,514 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/de2ce641-a4a5-461b-86b4-ff2beb03d75c-0_123-134-659_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:38,523 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/faf273b8-027f-47b0-80f9-1035f4878761-0_124-134-660_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:38,531 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,531 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,531 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10281 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/de2ce641-a4a5-461b-86b4-ff2beb03d75c-0_123-134-659_20200507103614.parquet | |
2020-05-07 10:36:38,538 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10281 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/de2ce641-a4a5-461b-86b4-ff2beb03d75c-0_123-134-659_20200507103614.parquet | |
2020-05-07 10:36:38,541 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,541 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10282 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/faf273b8-027f-47b0-80f9-1035f4878761-0_124-134-660_20200507103614.parquet | |
2020-05-07 10:36:38,542 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10281 State = COMMITTED size 434947 byte | |
2020-05-07 10:36:38,552 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10282 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/faf273b8-027f-47b0-80f9-1035f4878761-0_124-134-660_20200507103614.parquet | |
2020-05-07 10:36:38,555 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10282 State = COMMITTED size 434946 byte | |
2020-05-07 10:36:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:36:38,772 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/3b205e38-d708-4284-82a9-d57bf2958d7e-0_122-134-658_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:38,811 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/d00f66c0-6a7e-4421-8a79-a78fcb98f8d0-0_125-134-661_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:38,827 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,827 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:38,827 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10283 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/d00f66c0-6a7e-4421-8a79-a78fcb98f8d0-0_125-134-661_20200507103614.parquet | |
2020-05-07 10:36:38,838 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10283 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/d00f66c0-6a7e-4421-8a79-a78fcb98f8d0-0_125-134-661_20200507103614.parquet | |
2020-05-07 10:36:38,841 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10283 State = COMMITTED size 434921 byte | |
2020-05-07 10:36:38,942 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/de2ce641-a4a5-461b-86b4-ff2beb03d75c-0_123-134-659_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:38,956 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/faf273b8-027f-47b0-80f9-1035f4878761-0_124-134-660_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:38,986 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/36af7fa4-a79b-4535-897f-2dfe3aa2b858-0_126-134-662_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:38,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/a6bc83c9-d33f-43a4-bedc-457dcbaa8c45-0_127-134-663_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:39,003 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:39,003 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:39,004 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10284 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/36af7fa4-a79b-4535-897f-2dfe3aa2b858-0_126-134-662_20200507103614.parquet | |
2020-05-07 10:36:39,010 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10284 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/36af7fa4-a79b-4535-897f-2dfe3aa2b858-0_126-134-662_20200507103614.parquet | |
2020-05-07 10:36:39,013 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10284 State = COMMITTED size 434911 byte | |
2020-05-07 10:36:39,017 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:39,017 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:39,017 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10285 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/a6bc83c9-d33f-43a4-bedc-457dcbaa8c45-0_127-134-663_20200507103614.parquet | |
2020-05-07 10:36:39,024 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10285 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:39,036 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/a6bc83c9-d33f-43a4-bedc-457dcbaa8c45-0_127-134-663_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:39,078 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/809fbcd4-e1d3-4ed0-9697-009c1e85c9fc-0_128-134-664_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:39,094 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:39,094 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:39,094 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10286 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/809fbcd4-e1d3-4ed0-9697-009c1e85c9fc-0_128-134-664_20200507103614.parquet | |
2020-05-07 10:36:39,101 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10286 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/809fbcd4-e1d3-4ed0-9697-009c1e85c9fc-0_128-134-664_20200507103614.parquet | |
2020-05-07 10:36:39,104 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10286 State = COMMITTED size 435346 byte | |
2020-05-07 10:36:39,241 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/d00f66c0-6a7e-4421-8a79-a78fcb98f8d0-0_125-134-661_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:39,284 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/5b99101a-c536-4fcf-839d-2b2de3afa628-0_129-134-665_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:39,300 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,300 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,300 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10287 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5b99101a-c536-4fcf-839d-2b2de3afa628-0_129-134-665_20200507103614.parquet
2020-05-07 10:36:39,306 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10287 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5b99101a-c536-4fcf-839d-2b2de3afa628-0_129-134-665_20200507103614.parquet
2020-05-07 10:36:39,309 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10287 State = COMMITTED size 434877 byte
2020-05-07 10:36:39,414 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/36af7fa4-a79b-4535-897f-2dfe3aa2b858-0_126-134-662_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:39,452 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/ff204a0b-d67f-4627-acbe-193d5e6e2dd5-0_130-134-666_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:39,467 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,467 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,467 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10288 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/ff204a0b-d67f-4627-acbe-193d5e6e2dd5-0_130-134-666_20200507103614.parquet
2020-05-07 10:36:39,473 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10288 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/ff204a0b-d67f-4627-acbe-193d5e6e2dd5-0_130-134-666_20200507103614.parquet
2020-05-07 10:36:39,476 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10288 State = COMMITTED size 434859 byte
2020-05-07 10:36:39,504 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/809fbcd4-e1d3-4ed0-9697-009c1e85c9fc-0_128-134-664_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:39,540 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/0ffa4d5d-9267-4f95-8c40-9cc71b2d8cb6-0_131-134-667_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:39,555 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,555 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,555 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10289 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0ffa4d5d-9267-4f95-8c40-9cc71b2d8cb6-0_131-134-667_20200507103614.parquet
2020-05-07 10:36:39,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10289 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0ffa4d5d-9267-4f95-8c40-9cc71b2d8cb6-0_131-134-667_20200507103614.parquet
2020-05-07 10:36:39,564 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10289 State = COMMITTED size 434842 byte
2020-05-07 10:36:39,710 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/5b99101a-c536-4fcf-839d-2b2de3afa628-0_129-134-665_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:39,747 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/82dd5ea4-e5a6-4622-8a1d-c572091af4b3-0_132-134-668_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:39,762 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,762 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,762 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10290 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/82dd5ea4-e5a6-4622-8a1d-c572091af4b3-0_132-134-668_20200507103614.parquet
2020-05-07 10:36:39,768 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10290 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/82dd5ea4-e5a6-4622-8a1d-c572091af4b3-0_132-134-668_20200507103614.parquet
2020-05-07 10:36:39,771 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10290 State = COMMITTED size 434839 byte
2020-05-07 10:36:39,877 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/ff204a0b-d67f-4627-acbe-193d5e6e2dd5-0_130-134-666_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:39,918 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/0639b4de-e60e-43a5-8b93-c8c041907637-0_133-134-669_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:39,936 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,936 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:39,936 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10291 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0639b4de-e60e-43a5-8b93-c8c041907637-0_133-134-669_20200507103614.parquet
2020-05-07 10:36:39,944 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10291 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0639b4de-e60e-43a5-8b93-c8c041907637-0_133-134-669_20200507103614.parquet
2020-05-07 10:36:39,948 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10291 State = COMMITTED size 434815 byte
2020-05-07 10:36:39,964 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0ffa4d5d-9267-4f95-8c40-9cc71b2d8cb6-0_131-134-667_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:40,007 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/f01e8e9d-e3ac-4175-ae59-8091fb36ca16-0_134-134-670_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:40,025 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,025 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,025 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10292 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/f01e8e9d-e3ac-4175-ae59-8091fb36ca16-0_134-134-670_20200507103614.parquet
2020-05-07 10:36:40,032 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10292 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/f01e8e9d-e3ac-4175-ae59-8091fb36ca16-0_134-134-670_20200507103614.parquet
2020-05-07 10:36:40,036 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10292 State = COMMITTED size 434805 byte
2020-05-07 10:36:40,172 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/82dd5ea4-e5a6-4622-8a1d-c572091af4b3-0_132-134-668_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:40,222 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/44510239-539b-468e-8994-b510f5001f6d-0_135-134-671_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:40,242 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,242 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,242 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10293 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/44510239-539b-468e-8994-b510f5001f6d-0_135-134-671_20200507103614.parquet
2020-05-07 10:36:40,251 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10293 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:40,256 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/44510239-539b-468e-8994-b510f5001f6d-0_135-134-671_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:40,302 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/e4c68bf6-bc2a-4723-89be-0ca247ecfcfc-0_136-134-672_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:40,322 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,322 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,322 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10294 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/e4c68bf6-bc2a-4723-89be-0ca247ecfcfc-0_136-134-672_20200507103614.parquet
2020-05-07 10:36:40,329 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10294 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/e4c68bf6-bc2a-4723-89be-0ca247ecfcfc-0_136-134-672_20200507103614.parquet
2020-05-07 10:36:40,332 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10294 State = COMMITTED size 435316 byte
2020-05-07 10:36:40,347 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/0639b4de-e60e-43a5-8b93-c8c041907637-0_133-134-669_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:40,392 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/7e051295-01dd-458f-9bcb-e844aac1698e-0_137-134-673_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:40,408 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,408 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,408 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10295 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/7e051295-01dd-458f-9bcb-e844aac1698e-0_137-134-673_20200507103614.parquet
2020-05-07 10:36:40,417 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10295 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/7e051295-01dd-458f-9bcb-e844aac1698e-0_137-134-673_20200507103614.parquet
2020-05-07 10:36:40,420 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10295 State = COMMITTED size 435396 byte
2020-05-07 10:36:40,436 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/f01e8e9d-e3ac-4175-ae59-8091fb36ca16-0_134-134-670_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:40,480 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/c3a0f4aa-06a8-4c9d-822a-cf3bec30a48c-0_138-134-674_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:40,498 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,498 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,499 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10296 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/c3a0f4aa-06a8-4c9d-822a-cf3bec30a48c-0_138-134-674_20200507103614.parquet
2020-05-07 10:36:40,505 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10296 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/c3a0f4aa-06a8-4c9d-822a-cf3bec30a48c-0_138-134-674_20200507103614.parquet
2020-05-07 10:36:40,507 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10296 State = COMMITTED size 435314 byte
2020-05-07 10:36:40,733 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/e4c68bf6-bc2a-4723-89be-0ca247ecfcfc-0_136-134-672_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:40,777 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/3/a0b10690-1abf-4ff4-9077-2b8a471d156a-0_139-134-675_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:40,793 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,793 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:40,793 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10297 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/a0b10690-1abf-4ff4-9077-2b8a471d156a-0_139-134-675_20200507103614.parquet
2020-05-07 10:36:40,802 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10297 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/a0b10690-1abf-4ff4-9077-2b8a471d156a-0_139-134-675_20200507103614.parquet
2020-05-07 10:36:40,805 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10297 State = COMMITTED size 435354 byte
2020-05-07 10:36:40,820 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/7e051295-01dd-458f-9bcb-e844aac1698e-0_137-134-673_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:40,908 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/c3a0f4aa-06a8-4c9d-822a-cf3bec30a48c-0_138-134-674_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:41,206 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/3/a0b10690-1abf-4ff4-9077-2b8a471d156a-0_139-134-675_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:43,116 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:43,116 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:43,116 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10298 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.inflight
2020-05-07 10:36:43,124 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10298 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.inflight
2020-05-07 10:36:43,127 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10298 State = COMMITTED size 198508 byte
2020-05-07 10:36:43,528 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.inflight is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:44,095 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.aux/20200507103614.clean.requested is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:44,104 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.aux/20200507103614.clean.requested is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:44,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.clean.requested is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:44,298 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.aux/20200507103614.clean is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:44,307 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.aux/20200507103614.clean is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:44,315 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.clean.inflight is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:36:58,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:37:08,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:37:18,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:37:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:37:29,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000014_9334/part-00014-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:37:29,291 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000019_9335/part-00019-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:37:29,372 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000002_9332/part-00002-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:37:29,376 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000021_9336/part-00021-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:37:29,397 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000011_9333/part-00011-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:37:29,446 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000024_9337/part-00024-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:37:29,458 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000030_9338/part-00030-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:37:29,501 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000035_9339/part-00035-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:37:29,548 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000043_9340/part-00043-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:37:29,557 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000048_9341/part-00048-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:37:29,574 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000049_9342/part-00049-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:37:29,621 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000051_9343/part-00051-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:37:29,634 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000053_9344/part-00053-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:37:29,660 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000065_9345/part-00065-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:37:29,698 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000066_9346/part-00066-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:37:29,719 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000069_9347/part-00069-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:29,729 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000075_9348/part-00075-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:29,766 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000077_9349/part-00077-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:29,818 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000089_9351/part-00089-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:29,822 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000084_9350/part-00084-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:29,859 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000095_9352/part-00095-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:29,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000102_9353/part-00102-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:29,901 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000103_9354/part-00103-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:29,939 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000105_9355/part-00105-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:29,975 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000106_9356/part-00106-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:29,984 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000107_9357/part-00107-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,014 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000122_9358/part-00122-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:30,088 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000124_9359/part-00124-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:30,101 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000126_9360/part-00126-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000128_9361/part-00128-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:30,183 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000135_9363/part-00135-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,200 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000141_9364/part-00141-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:30,201 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000132_9362/part-00132-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:30,296 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000143_9365/part-00143-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,318 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000150_9366/part-00150-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:30,362 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000163_9367/part-00163-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:30,447 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000168_9368/part-00168-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:30,452 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000173_9369/part-00173-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,462 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000174_9370/part-00174-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:30,559 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000181_9371/part-00181-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,594 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000184_9372/part-00184-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:30,605 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000192_9373/part-00192-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:37:30,695 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000198_9374/part-00198-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:37:30,741 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_temporary/0/_temporary/attempt_20200507103728_0245_m_000000_9375/part-00000-e629c354-e16a-4de0-8b33-925bc81b0a62-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:37:33,313 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/season_scores_features_1/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-397473362_14 | |
2020-05-07 10:37:38,765 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:37:48,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:37:58,683 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:38:04,333 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000002_18039/part-00002-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,333 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000011_18040/part-00011-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,341 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000014_18041/part-00014-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,439 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000019_18042/part-00019-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,456 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000021_18043/part-00021-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,456 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000024_18044/part-00024-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,566 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000030_18045/part-00030-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,634 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000035_18046/part-00035-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,637 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000043_18047/part-00043-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,656 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000048_18048/part-00048-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,686 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000049_18049/part-00049-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,698 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000051_18050/part-00051-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,715 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000053_18051/part-00053-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,752 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000065_18052/part-00065-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,777 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000066_18053/part-00066-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,785 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000069_18054/part-00069-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,834 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000075_18055/part-00075-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,864 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000077_18056/part-00077-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,872 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000084_18057/part-00084-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:04,927 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000089_18058/part-00089-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:04,951 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000095_18059/part-00095-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:04,970 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000102_18060/part-00102-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,002 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000103_18061/part-00103-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:05,011 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000105_18062/part-00105-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:05,047 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000106_18063/part-00106-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,076 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000107_18064/part-00107-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:05,092 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000122_18065/part-00122-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:05,116 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000124_18066/part-00124-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,153 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000126_18067/part-00126-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:05,165 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000128_18068/part-00128-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:05,195 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000132_18069/part-00132-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,231 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000135_18070/part-00135-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:05,248 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000141_18071/part-00141-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:05,285 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000143_18072/part-00143-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,311 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000150_18073/part-00150-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:05,367 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000163_18074/part-00163-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:05,378 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000168_18075/part-00168-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,387 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000173_18076/part-00173-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43 | |
2020-05-07 10:38:05,432 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000174_18077/part-00174-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154 | |
2020-05-07 10:38:05,452 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000181_18078/part-00181-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65 | |
2020-05-07 10:38:05,470 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000184_18079/part-00184-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:05,512 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000192_18080/part-00192-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:05,522 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000198_18081/part-00198-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:05,528 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_temporary/0/_temporary/attempt_20200507103804_0352_m_000000_18082/part-00000-1488de41-cb5d-4f6a-8391-d79197a4d472-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:07,619 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/attendances_features_1/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-397473362_14
2020-05-07 10:38:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:38:18,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:38:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:38:38,684 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:38:40,617 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000002_29952/part-00002-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:40,622 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000014_29954/part-00014-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:40,624 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000011_29953/part-00011-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:40,667 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000021_29956/part-00021-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:40,667 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000019_29955/part-00019-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:40,674 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000024_29957/part-00024-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:40,709 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000030_29958/part-00030-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:40,714 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000035_29959/part-00035-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:40,723 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000043_29960/part-00043-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:40,764 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000048_29961/part-00048-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:40,787 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000049_29962/part-00049-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:40,804 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000051_29963/part-00051-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:40,861 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000053_29964/part-00053-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:40,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000065_29965/part-00065-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:40,889 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000066_29966/part-00066-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:40,928 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000069_29967/part-00069-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:40,937 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000075_29968/part-00075-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:40,946 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000077_29969/part-00077-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,000 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000084_29970/part-00084-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,004 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000089_29971/part-00089-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,012 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000095_29972/part-00095-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000102_29973/part-00102-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,069 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000103_29974/part-00103-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,077 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000105_29975/part-00105-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,157 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000106_29976/part-00106-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000107_29977/part-00107-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000122_29978/part-00122-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,256 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000124_29979/part-00124-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,266 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000126_29980/part-00126-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,268 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000128_29981/part-00128-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,333 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000132_29982/part-00132-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,353 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000135_29983/part-00135-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,366 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000141_29984/part-00141-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,405 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000143_29985/part-00143-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,418 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000150_29986/part-00150-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,432 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000163_29987/part-00163-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,456 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000168_29988/part-00168-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,469 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000173_29989/part-00173-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,480 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000174_29990/part-00174-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,519 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000181_29991/part-00181-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,529 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000184_29992/part-00184-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:41,540 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000192_29993/part-00192-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:38:41,568 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000198_29994/part-00198-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000011_9333_-485772497_154
2020-05-07 10:38:41,580 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_temporary/0/_temporary/attempt_20200507103840_0488_m_000000_29995/part-00000-8d81904a-2a85-4d5b-810c-3999789ceaee-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:43,569 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/players_features_1/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-397473362_14
2020-05-07 10:38:47,880 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/teams_features_1/_temporary/0/_temporary/attempt_20200507103847_0527_m_000000_30392/part-00000-3c28c8ca-203f-4c8d-8ea5-432886d49716-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103728_0245_m_000002_9332_1118543941_43
2020-05-07 10:38:48,633 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/teams_features_1/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-397473362_14
2020-05-07 10:38:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:38:58,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:39:01,311 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/tour_training_dataset_test_1/tour_training_dataset_test/_temporary/0/_temporary/attempt_20200507103900_1433_r_000001_0/part-r-00001 is closed by HopsFS_DFSClient_attempt_20200507103900_0000_m_000000_0_681099264_154
2020-05-07 10:39:01,331 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/tour_training_dataset_test_1/tour_training_dataset_test/_temporary/0/_temporary/attempt_20200507103900_1433_r_000002_0/part-r-00002 is closed by HopsFS_DFSClient_attempt_20200507103900_0000_m_000000_0_151492743_43
2020-05-07 10:39:01,335 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/tour_training_dataset_test_1/tour_training_dataset_test/_temporary/0/_temporary/attempt_20200507103900_1433_r_000000_0/part-r-00000 is closed by HopsFS_DFSClient_attempt_20200507103900_0000_m_000000_0_-1542818338_65
2020-05-07 10:39:01,766 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/tour_training_dataset_test_1/tour_training_dataset_test/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-1281733562_14
2020-05-07 10:39:01,790 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/tour_training_dataset_test_1/tf_record_schema.txt is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:39:02,099 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10150 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress
2020-05-07 10:39:02,125 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10150 State = COMMITTED size 15241102 byte
2020-05-07 10:39:02,518 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:39:04,270 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10299 State = UNDER_CONSTRUCTION for /user/yarn/logs/demo_featurestore_harry001__harry000/logs/application_1588844087764_0001/ip-10-0-4-12.us-west-2.compute.internal_9000.tmp
2020-05-07 10:39:04,442 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10299 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:39:04,446 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/yarn/logs/demo_featurestore_harry001__harry000/logs/application_1588844087764_0001/ip-10-0-4-12.us-west-2.compute.internal_9000.tmp is closed by HopsFS_DFSClient_NONMAPREDUCE_1688492002_138
2020-05-07 10:39:06,350 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10300 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Logs/Spark/application_1588844087764_0001/stdout.log
2020-05-07 10:39:06,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10300 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Logs/Spark/application_1588844087764_0001/stdout.log
2020-05-07 10:39:06,608 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10300 State = COMMITTED size 55504844 byte
2020-05-07 10:39:07,010 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Logs/Spark/application_1588844087764_0001/stdout.log is closed by HopsFS_DFSClient_NONMAPREDUCE_-1298065918_58
2020-05-07 10:39:07,285 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10301 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Logs/Spark/application_1588844087764_0001/stderr.log
2020-05-07 10:39:07,291 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10301 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Logs/Spark/application_1588844087764_0001/stderr.log
2020-05-07 10:39:07,294 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10301 State = COMMITTED size 3267 byte
2020-05-07 10:39:07,695 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Logs/Spark/application_1588844087764_0001/stderr.log is closed by HopsFS_DFSClient_NONMAPREDUCE_-1298065918_58
2020-05-07 10:39:07,836 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10149 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010
2020-05-07 10:39:07,836 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10149 State = COMPLETE 10.0.4.12:50010
2020-05-07 10:39:07,836 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10148 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010
2020-05-07 10:39:07,836 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10148 State = COMPLETE 10.0.4.12:50010
2020-05-07 10:39:07,836 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10147 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010
2020-05-07 10:39:07,836 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10147 State = COMPLETE 10.0.4.12:50010
2020-05-07 10:39:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:39:08,732 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10148_1001, blk_10149_1001, blk_10147_1001]
2020-05-07 10:39:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:39:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:39:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:39:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:39:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:40:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:40:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:40:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:40:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:40:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:40:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:41:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:41:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:41:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:41:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:41:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:41:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:42:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:42:18,688 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:42:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:42:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:42:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:42:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:43:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:43:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:43:28,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:43:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:43:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:43:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:44:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:44:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:44:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:44:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:44:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:44:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:45:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:45:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:45:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:45:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:45:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:45:58,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:46:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:46:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:46:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:46:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:46:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:46:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:47:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:47:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:47:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:47:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:47:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:47:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:48:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:48:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:48:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:48:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:48:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:48:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:49:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:49:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:49:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:49:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:49:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:49:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:50:08,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:50:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:50:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:50:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:50:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:50:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:51:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:51:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:51:28,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:51:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:51:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:51:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:52:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:52:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:52:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:52:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:52:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:52:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:53:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:53:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:53:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:53:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:53:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:53:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:54:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:54:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:54:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:54:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:54:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:54:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:55:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:55:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:55:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:55:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:55:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:55:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:56:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:56:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:56:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:56:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:56:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:56:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:57:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:57:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:57:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:57:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:57:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:57:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:58:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:58:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:58:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:58:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:58:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:58:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:59:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:59:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:59:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:59:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:59:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:59:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:00:00,097 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 21600000 minutes, Emptier interval = 3600000 minutes. | |
2020-05-07 11:00:00,097 INFO org.apache.hadoop.fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ip-10-0-4-12.us-west-2.compute.internal/user/hdfs/.Trash | |
2020-05-07 11:00:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:00:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:00:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:00:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:00:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:00:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:01:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:01:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:01:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:01:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:01:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:01:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:02:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:02:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:02:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:02:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:02:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:02:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:03:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:03:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:03:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:03:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:03:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:03:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:04:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:04:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:04:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:04:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:04:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:04:58,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:05:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:05:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:05:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:05:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:05:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:05:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:06:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:06:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:06:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:06:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:06:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:06:58,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:07:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:07:18,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:07:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:07:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:07:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:07:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:08:08,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:08:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:08:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:08:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:08:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:08:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:09:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:09:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:09:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:09:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:09:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:09:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:10:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:10:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:10:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:10:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:10:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:10:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:11:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:11:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:11:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:11:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:11:48,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:11:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:12:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:12:18,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:12:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:12:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:12:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:12:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:13:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:13:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:13:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:13:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:13:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:13:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:14:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:14:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:14:28,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:14:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:14:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:14:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:15:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:15:18,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:15:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:15:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:15:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:15:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:16:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:16:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:16:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:16:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:16:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:16:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:17:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:17:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:17:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:17:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:17:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:17:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:18:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:18:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:18:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:18:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:18:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:18:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:19:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:19:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:19:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:19:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:19:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:19:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:20:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:20:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:20:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:20:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:20:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:20:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:21:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:21:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:21:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:21:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:21:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:21:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:22:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:22:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:22:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:22:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:22:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:22:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:23:08,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:23:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:23:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:23:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:23:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:23:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:24:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:24:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:24:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:24:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:24:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:24:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:25:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:25:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:25:28,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:25:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:25:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:25:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:26:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:26:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:26:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:26:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:26:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:26:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:27:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:27:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:27:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:27:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:27:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:27:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:28:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:28:14,971 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 295 blocks is assigned to NN [ID: 2, IP: 10.0.4.12] | |
2020-05-07 11:28:14,980 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 2000, hasStaleStorages: false, processing time: 1 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,1000,2000,0,0,0,0,0,0) | |
2020-05-07 11:28:14,981 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed | |
2020-05-07 11:28:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:28:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:28:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 11:28:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:28:58,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:29:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:29:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:29:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:29:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:29:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:29:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:30:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:30:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:30:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:30:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:30:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:30:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:31:08,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:31:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:31:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:31:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:31:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:31:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:32:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:32:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:32:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:32:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:32:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:32:58,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:33:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:33:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:33:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:33:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:33:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:33:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:34:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:34:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:34:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:34:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:34:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:34:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:35:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:35:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:35:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:35:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:35:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:35:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:36:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:36:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:36:28,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:36:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:36:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:36:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:37:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:37:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:37:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:37:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:37:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:37:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:38:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:38:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:38:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:38:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:38:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:38:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:39:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:39:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:39:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:39:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:39:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:39:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:40:08,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:40:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:40:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:40:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:40:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:40:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:41:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:41:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:41:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:41:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:41:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:41:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:42:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:42:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:42:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:42:38,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:42:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:42:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:43:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:43:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:43:28,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:43:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:43:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:43:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:44:08,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:44:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:44:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:44:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:44:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:44:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:45:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:45:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:45:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:45:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:45:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:45:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:46:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:46:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:46:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:46:38,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:46:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:46:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:47:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:47:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:47:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:47:38,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:47:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:47:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:48:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:48:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:48:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:48:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:48:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:48:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:49:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:49:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:49:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:49:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:49:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:49:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:50:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:50:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:50:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:50:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:50:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:50:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:51:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:51:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:51:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:51:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:51:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:51:58,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:52:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:52:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:52:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:52:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:52:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:52:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:53:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:53:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:53:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:53:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:53:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:53:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:54:08,682 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:54:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:54:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:54:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:54:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:54:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:55:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:55:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:55:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:55:38,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:55:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:55:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:56:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:56:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:56:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:56:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:56:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:56:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:57:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:57:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:57:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:57:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:57:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:57:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:58:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:58:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:58:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:58:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:58:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:58:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:59:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:59:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:59:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:59:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:59:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 11:59:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:00:00,058 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 21600000 minutes, Emptier interval = 3600000 minutes.
2020-05-07 12:00:00,058 INFO org.apache.hadoop.fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ip-10-0-4-12.us-west-2.compute.internal/user/hdfs/.Trash
2020-05-07 12:00:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:00:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:00:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:00:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:00:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:00:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:01:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:01:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:01:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:01:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:01:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:01:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:02:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:02:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:02:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:02:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:02:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:02:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:03:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:03:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:03:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:03:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:03:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:03:58,684 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:04:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:04:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:04:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:04:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:04:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:04:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:05:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:05:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:05:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:05:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:05:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:05:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:06:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:06:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:06:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:06:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:06:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:06:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:07:08,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:07:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:07:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:07:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:07:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:07:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:08:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:08:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:08:28,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:08:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:08:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:08:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:09:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:09:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:09:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:09:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:09:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:09:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:10:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:10:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:10:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:10:38,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:10:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:10:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:11:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:11:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:11:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:11:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:11:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:11:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:12:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:12:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:12:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:12:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:12:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:12:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:13:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:13:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:13:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:13:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:13:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:13:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:14:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:14:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:14:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:14:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:14:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:14:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:15:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:15:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:15:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:15:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:15:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:15:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:16:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:16:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:16:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:16:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:16:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:16:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:17:08,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:17:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:17:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:17:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:17:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:17:58,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:18:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:18:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:18:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:18:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:18:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:18:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:19:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:19:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:19:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:19:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:19:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:19:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:20:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:20:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:20:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:20:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:20:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:20:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:21:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:21:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:21:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:21:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:21:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:21:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:22:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:22:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:22:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:22:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:22:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:22:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:23:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:23:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:23:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:23:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:23:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:23:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:24:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:24:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:24:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:24:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:24:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:24:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:25:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:25:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:25:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:25:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:25:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:25:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:26:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:26:18,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:26:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:26:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:26:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:26:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:27:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:27:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:27:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:27:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:27:48,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:27:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:28:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:28:14,969 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 295 blocks is assigned to NN [ID: 2, IP: 10.0.4.12] | |
2020-05-07 12:28:14,978 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 2000, hasStaleStorages: false, processing time: 0 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,1000,2000,0,0,0,0,0,0) | |
2020-05-07 12:28:14,979 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed | |
2020-05-07 12:28:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:28:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:28:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:28:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:28:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:29:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:29:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:29:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:29:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:29:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:29:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:30:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:30:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:30:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:30:38,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:30:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:30:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:31:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:31:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:31:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:31:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:31:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:31:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:32:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:32:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:32:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:32:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:32:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:32:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:33:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:33:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:33:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:33:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:33:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:33:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:34:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:34:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:34:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:34:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:34:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:34:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:35:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:35:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:35:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:35:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:35:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:35:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:36:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:36:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:36:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:36:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:36:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:36:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:37:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:37:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:37:28,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:37:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:37:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:37:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:38:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:38:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:38:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:38:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:38:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:38:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:39:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:39:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:39:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:39:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:39:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:39:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:40:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:40:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:40:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:40:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:40:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:40:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:41:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:41:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:41:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:41:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:41:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:41:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:42:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:42:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:42:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:42:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:42:48,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:42:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:43:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:43:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:43:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:43:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:43:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:43:58,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:44:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:44:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:44:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:44:30,505 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/rossa is closed by HopsFS_DFSClient_NONMAPREDUCE_2143401710_606 | |
2020-05-07 12:44:30,862 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Logs/README.md" | |
2020-05-07 12:44:30,865 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10302 State = UNDER_CONSTRUCTION for /Projects/rossa/Logs/README.md | |
2020-05-07 12:44:30,871 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10302 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/rossa/Logs/README.md | |
2020-05-07 12:44:30,875 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10302 State = COMMITTED size 227 byte | |
2020-05-07 12:44:31,275 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_2143401710_606 | |
2020-05-07 12:44:31,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Logs/README.md" | |
2020-05-07 12:44:31,379 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Resources/README.md" | |
2020-05-07 12:44:31,382 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_2143401710_606 | |
2020-05-07 12:44:31,383 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Resources/README.md" | |
2020-05-07 12:44:36,799 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/rossa_Training_Datasets/README.md" | |
2020-05-07 12:44:36,802 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/rossa_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_2143401710_606 | |
2020-05-07 12:44:36,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/rossa_Training_Datasets/README.md" | |
2020-05-07 12:44:37,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/DataValidation/README.md" | |
2020-05-07 12:44:37,835 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_2143401710_606 | |
2020-05-07 12:44:37,838 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/DataValidation/README.md" | |
2020-05-07 12:44:37,889 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Experiments/README.md" | |
2020-05-07 12:44:37,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_2143401710_606 | |
2020-05-07 12:44:37,894 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Experiments/README.md" | |
2020-05-07 12:44:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:44:47,816 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10302 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 | |
2020-05-07 12:44:47,816 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10302 State = COMPLETE 10.0.4.12:50010 | |
2020-05-07 12:44:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:44:49,569 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10302_1001] | |
2020-05-07 12:44:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:45:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:45:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:45:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:45:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:45:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:45:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:46:04,935 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/rossa is closed by HopsFS_DFSClient_NONMAPREDUCE_-1984827962_606 | |
2020-05-07 12:46:06,014 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Logs/README.md" | |
2020-05-07 12:46:06,017 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10303 State = UNDER_CONSTRUCTION for /Projects/rossa/Logs/README.md | |
2020-05-07 12:46:06,023 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10303 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/rossa/Logs/README.md | |
2020-05-07 12:46:06,025 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10303 State = COMMITTED size 227 byte | |
2020-05-07 12:46:06,426 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1984827962_606 | |
2020-05-07 12:46:06,428 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Logs/README.md" | |
2020-05-07 12:46:06,517 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Resources/README.md" | |
2020-05-07 12:46:06,520 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1984827962_606 | |
2020-05-07 12:46:06,522 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Resources/README.md" | |
2020-05-07 12:46:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 12:46:11,962 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/rossa_Training_Datasets/README.md" | |
2020-05-07 12:46:11,965 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/rossa_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1984827962_606 | |
2020-05-07 12:46:11,968 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/rossa_Training_Datasets/README.md" | |
2020-05-07 12:46:12,965 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/DataValidation/README.md" | |
2020-05-07 12:46:12,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1984827962_606
2020-05-07 12:46:12,971 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/DataValidation/README.md"
2020-05-07 12:46:13,018 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Experiments/README.md"
2020-05-07 12:46:13,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1984827962_606
2020-05-07 12:46:13,023 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Experiments/README.md"
2020-05-07 12:46:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:46:23,961 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10303 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010
2020-05-07 12:46:23,961 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10303 State = COMPLETE 10.0.4.12:50010
2020-05-07 12:46:25,854 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10303_1001]
2020-05-07 12:46:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:46:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:46:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:46:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:47:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:47:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:47:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:47:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:47:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:47:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:48:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:48:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:48:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:48:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:48:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:48:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:49:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:49:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:49:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:49:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:49:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:49:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:50:08,557 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/rossa is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:50:09,234 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Logs/README.md"
2020-05-07 12:50:09,237 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10304 State = UNDER_CONSTRUCTION for /Projects/rossa/Logs/README.md
2020-05-07 12:50:09,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10304 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/rossa/Logs/README.md
2020-05-07 12:50:09,245 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10304 State = COMMITTED size 227 byte
2020-05-07 12:50:09,646 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:09,647 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Logs/README.md"
2020-05-07 12:50:09,736 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Resources/README.md"
2020-05-07 12:50:09,739 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:09,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Resources/README.md"
2020-05-07 12:50:14,794 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Experiments/README.md"
2020-05-07 12:50:14,797 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:14,800 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Experiments/README.md"
2020-05-07 12:50:17,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Jupyter/README.md"
2020-05-07 12:50:17,938 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/Jupyter/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:17,941 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/Jupyter/README.md"
2020-05-07 12:50:17,990 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/rossa_Training_Datasets/README.md"
2020-05-07 12:50:17,993 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/rossa_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:17,996 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/rossa_Training_Datasets/README.md"
2020-05-07 12:50:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:50:19,221 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/DataValidation/README.md"
2020-05-07 12:50:19,224 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/rossa/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_1872500631_31
2020-05-07 12:50:19,227 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/rossa/DataValidation/README.md"
2020-05-07 12:50:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:50:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:50:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:50:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:51:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:51:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:51:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:51:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:51:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:51:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:52:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:52:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:52:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:52:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:52:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:52:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:53:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:53:18,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:53:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:53:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:53:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:53:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:54:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:54:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:54:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:54:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:54:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:54:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:55:08,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:55:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:55:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:55:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:55:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:55:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:56:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:56:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:56:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:56:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:56:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:56:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:57:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:57:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:57:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:57:38,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:57:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:57:58,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:58:08,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:58:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:58:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:58:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:58:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:58:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:59:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:59:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:59:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:59:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:59:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 12:59:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:00:00,062 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 21600000 minutes, Emptier interval = 3600000 minutes.
2020-05-07 13:00:00,062 INFO org.apache.hadoop.fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ip-10-0-4-12.us-west-2.compute.internal/user/hdfs/.Trash
2020-05-07 13:00:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:00:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:00:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:00:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:00:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:00:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:01:08,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:01:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:01:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:01:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:01:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:01:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:02:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:02:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:02:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:02:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:02:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:02:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:03:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:03:18,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:03:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:03:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:03:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:03:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:04:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:04:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:04:28,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:04:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:04:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:04:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:05:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:05:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:05:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:05:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:05:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:05:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:06:08,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:06:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:06:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:06:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:06:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:06:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:07:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:07:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:07:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:07:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:07:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:07:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:08:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:08:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:08:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:08:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:08:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:08:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:09:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:09:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:09:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:09:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:09:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:09:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:10:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:10:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:10:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:10:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:10:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:10:58,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:11:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:11:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:11:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:11:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:11:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:11:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:12:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:12:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:12:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:12:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:12:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:12:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:13:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:13:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:13:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:13:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:13:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:13:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:14:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:14:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:14:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:14:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:14:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:14:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:15:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:15:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:15:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:15:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:15:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:15:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:16:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:16:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:16:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:16:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:16:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:16:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:17:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:17:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:17:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:17:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:17:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:17:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:18:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:18:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:18:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:18:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:18:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:18:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:19:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:19:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:19:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:19:38,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:19:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:19:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:20:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:20:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:20:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:20:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:20:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:20:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:21:08,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:21:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:21:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:21:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:21:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:21:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:22:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:22:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:22:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:22:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:22:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:22:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:23:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:23:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:23:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:23:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:23:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:23:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:24:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:24:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:24:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:24:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:24:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:25:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:25:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:25:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:25:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:25:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:25:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:26:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:26:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:26:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:26:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:26:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:26:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:27:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:27:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:27:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:27:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:27:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:27:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:28:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 13:28:14,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 296 blocks is assigned to NN [ID: 2, IP: 10.0.4.12] | |
2020-05-07 13:28:14,977 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d- |