hadoop-hdfs-namenode-ip-10-0-4-12.log
2020-05-07 09:31:41,054 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.8.2.10-SNAPSHOT
STARTUP_MSG: classpath = /srv/hops/hadoop/etc/hadoop:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/h
adoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar
:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-
cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.1
0-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/
lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-cl
ient-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce
-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar::.:/srv/hops/hadoop/share/hadoop/yarn/test/*:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hado
op/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/
yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-ma
preduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/ha
doop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/test/*:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hado
op/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop
/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share
/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-datajoin-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-distcp-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jettison-1.1.jar:/srv/hop
s/hadoop/share/hadoop/tools/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archive-logs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jmespath-java-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-databind-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archives-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-aws-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-ant-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-kms-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-annotations-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-sls-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpclient-4.5.2.jar:/srv/hops/hadoo
p/share/hadoop/tools/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-rumen-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-core-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-gridmix-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/tools/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/joda-time-2.9.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-extras-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-dataformat-cbor-2.6.7.jar:/srv/hops/hadoop/share/hadoop/tools/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-io-2.4.jar:/srv/hops
/hadoop/share/hadoop/tools/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/ion-java-1.0.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/metrics-core-3.0.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-s3-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share
/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar
STARTUP_MSG: build = git@github.com:hopshadoop/hops.git -r 5bb94b87c4e62d91d17f97533ed018e07cf3f8bc; compiled by 'jenkins' on 2020-04-27T06:57Z
STARTUP_MSG: java = 1.8.0_252
************************************************************/
2020-05-07 09:31:41,062 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-05-07 09:31:41,065 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-05-07 09:31:41,223 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2020-05-07 09:31:41,261 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-05-07 09:31:41,262 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-05-07 09:31:41,312 WARN org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2020-05-07 09:31:41,386 INFO io.hops.resolvingcache.Cache: starting Resolving Cache [InMemoryCache]
2020-05-07 09:31:41,418 INFO io.hops.metadata.ndb.ClusterjConnector: Database connect string: 10.0.4.12:1186
2020-05-07 09:31:41,418 INFO io.hops.metadata.ndb.ClusterjConnector: Database name: hops
2020-05-07 09:31:41,418 INFO io.hops.metadata.ndb.ClusterjConnector: Max Transactions: 1024
2020-05-07 09:31:42,477 INFO io.hops.security.UsersGroups: UsersGroups Initialized.
2020-05-07 09:31:42,632 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-05-07 09:31:42,688 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-05-07 09:31:42,694 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-05-07 09:31:42,700 INFO org.apache.hadoop.http.HttpServer3: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer3$QuotingInputFilter)
2020-05-07 09:31:42,702 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-05-07 09:31:42,702 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-05-07 09:31:42,703 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-05-07 09:31:42,723 INFO org.apache.hadoop.http.HttpServer3: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-05-07 09:31:42,725 INFO org.apache.hadoop.http.HttpServer3: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-05-07 09:31:42,729 INFO org.apache.hadoop.http.HttpServer3: Jetty bound to port 50070
2020-05-07 09:31:42,729 INFO org.mortbay.log: jetty-6.1.26
2020-05-07 09:31:42,861 INFO org.mortbay.log: Started HttpServer3$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2020-05-07 09:31:42,886 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-05-07 09:31:42,988 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-05-07 09:31:42,988 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-05-07 09:31:42,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-05-07 09:31:42,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 May 07 09:31:42
2020-05-07 09:31:42,995 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 50
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerBatchSize = 500
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: misReplicatedNoOfBatchs = 20
2020-05-07 09:31:42,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerNbOfBatchs = 20
2020-05-07 09:31:43,203 INFO com.zaxxer.hikari.HikariDataSource: HikariCP pool HikariPool-0 is starting.
2020-05-07 09:31:43,458 WARN io.hops.common.IDsGeneratorFactory: Called setConfiguration more than once.
2020-05-07 09:31:43,461 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2020-05-07 09:31:43,461 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: superGroup = hdfs
2020-05-07 09:31:43,461 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-05-07 09:31:43,462 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-05-07 09:31:43,510 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Added new root inode
2020-05-07 09:31:43,510 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 13755
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: The maximum number of xattrs per inode is set to 32
2020-05-07 09:31:43,511 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2020-05-07 09:31:43,516 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-05-07 09:31:43,516 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-05-07 09:31:43,516 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-05-07 09:31:43,518 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2020-05-07 09:31:43,518 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-05-07 09:31:43,528 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-05-07 09:31:43,629 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to 0.0.0.0:8020
2020-05-07 09:31:43,634 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 12000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-07 09:31:43,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-05-07 09:31:43,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port 8020
2020-05-07 09:31:43,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port 8020
2020-05-07 09:31:43,755 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2020-05-07 09:31:43,766 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 I can be the leader but I have weak locks. Retry with stronger lock
2020-05-07 09:31:43,766 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 periodic update. Stronger locks requested in next round
2020-05-07 09:31:43,768 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 I am the new LEADER.
2020-05-07 09:31:44,888 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-05-07 09:31:44,888 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:31:44,893 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-05-07 09:31:44,893 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-05-07 09:31:44,893 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-05-07 09:31:44,900 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-05-07 09:31:44,908 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 2 secs
2020-05-07 09:31:44,910 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-05-07 09:31:44,911 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-05-07 09:31:44,911 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:31:44,918 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:31:44,948 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-05-07 09:31:44,948 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-05-07 09:31:44,981 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Leader Node RPC up at: ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12:8020
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Catching up to latest edits from old active before taking over writer role in edits logs
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Marking all datanodes as stale
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Reprocessing replication and invalidation queues
2020-05-07 09:31:44,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-05-07 09:31:44,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-05-07 09:31:45,007 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: processMisReplicated read 0/10000 in the Ids range [0 - 10000] (max inodeId when the process started: 1)
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-05-07 09:31:45,019 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 32 msec
2020-05-07 09:31:45,497 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:31:45,498 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:31:48,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:31:58,688 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:08,698 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:18,680 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:28,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:38,690 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:32:58,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:02,470 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2020-05-07 09:33:02,475 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12
************************************************************/
2020-05-07 09:33:04,181 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.8.2.10-SNAPSHOT
STARTUP_MSG: classpath = /srv/hops/hadoop/etc/hadoop:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/h
adoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar
:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-
cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.1
0-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/
lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-cl
ient-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce
-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar::.:/srv/hops/hadoop/share/hadoop/yarn/test/*:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.2.10-20200427.065934-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.2.10-20200427.065930-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.2.10-20200427.065949-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.2.10-20200427.065948-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-20200427.065952-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.2.10-20200427.065922-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.2.10-20200427.070014-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.2.10-20200427.070006-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.2.10-20200427.070010-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.2.10-20200427.070012-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.2.10-20200427.070008-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.2.10-20200427.070003-119.jar:/srv/hops/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.2.10-20200427.070017-119.jar:/srv/hops/hado
op/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.2.10-20200427.070015-119.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/bcpkix-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-nodemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-api-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/
yarn/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-resourcemanager-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-applicationhistoryservice-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/hadoop-yarn-server-web-proxy-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-ma
preduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/srv/hops/ha
doop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/srv/hops/hadoop/share/hadoop/mapreduce/test/*:/srv/hops/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/common/lib/nvidia-management.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/erasure-coding-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hado
op/common/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/common/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/srv/hops/hadoop/share/hadoop/common/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/common/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/common/lib/junit-4.11.jar:/srv/hops/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop
/common/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/common/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/common/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/ndb-dal.jar:/srv/hops/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/json-20140107.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/args4j-2.0.29.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share
/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/hops-leader-election-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/spymemcached-2.11.7.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/netty-all-4.1.42.Final.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-math3-3.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-configuration-1.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-gpu-management-2.8.2.10-20200427.065454-121.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-datajoin-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-util-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-codec-1.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-distcp-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/htrace-core4-4.0.1-incubating.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-util-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-core-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/netty-3.6.2.Final.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-httpclient-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-1.7.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/zookeeper-3.4.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jettison-1.1.jar:/srv/hop
s/hadoop/share/hadoop/tools/lib/jersey-json-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archive-logs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/protobuf-java-2.5.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-i18n-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jmespath-java-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-client-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-databind-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-beanutils-core-1.8.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-archives-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsr305-3.0.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-aws-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/nimbus-jose-jwt-3.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-mapper-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/log4j-1.2.17.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-jaxrs-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-ant-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpcore-4.4.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-asl-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-client-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-kms-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-cli-1.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hops-metadata-dal-2.8.2.10-20200427.065409-123.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xmlenc-0.52.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-annotations-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-sls-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/httpclient-4.5.2.jar:/srv/hops/hadoo
p/share/hadoop/tools/lib/paranamer-2.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/bcprov-jdk15on-1.56.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-rumen-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-recipes-2.7.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/guava-15.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/avro-1.7.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-core-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-gridmix-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jersey-server-1.9.jar:/srv/hops/hadoop/share/hadoop/tools/lib/xz-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-core-2.10.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-compress-1.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/servlet-api-2.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/json-smart-1.1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-net-3.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-sslengine-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-collections-3.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsch-0.1.54.jar:/srv/hops/hadoop/share/hadoop/tools/lib/stax-api-1.0-2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-xc-1.9.13.jar:/srv/hops/hadoop/share/hadoop/tools/lib/joda-time-2.9.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-extras-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-digester-1.8.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jackson-dataformat-cbor-2.6.7.jar:/srv/hops/hadoop/share/hadoop/tools/lib/gson-2.8.5.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-auth-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-logging-1.1.3.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-impl-2.2.3-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-io-2.4.jar:/srv/hops
/hadoop/share/hadoop/tools/lib/service-discovery-client-0.4-20200409.074643-1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/activation-1.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/api-asn1-api-1.0.0-M20.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jcip-annotations-1.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/ion-java-1.0.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/metrics-core-3.0.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/java-xmlbuilder-0.4.jar:/srv/hops/hadoop/share/hadoop/tools/lib/snappy-java-1.0.4.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/aws-java-sdk-s3-1.11.199.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jaxb-api-2.2.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/asm-3.2.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jsp-api-2.1.jar:/srv/hops/hadoop/share/hadoop/tools/lib/commons-lang-2.6.jar:/srv/hops/hadoop/share/hadoop/tools/lib/hadoop-openstack-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jets3t-0.9.0.jar:/srv/hops/hadoop/share/hadoop/tools/lib/jetty-6.1.26.jar:/srv/hops/hadoop/share/hadoop/tools/lib/curator-framework-2.7.1.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-common-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/common/hadoop-nfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT-tests.jar:/srv/hops/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.2.10-SNAPSHOT.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.2.10-20200427.070043-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.2.10-20200427.070052-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.2.10-20200427.070046-119.jar:/srv/hops/hadoop/share
/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.2.10-20200427.070034-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.2.10-20200427.070040-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.2.10-20200427.070038-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.2.10-20200427.070050-119.jar:/srv/hops/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.10-20200427.070054-119.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/contrib/capacity-scheduler/*.jar:/srv/hops/hadoop/share/hadoop/common/lib/jmx_prometheus_javaagent-0.12.0.jar | |
STARTUP_MSG: build = git@github.com:hopshadoop/hops.git -r 5bb94b87c4e62d91d17f97533ed018e07cf3f8bc; compiled by 'jenkins' on 2020-04-27T06:57Z
STARTUP_MSG: java = 1.8.0_252
************************************************************/
2020-05-07 09:33:04,189 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-05-07 09:33:04,191 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-05-07 09:33:04,325 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2020-05-07 09:33:04,352 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-05-07 09:33:04,352 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-05-07 09:33:04,408 WARN org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2020-05-07 09:33:04,490 INFO io.hops.resolvingcache.Cache: starting Resolving Cache [InMemoryCache]
2020-05-07 09:33:04,523 INFO io.hops.metadata.ndb.ClusterjConnector: Database connect string: 10.0.4.12:1186
2020-05-07 09:33:04,523 INFO io.hops.metadata.ndb.ClusterjConnector: Database name: hops
2020-05-07 09:33:04,524 INFO io.hops.metadata.ndb.ClusterjConnector: Max Transactions: 1024
2020-05-07 09:33:05,589 INFO io.hops.security.UsersGroups: UsersGroups Initialized.
2020-05-07 09:33:05,685 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-05-07 09:33:05,732 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-05-07 09:33:05,737 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-05-07 09:33:05,741 INFO org.apache.hadoop.http.HttpServer3: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer3$QuotingInputFilter)
2020-05-07 09:33:05,743 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-05-07 09:33:05,743 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-05-07 09:33:05,743 INFO org.apache.hadoop.http.HttpServer3: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-05-07 09:33:05,758 INFO org.apache.hadoop.http.HttpServer3: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-05-07 09:33:05,759 INFO org.apache.hadoop.http.HttpServer3: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-05-07 09:33:05,762 INFO org.apache.hadoop.http.HttpServer3: Jetty bound to port 50070
2020-05-07 09:33:05,762 INFO org.mortbay.log: jetty-6.1.26
2020-05-07 09:33:05,863 INFO org.mortbay.log: Started HttpServer3$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2020-05-07 09:33:05,883 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-05-07 09:33:05,961 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-05-07 09:33:05,961 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-05-07 09:33:05,963 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-05-07 09:33:05,963 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 May 07 09:33:05
2020-05-07 09:33:05,967 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 50
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerBatchSize = 500
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: misReplicatedNoOfBatchs = 20
2020-05-07 09:33:05,968 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerNbOfBatchs = 20
2020-05-07 09:33:06,135 INFO com.zaxxer.hikari.HikariDataSource: HikariCP pool HikariPool-0 is starting.
2020-05-07 09:33:06,371 WARN io.hops.common.IDsGeneratorFactory: Called setConfiguration more than once.
2020-05-07 09:33:06,374 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2020-05-07 09:33:06,374 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: superGroup = hdfs
2020-05-07 09:33:06,374 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-05-07 09:33:06,375 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-05-07 09:33:06,435 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2020-05-07 09:33:06,435 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-05-07 09:33:06,436 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 13755
2020-05-07 09:33:06,436 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: The maximum number of xattrs per inode is set to 32
2020-05-07 09:33:06,436 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2020-05-07 09:33:06,443 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-05-07 09:33:06,444 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-05-07 09:33:06,444 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-05-07 09:33:06,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2020-05-07 09:33:06,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-05-07 09:33:06,457 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-05-07 09:33:06,560 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to 0.0.0.0:8020
2020-05-07 09:33:06,564 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 12000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-07 09:33:06,573 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-05-07 09:33:06,573 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port 8020
2020-05-07 09:33:06,573 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port 8020
2020-05-07 09:33:06,687 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2020-05-07 09:33:06,701 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I am a NON_LEADER process
2020-05-07 09:33:08,715 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I can be the leader but I have weak locks. Retry with stronger lock
2020-05-07 09:33:08,716 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 periodic update. Stronger locks requested in next round
2020-05-07 09:33:08,718 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I am the new LEADER.
2020-05-07 09:33:08,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-05-07 09:33:09,825 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:33:09,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-05-07 09:33:09,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-05-07 09:33:09,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-05-07 09:33:09,837 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-05-07 09:33:09,845 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 3 secs
2020-05-07 09:33:09,847 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-05-07 09:33:09,848 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-05-07 09:33:09,848 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-05-07 09:33:09,855 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:09,895 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-05-07 09:33:09,895 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-05-07 09:33:10,027 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Leader Node RPC up at: ip-10-0-4-12.us-west-2.compute.internal/10.0.4.12:8020
2020-05-07 09:33:10,028 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-05-07 09:33:10,028 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Catching up to latest edits from old active before taking over writer role in edits logs
2020-05-07 09:33:10,028 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Marking all datanodes as stale
2020-05-07 09:33:10,029 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Reprocessing replication and invalidation queues
2020-05-07 09:33:10,029 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-05-07 09:33:10,047 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-05-07 09:33:10,071 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: processMisReplicated read 0/10000 in the Ids range [0 - 10000] (max inodeId when the process started: 7)
2020-05-07 09:33:10,080 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-05-07 09:33:10,081 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 46 msec
2020-05-07 09:33:10,620 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:33:10,620 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-05-07 09:33:18,711 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:21,863 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage a7438e0b-c413-4d38-888d-ab4392b95d31
2020-05-07 09:33:21,864 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:21,864 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.0.4.12:50010
2020-05-07 09:33:21,918 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 0 blocks is assigned to NN [ID: 2, IP: 10.0.4.12]
2020-05-07 09:33:21,921 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:22,246 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 5000, hasStaleStorages: true, processing time: 219 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,0,5000,0,0,0,0,0,0)
2020-05-07 09:33:22,250 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed
2020-05-07 09:33:28,684 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:30,165 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage a7438e0b-c413-4d38-888d-ab4392b95d31
2020-05-07 09:33:30,165 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/10.0.4.12:50010
2020-05-07 09:33:30,165 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.0.4.12:50010
2020-05-07 09:33:30,194 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-05-07 09:33:30,205 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 0 blocks is assigned to NN [ID: 2, IP: 10.0.4.12]
2020-05-07 09:33:30,411 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 5000, hasStaleStorages: false, processing time: 147 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,0,5000,0,0,0,0,0,0)
2020-05-07 09:33:30,417 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed
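The `processReport success` lines above end with a `(keys)=(values)` counter summary. A minimal sketch for pulling those counters out of such lines, assuming only the plain-text wording seen in this log (this is not an official HopsFS API, just ad-hoc parsing):

```python
import re

# Matches the trailing "(buckets,bucketsMatching,...)=(1000,0,...)" summary
# that HopsFS appends to "BLOCK* processReport success" log lines.
SUMMARY_RE = re.compile(r"\(([\w,]+)\)=\(([\d,]+)\)")

def parse_report_summary(line: str) -> dict:
    """Return the per-report counters as a {name: int} dict, or {} if absent."""
    m = SUMMARY_RE.search(line)
    if not m:
        return {}
    keys = m.group(1).split(",")
    vals = [int(v) for v in m.group(2).split(",")]
    return dict(zip(keys, vals))

# Shortened sample taken from the report line above.
line = ("2020-05-07 09:33:22,246 INFO BlockStateChange: BLOCK* processReport "
        "success: ... processing time: 219 ms. "
        "(buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,"
        "toUC,toAdd,safeBlocksIfSafeMode)=(1000,0,5000,0,0,0,0,0,0)")
print(parse_report_summary(line)["blocks"])  # → 5000
```

Such a parser makes it easy to spot, for example, that both reports processed 1000 buckets and 5000 blocks with nothing to remove, invalidate, or mark corrupt.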
2020-05-07 09:33:38,687 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:48,747 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:33:58,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:08,763 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:18,683 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:28,725 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:38,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:48,710 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:34:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:38,682 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:48,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:35:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:18,777 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10000 State = UNDER_CONSTRUCTION for /apps/tez/apache-tez-0.9.1.2.tar.gz._COPYING_
2020-05-07 09:36:18,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10000 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /apps/tez/apache-tez-0.9.1.2.tar.gz._COPYING_
2020-05-07 09:36:18,989 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10000 State = COMMITTED size 13935245 byte
2020-05-07 09:36:19,392 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/tez/apache-tez-0.9.1.2.tar.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1908209306_1
2020-05-07 09:36:24,632 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/apps/tez/apache-tez-0.9.1.2.tar.gz"
2020-05-07 09:36:28,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:48,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:36:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:16,669 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10001 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/census/adult.data._COPYING_
2020-05-07 09:37:16,788 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10001 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/census/adult.data._COPYING_
2020-05-07 09:37:16,793 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10001 State = COMMITTED size 3974305 byte
2020-05-07 09:37:17,195 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/census/adult.data._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:17,256 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10002 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/census/adult.test._COPYING_
2020-05-07 09:37:17,271 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10002 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/census/adult.test._COPYING_
2020-05-07 09:37:17,275 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10002 State = COMMITTED size 2003153 byte
2020-05-07 09:37:17,676 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/census/adult.test._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:17,718 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10003 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/iris/iris.csv._COPYING_
2020-05-07 09:37:17,727 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10003 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/iris/iris.csv._COPYING_
2020-05-07 09:37:17,731 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10003 State = COMMITTED size 3966 byte
2020-05-07 09:37:18,133 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/iris/iris.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:18,156 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10004 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/iris/iris_knn.pkl._COPYING_
2020-05-07 09:37:18,165 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10004 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/iris/iris_knn.pkl._COPYING_
2020-05-07 09:37:18,168 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10004 State = COMMITTED size 14121 byte
2020-05-07 09:37:18,570 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/iris/iris_knn.pkl._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:18,623 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10005 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/test.pt._COPYING_
2020-05-07 09:37:18,653 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10005 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/test.pt._COPYING_
2020-05-07 09:37:18,658 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10005 State = COMMITTED size 7920381 byte
2020-05-07 09:37:18,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
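Each upload in this log follows the same four-line cycle: allocate (UNDER_CONSTRUCTION), commit check (COMMITTED but not COMPLETE), addStoredBlock, then completeFile. A minimal sketch for extracting the committed size of each block from the "addStoredBlock" lines; the regex is fitted to the exact wording in this log, not to any stable HopsFS message format:

```python
import re

# Matches "... addStoredBlock: ... bid= <id> State = COMMITTED size <n> byte"
# as emitted in the BlockStateChange lines of this log.
ADD_RE = re.compile(r"addStoredBlock: .* bid= (\d+) State = COMMITTED size (\d+) byte")

def committed_sizes(lines):
    """Map block id -> committed size in bytes for every addStoredBlock line."""
    sizes = {}
    for line in lines:
        m = ADD_RE.search(line)
        if m:
            sizes[int(m.group(1))] = int(m.group(2))
    return sizes

# Shortened sample taken from the addStoredBlock line above.
sample = [
    "2020-05-07 09:36:18,989 INFO BlockStateChange: BLOCK* addStoredBlock: "
    "blockMap updated: [DISK]DS-...:NORMAL:10.0.4.12:50010 is added to "
    "BlkInfoUnderConstruction bid= 10000 State = COMMITTED size 13935245 byte",
]
print(committed_sizes(sample))  # → {10000: 13935245}
```

Run over the whole file, this gives one entry per uploaded file (bids 10000 onward here), which is a quick way to reconcile block ids against the paths in the matching allocate/completeFile lines.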
2020-05-07 09:37:19,060 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/test.pt._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:19,087 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10006 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/training.pt._COPYING_
2020-05-07 09:37:19,199 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10006 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/training.pt._COPYING_
2020-05-07 09:37:19,203 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10006 State = COMMITTED size 47520385 byte
2020-05-07 09:37:19,604 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST/processed/training.pt._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:19,637 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10007 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:19,649 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10007 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:19,653 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10007 State = COMMITTED size 1648877 byte
2020-05-07 09:37:20,055 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-images-idx3-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:20,089 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10008 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10008 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,105 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10008 State = COMMITTED size 4542 byte
2020-05-07 09:37:20,506 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/t10k-labels-idx1-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:20,530 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10009 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:20,555 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10009 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-images-idx3-ubyte.gz._COPYING_
2020-05-07 09:37:20,558 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10009 State = COMMITTED size 9912422 byte
2020-05-07 09:37:20,960 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-images-idx3-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:20,983 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10010 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,991 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10010 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-labels-idx1-ubyte.gz._COPYING_
2020-05-07 09:37:20,995 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10010 State = COMMITTED size 28881 byte
2020-05-07 09:37:21,396 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/MNIST_data/train-labels-idx1-ubyte.gz._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:21,444 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10011 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/1/saved_model.pb._COPYING_
2020-05-07 09:37:21,454 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10011 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/1/saved_model.pb._COPYING_
2020-05-07 09:37:21,460 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10011 State = COMMITTED size 19060 byte
2020-05-07 09:37:21,859 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/1/saved_model.pb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:21,893 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10012 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:21,902 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10012 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:21,905 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10012 State = COMMITTED size 31400 byte
2020-05-07 09:37:22,307 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.data-00000-of-00001._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:22,332 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10013 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.index._COPYING_
2020-05-07 09:37:22,340 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10013 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.index._COPYING_
2020-05-07 09:37:22,344 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10013 State = COMMITTED size 159 byte
2020-05-07 09:37:22,745 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/1/variables/variables.index._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:22,780 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10014 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/2/saved_model.pb._COPYING_
2020-05-07 09:37:22,788 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10014 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/2/saved_model.pb._COPYING_
2020-05-07 09:37:22,793 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10014 State = COMMITTED size 19060 byte
2020-05-07 09:37:23,194 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/2/saved_model.pb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:23,230 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10015 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:23,239 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10015 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.data-00000-of-00001._COPYING_
2020-05-07 09:37:23,244 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10015 State = COMMITTED size 31400 byte
2020-05-07 09:37:23,644 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.data-00000-of-00001._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:23,667 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10016 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.index._COPYING_
2020-05-07 09:37:23,676 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10016 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.index._COPYING_
2020-05-07 09:37:23,680 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10016 State = COMMITTED size 159 byte
2020-05-07 09:37:24,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/model/2/variables/variables.index._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:24,118 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10017 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/train/train.tfrecords._COPYING_
2020-05-07 09:37:24,211 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10017 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/train/train.tfrecords._COPYING_
2020-05-07 09:37:24,215 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10017 State = COMMITTED size 49005000 byte
2020-05-07 09:37:24,616 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/train/train.tfrecords._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:24,655 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10018 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/mnist/validation/validation.tfrecords._COPYING_
2020-05-07 09:37:24,671 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10018 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/mnist/validation/validation.tfrecords._COPYING_
2020-05-07 09:37:24,675 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10018 State = COMMITTED size 4455000 byte
2020-05-07 09:37:25,077 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/mnist/validation/validation.tfrecords._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:25,114 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10019 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/numpy/C_test.npy._COPYING_
2020-05-07 09:37:25,137 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10019 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/numpy/C_test.npy._COPYING_
2020-05-07 09:37:25,143 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10019 State = COMMITTED size 3072128 byte
2020-05-07 09:37:25,543 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/numpy/C_test.npy._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:25,575 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10020 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/data/visualization/Pokemon.csv._COPYING_
2020-05-07 09:37:25,583 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10020 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/data/visualization/Pokemon.csv._COPYING_
2020-05-07 09:37:25,587 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10020 State = COMMITTED size 44028 byte
2020-05-07 09:37:25,988 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/data/visualization/Pokemon.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:26,025 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10021 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Benchmarks/benchmark.ipynb._COPYING_
2020-05-07 09:37:26,032 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10021 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Benchmarks/benchmark.ipynb._COPYING_
2020-05-07 09:37:26,036 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10021 State = COMMITTED size 8181 byte
2020-05-07 09:37:26,437 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Benchmarks/benchmark.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:26,471 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10022 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/keras.ipynb._COPYING_
2020-05-07 09:37:26,478 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10022 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/keras.ipynb._COPYING_
2020-05-07 09:37:26,481 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10022 State = COMMITTED size 7802 byte
2020-05-07 09:37:26,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/keras.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:26,902 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10023 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/mnist.ipynb._COPYING_
2020-05-07 09:37:26,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10023 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/mnist.ipynb._COPYING_
2020-05-07 09:37:26,913 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10023 State = COMMITTED size 17026 byte
2020-05-07 09:37:27,313 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/collective_allreduce_strategy/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:27,340 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10024 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/keras.ipynb._COPYING_
2020-05-07 09:37:27,347 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10024 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/keras.ipynb._COPYING_ | |
2020-05-07 09:37:27,350 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10024 State = COMMITTED size 7388 byte | |
2020-05-07 09:37:27,751 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/keras.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:27,772 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10025 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:27,780 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10025 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:27,784 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10025 State = COMMITTED size 15542 byte | |
2020-05-07 09:37:28,184 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/mirrored_strategy/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:28,213 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10026 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/keras.ipynb._COPYING_ | |
2020-05-07 09:37:28,220 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10026 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/keras.ipynb._COPYING_ | |
2020-05-07 09:37:28,223 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10026 State = COMMITTED size 7698 byte | |
2020-05-07 09:37:28,624 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/keras.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:28,643 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10027 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:28,650 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10027 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:28,654 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10027 State = COMMITTED size 16778 byte | |
2020-05-07 09:37:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 09:37:29,055 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Distributed_Training/parameter_server_strategy/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:29,088 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10028 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/IrisClassification_And_Serving_SKLearn.ipynb._COPYING_ | |
2020-05-07 09:37:29,095 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10028 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/IrisClassification_And_Serving_SKLearn.ipynb._COPYING_ | |
2020-05-07 09:37:29,099 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10028 State = COMMITTED size 18622 byte | |
2020-05-07 09:37:29,499 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/IrisClassification_And_Serving_SKLearn.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:29,520 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10029 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/iris_flower_classifier.py._COPYING_ | |
2020-05-07 09:37:29,527 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10029 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/iris_flower_classifier.py._COPYING_ | |
2020-05-07 09:37:29,530 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10029 State = COMMITTED size 984 byte | |
2020-05-07 09:37:29,932 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/sklearn/iris_flower_classifier.py._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:29,972 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10030 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/tensorflow/model_repo_and_serving.ipynb._COPYING_ | |
2020-05-07 09:37:29,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10030 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/tensorflow/model_repo_and_serving.ipynb._COPYING_ | |
2020-05-07 09:37:29,985 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10030 State = COMMITTED size 65776 byte | |
2020-05-07 09:37:30,386 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/End_To_End_Pipeline/tensorflow/model_repo_and_serving.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:30,422 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10031 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/Keras/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,430 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10031 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/Keras/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,433 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10031 State = COMMITTED size 9282 byte | |
2020-05-07 09:37:30,833 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/Keras/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:30,859 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10032 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/PyTorch/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,866 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10032 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/PyTorch/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:30,869 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10032 State = COMMITTED size 10702 byte | |
2020-05-07 09:37:31,270 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/PyTorch/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:31,297 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10033 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/minimal_mnist_classifier_on_hops.ipynb._COPYING_ | |
2020-05-07 09:37:31,305 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10033 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/minimal_mnist_classifier_on_hops.ipynb._COPYING_ | |
2020-05-07 09:37:31,308 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10033 State = COMMITTED size 7804 byte | |
2020-05-07 09:37:31,709 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/minimal_mnist_classifier_on_hops.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:31,729 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10034 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/tensorboard_debugger.ipynb._COPYING_ | |
2020-05-07 09:37:31,736 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10034 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/tensorboard_debugger.ipynb._COPYING_ | |
2020-05-07 09:37:31,740 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10034 State = COMMITTED size 11421 byte | |
2020-05-07 09:37:32,140 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Experiment/TensorFlow/tensorboard_debugger.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:32,166 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10035 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Filesystem/HopsFSOperations.ipynb._COPYING_ | |
2020-05-07 09:37:32,172 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10035 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Filesystem/HopsFSOperations.ipynb._COPYING_ | |
2020-05-07 09:37:32,176 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10035 State = COMMITTED size 7723 byte | |
2020-05-07 09:37:32,577 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Filesystem/HopsFSOperations.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:32,604 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10036 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Inference/Batch_Inference_Imagenet_Spark.ipynb._COPYING_ | |
2020-05-07 09:37:32,612 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10036 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Inference/Batch_Inference_Imagenet_Spark.ipynb._COPYING_ | |
2020-05-07 09:37:32,616 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10036 State = COMMITTED size 141805 byte | |
2020-05-07 09:37:33,017 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Inference/Batch_Inference_Imagenet_Spark.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,035 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10037 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Inference/Inference_Hello_World.ipynb._COPYING_ | |
2020-05-07 09:37:33,042 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10037 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Inference/Inference_Hello_World.ipynb._COPYING_ | |
2020-05-07 09:37:33,046 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10037 State = COMMITTED size 110552 byte | |
2020-05-07 09:37:33,446 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Inference/Inference_Hello_World.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,488 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10038 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Keras/evolutionary_search/keras_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:33,495 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10038 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Keras/evolutionary_search/keras_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:33,498 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10038 State = COMMITTED size 9470 byte | |
2020-05-07 09:37:33,899 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Keras/evolutionary_search/keras_mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,927 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10039 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-ablation-titanic-example.ipynb._COPYING_ | |
2020-05-07 09:37:33,936 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10039 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 09:37:33,940 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-ablation-titanic-example.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:33,959 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10040 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-fashion-mnist-example.ipynb._COPYING_ | |
2020-05-07 09:37:33,967 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10040 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-fashion-mnist-example.ipynb._COPYING_ | |
2020-05-07 09:37:33,970 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10040 State = COMMITTED size 13914 byte | |
2020-05-07 09:37:34,371 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/Maggy/maggy-fashion-mnist-example.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:34,403 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10041 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/PyTorch/differential_evolution/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,410 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10041 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/PyTorch/differential_evolution/mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,413 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10041 State = COMMITTED size 12094 byte | |
2020-05-07 09:37:34,814 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/PyTorch/differential_evolution/mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:34,853 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10042 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/evolutionary_search/automl_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,860 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10042 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/evolutionary_search/automl_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:34,865 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10042 State = COMMITTED size 16324 byte | |
2020-05-07 09:37:35,267 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/evolutionary_search/automl_fashion_mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:35,311 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10043 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/grid_search/grid_search_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:35,319 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10043 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/grid_search/grid_search_fashion_mnist.ipynb._COPYING_ | |
2020-05-07 09:37:35,324 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10043 State = COMMITTED size 16127 byte | |
2020-05-07 09:37:35,724 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Parallel_Experiments/TensorFlow/grid_search/grid_search_fashion_mnist.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:35,751 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10044 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/What_If_Tool_Notebook.ipynb._COPYING_ | |
2020-05-07 09:37:35,758 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10044 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/What_If_Tool_Notebook.ipynb._COPYING_ | |
2020-05-07 09:37:35,761 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10044 State = COMMITTED size 33925 byte | |
2020-05-07 09:37:36,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/What_If_Tool_Notebook.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:36,182 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10045 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/custom_scalar.ipynb._COPYING_ | |
2020-05-07 09:37:36,189 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10045 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/custom_scalar.ipynb._COPYING_ | |
2020-05-07 09:37:36,191 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10045 State = COMMITTED size 7691 byte | |
2020-05-07 09:37:36,592 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/custom_scalar.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:36,610 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10046 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/folium_heat_map.ipynb._COPYING_ | |
2020-05-07 09:37:36,616 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10046 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/folium_heat_map.ipynb._COPYING_ | |
2020-05-07 09:37:36,619 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10046 State = COMMITTED size 3078 byte | |
2020-05-07 09:37:37,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/folium_heat_map.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:37,039 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10047 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/ipyleaflet.ipynb._COPYING_ | |
2020-05-07 09:37:37,045 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10047 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/ipyleaflet.ipynb._COPYING_ | |
2020-05-07 09:37:37,048 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10047 State = COMMITTED size 12299 byte | |
2020-05-07 09:37:37,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/ipyleaflet.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:37,467 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10048 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/Plotting/matplotlib_sparkmagic.ipynb._COPYING_ | |
2020-05-07 09:37:37,478 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10048 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/Plotting/matplotlib_sparkmagic.ipynb._COPYING_ | |
2020-05-07 09:37:37,481 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10048 State = COMMITTED size 1810866 byte | |
2020-05-07 09:37:37,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/Plotting/matplotlib_sparkmagic.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:37,907 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10049 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/ablation-feature-vs-model.png._COPYING_ | |
2020-05-07 09:37:37,913 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10049 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/ablation-feature-vs-model.png._COPYING_ | |
2020-05-07 09:37:37,916 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10049 State = COMMITTED size 148848 byte | |
2020-05-07 09:37:38,317 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/ablation-feature-vs-model.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1 | |
2020-05-07 09:37:38,336 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10050 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/custom_scalar.png._COPYING_ | |
2020-05-07 09:37:38,343 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10050 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/custom_scalar.png._COPYING_ | |
2020-05-07 09:37:38,346 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10050 State = COMMITTED size 181880 byte | |
2020-05-07 09:37:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:38,747 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/custom_scalar.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:38,766 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10051 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/experiments.gif._COPYING_
2020-05-07 09:37:38,776 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10051 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/experiments.gif._COPYING_
2020-05-07 09:37:38,779 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10051 State = COMMITTED size 1414448 byte
2020-05-07 09:37:39,179 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/experiments.gif._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:39,197 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10052 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/hops.png._COPYING_
2020-05-07 09:37:39,203 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10052 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/hops.png._COPYING_
2020-05-07 09:37:39,206 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10052 State = COMMITTED size 5252 byte
2020-05-07 09:37:39,607 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/hops.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:39,623 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10053 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/models.gif._COPYING_
2020-05-07 09:37:39,630 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10053 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/models.gif._COPYING_
2020-05-07 09:37:39,633 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10053 State = COMMITTED size 515693 byte
2020-05-07 09:37:40,034 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/models.gif._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:40,057 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10054 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/servings.gif._COPYING_
2020-05-07 09:37:40,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10054 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/servings.gif._COPYING_
2020-05-07 09:37:40,070 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10054 State = COMMITTED size 793603 byte
2020-05-07 09:37:40,469 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/servings.gif._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:40,487 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10055 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving1.png._COPYING_
2020-05-07 09:37:40,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10055 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving1.png._COPYING_
2020-05-07 09:37:40,496 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10055 State = COMMITTED size 51670 byte
2020-05-07 09:37:40,897 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:40,915 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10056 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving2.png._COPYING_
2020-05-07 09:37:40,922 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10056 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving2.png._COPYING_
2020-05-07 09:37:40,925 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10056 State = COMMITTED size 23510 byte
2020-05-07 09:37:41,326 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving2.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:41,344 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10057 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving3.png._COPYING_
2020-05-07 09:37:41,353 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10057 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving3.png._COPYING_
2020-05-07 09:37:41,357 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10057 State = COMMITTED size 69308 byte
2020-05-07 09:37:41,758 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/sklearn_serving3.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:41,777 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10058 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/images/tensorboard_debug.png._COPYING_
2020-05-07 09:37:41,785 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10058 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/images/tensorboard_debug.png._COPYING_
2020-05-07 09:37:41,787 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10058 State = COMMITTED size 156767 byte
2020-05-07 09:37:42,188 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/images/tensorboard_debug.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:42,211 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10059 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/numpy/numpy-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10059 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/numpy/numpy-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,221 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10059 State = COMMITTED size 1092 byte
2020-05-07 09:37:42,622 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/numpy/numpy-hdfs.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:42,649 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10060 State = UNDER_CONSTRUCTION for /user/hdfs/tensorflow_demo/notebooks/pandas/pandas-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,657 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10060 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/tensorflow_demo/notebooks/pandas/pandas-hdfs.ipynb._COPYING_
2020-05-07 09:37:42,661 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10060 State = COMMITTED size 1537 byte
2020-05-07 09:37:43,061 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/tensorflow_demo/notebooks/pandas/pandas-hdfs.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_327411109_1
2020-05-07 09:37:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:37:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:05,723 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10061 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/attendances.csv._COPYING_
2020-05-07 09:38:05,821 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10061 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:38:05,831 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/attendances.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:05,857 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10062 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/games.csv._COPYING_
2020-05-07 09:38:05,867 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10062 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/games.csv._COPYING_
2020-05-07 09:38:05,871 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10062 State = COMMITTED size 76451 byte
2020-05-07 09:38:06,271 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/games.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:06,293 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10063 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/players.csv._COPYING_
2020-05-07 09:38:06,300 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10063 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/players.csv._COPYING_
2020-05-07 09:38:06,303 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10063 State = COMMITTED size 212910 byte
2020-05-07 09:38:06,704 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/players.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:06,723 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10064 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/season_scores.csv._COPYING_
2020-05-07 09:38:06,731 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10064 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/season_scores.csv._COPYING_
2020-05-07 09:38:06,734 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10064 State = COMMITTED size 8378 byte
2020-05-07 09:38:07,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/season_scores.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:07,158 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10065 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/data/teams.csv._COPYING_
2020-05-07 09:38:07,165 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10065 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/data/teams.csv._COPYING_
2020-05-07 09:38:07,167 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10065 State = COMMITTED size 2307 byte
2020-05-07 09:38:07,569 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/data/teams.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:07,595 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10066 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/FeatureStoreQuickStart.ipynb._COPYING_
2020-05-07 09:38:07,601 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10066 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/FeatureStoreQuickStart.ipynb._COPYING_
2020-05-07 09:38:07,603 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10066 State = COMMITTED size 24136 byte
2020-05-07 09:38:08,006 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/FeatureStoreQuickStart.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,062 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10067 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,071 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10067 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,074 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10067 State = COMMITTED size 747622 byte
2020-05-07 09:38:08,475 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,496 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10068 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourScala.ipynb._COPYING_
2020-05-07 09:38:08,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10068 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourScala.ipynb._COPYING_
2020-05-07 09:38:08,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10068 State = COMMITTED size 122995 byte
2020-05-07 09:38:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:08,908 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/FeaturestoreTourScala.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,934 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10069 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/aws/S3-FeatureStore.ipynb._COPYING_
2020-05-07 09:38:08,942 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10069 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:38:08,950 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/aws/S3-FeatureStore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:08,969 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10070 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/aws/SageMakerFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10070 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/aws/SageMakerFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:08,987 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10070 State = COMMITTED size 462660 byte
2020-05-07 09:38:09,387 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/aws/SageMakerFeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:09,418 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10071 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/aws/data/Sacramentorealestatetransactions.csv._COPYING_
2020-05-07 09:38:09,427 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10071 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/aws/data/Sacramentorealestatetransactions.csv._COPYING_
2020-05-07 09:38:09,430 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10071 State = COMMITTED size 113183 byte
2020-05-07 09:38:09,832 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/aws/data/Sacramentorealestatetransactions.csv._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:09,872 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10072 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FatureStore.ipynb._COPYING_
2020-05-07 09:38:09,881 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10072 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FatureStore.ipynb._COPYING_
2020-05-07 09:38:09,885 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10072 State = COMMITTED size 6860 byte
2020-05-07 09:38:10,286 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FatureStore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:10,306 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10073 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore-Setup.ipynb._COPYING_
2020-05-07 09:38:10,314 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10073 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore-Setup.ipynb._COPYING_
2020-05-07 09:38:10,317 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10073 State = COMMITTED size 3270 byte
2020-05-07 09:38:10,718 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore-Setup.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:10,736 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10074 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore.ipynb._COPYING_
2020-05-07 09:38:10,743 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10074 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore.ipynb._COPYING_
2020-05-07 09:38:10,746 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10074 State = COMMITTED size 6861 byte
2020-05-07 09:38:11,147 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/Databricks-FeatureStore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:11,166 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10075 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/DatabricksFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:11,176 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10075 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/DatabricksFeaturestoreTourPython.ipynb._COPYING_
2020-05-07 09:38:11,178 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10075 State = COMMITTED size 582544 byte
2020-05-07 09:38:11,579 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/DatabricksFeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:11,601 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10076 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/databricks/FeatureStoreQuickStartDatabricks.ipynb._COPYING_
2020-05-07 09:38:11,617 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10076 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/databricks/FeatureStoreQuickStartDatabricks.ipynb._COPYING_
2020-05-07 09:38:11,620 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10076 State = COMMITTED size 14556 byte
2020-05-07 09:38:12,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/databricks/FeatureStoreQuickStartDatabricks.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:12,045 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10077 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/datasets/TitanicTrainingDatasetPython.ipynb._COPYING_
2020-05-07 09:38:12,052 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10077 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/datasets/TitanicTrainingDatasetPython.ipynb._COPYING_
2020-05-07 09:38:12,055 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10077 State = COMMITTED size 11407 byte
2020-05-07 09:38:12,457 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/datasets/TitanicTrainingDatasetPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:12,484 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10078 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/delta/DeltaOnHops.ipynb._COPYING_
2020-05-07 09:38:12,493 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10078 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/delta/DeltaOnHops.ipynb._COPYING_
2020-05-07 09:38:12,496 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10078 State = COMMITTED size 19374 byte
2020-05-07 09:38:12,897 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/delta/DeltaOnHops.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:12,924 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10079 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/hudi/HudiOnHops.ipynb._COPYING_
2020-05-07 09:38:12,930 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10079 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/hudi/HudiOnHops.ipynb._COPYING_
2020-05-07 09:38:12,934 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10079 State = COMMITTED size 68350 byte
2020-05-07 09:38:13,334 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/hudi/HudiOnHops.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:13,358 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10080 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageDatasetFeaturestore.ipynb._COPYING_
2020-05-07 09:38:13,365 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10080 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageDatasetFeaturestore.ipynb._COPYING_
2020-05-07 09:38:13,368 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10080 State = COMMITTED size 11931 byte
2020-05-07 09:38:13,768 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageDatasetFeaturestore.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:13,787 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10081 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageFeatureGroup.ipynb._COPYING_
2020-05-07 09:38:13,793 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10081 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageFeatureGroup.ipynb._COPYING_
2020-05-07 09:38:13,797 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10081 State = COMMITTED size 18615 byte
2020-05-07 09:38:14,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/image_datasets/ImageFeatureGroup.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:14,223 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10082 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/admin_fs_tags.png._COPYING_
2020-05-07 09:38:14,230 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10082 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/admin_fs_tags.png._COPYING_
2020-05-07 09:38:14,233 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10082 State = COMMITTED size 417928 byte
2020-05-07 09:38:14,634 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/admin_fs_tags.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:14,653 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10083 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/concepts.png._COPYING_
2020-05-07 09:38:14,660 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10083 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/concepts.png._COPYING_
2020-05-07 09:38:14,663 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10083 State = COMMITTED size 50873 byte
2020-05-07 09:38:15,064 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/concepts.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:15,085 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10084 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/create_tags.png._COPYING_
2020-05-07 09:38:15,092 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10084 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/create_tags.png._COPYING_
2020-05-07 09:38:15,096 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10084 State = COMMITTED size 49985 byte
2020-05-07 09:38:15,496 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/create_tags.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:15,519 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10085 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/delta_dataset.png._COPYING_
2020-05-07 09:38:15,527 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10085 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/delta_dataset.png._COPYING_
2020-05-07 09:38:15,530 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10085 State = COMMITTED size 523229 byte
2020-05-07 09:38:15,931 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/delta_dataset.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:15,950 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10086 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/featurestore_incremental_pull.png._COPYING_
2020-05-07 09:38:15,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10086 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/featurestore_incremental_pull.png._COPYING_
2020-05-07 09:38:15,960 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10086 State = COMMITTED size 203952 byte
2020-05-07 09:38:16,361 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/featurestore_incremental_pull.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:16,380 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10087 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/fg_stats_1.png._COPYING_ | |
2020-05-07 09:38:16,386 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10087 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/fg_stats_1.png._COPYING_ | |
2020-05-07 09:38:16,389 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10087 State = COMMITTED size 440893 byte | |
2020-05-07 09:38:16,790 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/fg_stats_1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:16,809 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10088 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/hudi_dataset.png._COPYING_ | |
2020-05-07 09:38:16,816 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10088 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/hudi_dataset.png._COPYING_ | |
2020-05-07 09:38:16,819 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10088 State = COMMITTED size 354375 byte | |
2020-05-07 09:38:17,219 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/hudi_dataset.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:17,237 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10089 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_1.png._COPYING_ | |
2020-05-07 09:38:17,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10089 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_1.png._COPYING_ | |
2020-05-07 09:38:17,249 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10089 State = COMMITTED size 95014 byte | |
2020-05-07 09:38:17,648 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:17,669 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10090 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_2.png._COPYING_ | |
2020-05-07 09:38:17,675 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10090 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_2.png._COPYING_ | |
2020-05-07 09:38:17,678 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10090 State = COMMITTED size 30481 byte | |
2020-05-07 09:38:18,079 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_2.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:18,097 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10091 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_3.png._COPYING_ | |
2020-05-07 09:38:18,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10091 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_3.png._COPYING_ | |
2020-05-07 09:38:18,109 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10091 State = COMMITTED size 56907 byte | |
2020-05-07 09:38:18,510 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_3.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:18,528 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10092 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_4.png._COPYING_ | |
2020-05-07 09:38:18,535 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10092 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_4.png._COPYING_ | |
2020-05-07 09:38:18,538 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10092 State = COMMITTED size 49404 byte | |
2020-05-07 09:38:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 09:38:18,940 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/image_dataset_tutorial_4.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:18,958 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10093 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/incr_load.png._COPYING_ | |
2020-05-07 09:38:18,965 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10093 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/incr_load.png._COPYING_ | |
2020-05-07 09:38:18,968 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10093 State = COMMITTED size 96959 byte | |
2020-05-07 09:38:19,369 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/incr_load.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:19,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10094 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/model.png._COPYING_ | |
2020-05-07 09:38:19,393 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10094 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/model.png._COPYING_ | |
2020-05-07 09:38:19,396 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10094 State = COMMITTED size 21281 byte | |
2020-05-07 09:38:19,796 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/model.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:19,815 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10095 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/near_real_time.jpg._COPYING_ | |
2020-05-07 09:38:19,822 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10095 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/near_real_time.jpg._COPYING_ | |
2020-05-07 09:38:19,825 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10095 State = COMMITTED size 26672 byte
2020-05-07 09:38:20,226 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/near_real_time.jpg._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:20,246 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10096 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/overview.png._COPYING_
2020-05-07 09:38:20,254 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10096 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/overview.png._COPYING_
2020-05-07 09:38:20,258 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10096 State = COMMITTED size 29440 byte
2020-05-07 09:38:20,658 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/overview.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:20,675 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10097 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm1.png._COPYING_
2020-05-07 09:38:20,681 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10097 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm1.png._COPYING_
2020-05-07 09:38:20,684 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10097 State = COMMITTED size 21284 byte
2020-05-07 09:38:21,085 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm1.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:21,104 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10098 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm2.png._COPYING_
2020-05-07 09:38:21,110 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10098 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm2.png._COPYING_
2020-05-07 09:38:21,113 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10098 State = COMMITTED size 22301 byte
2020-05-07 09:38:21,514 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm2.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:21,534 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10099 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm3.png._COPYING_
2020-05-07 09:38:21,541 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10099 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm3.png._COPYING_
2020-05-07 09:38:21,544 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10099 State = COMMITTED size 70895 byte
2020-05-07 09:38:21,945 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm3.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:21,963 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10100 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm4.png._COPYING_
2020-05-07 09:38:21,970 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10100 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm4.png._COPYING_
2020-05-07 09:38:21,973 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10100 State = COMMITTED size 37376 byte
2020-05-07 09:38:22,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm4.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:22,394 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10101 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm5.png._COPYING_
2020-05-07 09:38:22,401 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10101 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm5.png._COPYING_
2020-05-07 09:38:22,404 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10101 State = COMMITTED size 46393 byte
2020-05-07 09:38:22,805 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm5.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:22,823 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10102 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm6.png._COPYING_
2020-05-07 09:38:22,830 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10102 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm6.png._COPYING_
2020-05-07 09:38:22,833 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10102 State = COMMITTED size 23761 byte
2020-05-07 09:38:23,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm6.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:23,253 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10103 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/petastorm7.png._COPYING_
2020-05-07 09:38:23,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10103 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/petastorm7.png._COPYING_
2020-05-07 09:38:23,263 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10103 State = COMMITTED size 22384 byte
2020-05-07 09:38:23,664 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/petastorm7.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:23,680 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10104 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/query_optimizer.png._COPYING_
2020-05-07 09:38:23,687 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10104 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/query_optimizer.png._COPYING_
2020-05-07 09:38:23,690 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10104 State = COMMITTED size 94773 byte
2020-05-07 09:38:24,091 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/query_optimizer.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:24,107 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10105 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/select_fs.png._COPYING_
2020-05-07 09:38:24,113 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10105 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/select_fs.png._COPYING_
2020-05-07 09:38:24,116 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10105 State = COMMITTED size 11700 byte
2020-05-07 09:38:24,517 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/select_fs.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:24,533 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10106 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/share_featurestore.png._COPYING_
2020-05-07 09:38:24,540 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10106 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/share_featurestore.png._COPYING_
2020-05-07 09:38:24,542 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10106 State = COMMITTED size 72783 byte
2020-05-07 09:38:24,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/share_featurestore.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:24,964 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10107 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/to_admin.png._COPYING_
2020-05-07 09:38:24,971 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10107 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/to_admin.png._COPYING_
2020-05-07 09:38:24,974 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10107 State = COMMITTED size 48799 byte
2020-05-07 09:38:25,375 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/to_admin.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:25,393 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10108 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/images/upsert_illustration.png._COPYING_
2020-05-07 09:38:25,400 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10108 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/images/upsert_illustration.png._COPYING_
2020-05-07 09:38:25,403 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10108 State = COMMITTED size 425340 byte | |
2020-05-07 09:38:25,804 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/images/upsert_illustration.png._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:25,833 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10109 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourPython.ipynb._COPYING_ | |
2020-05-07 09:38:25,841 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10109 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourPython.ipynb._COPYING_ | |
2020-05-07 09:38:25,844 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10109 State = COMMITTED size 30666 byte | |
2020-05-07 09:38:26,245 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourPython.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:26,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10110 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourScala.ipynb._COPYING_ | |
2020-05-07 09:38:26,269 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10110 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourScala.ipynb._COPYING_ | |
2020-05-07 09:38:26,272 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10110 State = COMMITTED size 26146 byte | |
2020-05-07 09:38:26,672 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/online_featurestore/OnlineFeaturestoreTourScala.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:26,696 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10111 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormHelloWorld.ipynb._COPYING_ | |
2020-05-07 09:38:26,702 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10111 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormHelloWorld.ipynb._COPYING_ | |
2020-05-07 09:38:26,705 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10111 State = COMMITTED size 36272 byte | |
2020-05-07 09:38:27,106 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormHelloWorld.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:27,124 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10112 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_CreateDataset.ipynb._COPYING_ | |
2020-05-07 09:38:27,130 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10112 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_CreateDataset.ipynb._COPYING_ | |
2020-05-07 09:38:27,133 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10112 State = COMMITTED size 43037 byte | |
2020-05-07 09:38:27,534 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_CreateDataset.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:27,551 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10113 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_PyTorch.ipynb._COPYING_ | |
2020-05-07 09:38:27,558 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10113 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_PyTorch.ipynb._COPYING_ | |
2020-05-07 09:38:27,560 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10113 State = COMMITTED size 79254 byte | |
2020-05-07 09:38:27,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_PyTorch.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1 | |
2020-05-07 09:38:27,985 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10114 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_Tensorflow.ipynb._COPYING_ | |
2020-05-07 09:38:27,991 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10114 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_Tensorflow.ipynb._COPYING_ | |
2020-05-07 09:38:27,994 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10114 State = COMMITTED size 46561 byte
2020-05-07 09:38:28,394 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/petastorm/PetastormMNIST_Tensorflow.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:28,420 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10115 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/query_planner/FeaturestoreQueryPlanner.ipynb._COPYING_
2020-05-07 09:38:28,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10115 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/query_planner/FeaturestoreQueryPlanner.ipynb._COPYING_
2020-05-07 09:38:28,429 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10115 State = COMMITTED size 17540 byte
2020-05-07 09:38:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:28,829 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/query_planner/FeaturestoreQueryPlanner.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:28,852 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10116 State = UNDER_CONSTRUCTION for /user/hdfs/featurestore_demo/notebooks/visualizations/Feature_Visualizations.ipynb._COPYING_
2020-05-07 09:38:28,859 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10116 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/featurestore_demo/notebooks/visualizations/Feature_Visualizations.ipynb._COPYING_
2020-05-07 09:38:28,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10116 State = COMMITTED size 669353 byte
2020-05-07 09:38:29,263 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/featurestore_demo/notebooks/visualizations/Feature_Visualizations.ipynb._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1978226852_1
2020-05-07 09:38:38,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:38:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:39:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:40:58,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:48,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:41:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:42:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:08,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:38,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:43:58,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:48,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:44:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:28,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:45:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:37,922 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:46:37,923 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:46:37,923 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10117 State = UNDER_CONSTRUCTION for /user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar._COPYING_
2020-05-07 09:46:38,149 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10117 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:46:38,163 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_678724206_1
2020-05-07 09:46:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:39,833 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:43,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-verification-assembly-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:48,521 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-examples-spark-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-453314954_1
2020-05-07 09:46:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:46:50,272 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-spark-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:53,618 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-spark-1.3.0-SNAPSHOT.jar"
2020-05-07 09:46:58,680 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:08,132 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:08,132 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:08,132 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10118 State = UNDER_CONSTRUCTION for /user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar._COPYING_
2020-05-07 09:47:08,224 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10118 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:47:08,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1193308009_1
2020-05-07 09:47:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:09,871 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:13,204 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:18,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:28,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:28,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:28,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:47:28,896 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10119 State = UNDER_CONSTRUCTION for /user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar._COPYING_
2020-05-07 09:47:29,029 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10119 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:47:29,042 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_572138777_1
2020-05-07 09:47:30,843 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:34,171 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hops-examples-featurestore-util4j-1.3.0-SNAPSHOT.jar"
2020-05-07 09:47:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:39,538 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/featurestore_util.py._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1819718216_1
2020-05-07 09:47:41,236 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/featurestore_util.py"
2020-05-07 09:47:44,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/featurestore_util.py"
2020-05-07 09:47:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:49,569 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/metrics.properties._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1271699741_1
2020-05-07 09:47:51,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/metrics.properties"
2020-05-07 09:47:54,595 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/metrics.properties"
2020-05-07 09:47:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:47:59,578 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10120 State = UNDER_CONSTRUCTION for /user/hdfs/metrics.properties._COPYING_
2020-05-07 09:47:59,660 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10120 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:47:59,673 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/metrics.properties._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_664392635_1
2020-05-07 09:48:01,318 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/metrics.properties"
2020-05-07 09:48:04,745 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/metrics.properties"
2020-05-07 09:48:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:09,785 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/log4j.properties._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1196503116_1
2020-05-07 09:48:11,490 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/log4j.properties"
2020-05-07 09:48:14,852 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/log4j.properties"
2020-05-07 09:48:18,628 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:18,628 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:18,628 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10121 State = UNDER_CONSTRUCTION for /user/spark/cacerts.jks._COPYING_
2020-05-07 09:48:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:18,716 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10121 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:48:18,730 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/cacerts.jks._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_848252496_1
2020-05-07 09:48:20,440 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.jks"
2020-05-07 09:48:23,846 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.jks"
2020-05-07 09:48:27,282 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:27,282 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 09:48:27,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10122 State = UNDER_CONSTRUCTION for /user/spark/cacerts.pem._COPYING_
2020-05-07 09:48:27,391 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10122 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 09:48:27,400 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/cacerts.pem._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_1491068508_1
2020-05-07 09:48:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:29,076 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.pem"
2020-05-07 09:48:32,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/cacerts.pem"
2020-05-07 09:48:37,512 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/hive-site.xml._COPYING_ is closed by HopsFS_DFSClient_NONMAPREDUCE_-1387422785_1
2020-05-07 09:48:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:39,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hive-site.xml"
2020-05-07 09:48:42,608 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/hive-site.xml"
2020-05-07 09:48:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:48:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:48,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:49:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:08,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:28,679 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:50:58,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:48,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:51:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:52:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:08,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:53:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:54:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:55:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:38,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:56:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:38,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:57:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:58:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 09:59:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:00,049 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 21600000 minutes, Emptier interval = 3600000 minutes.
2020-05-07 10:00:00,049 INFO org.apache.hadoop.fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ip-10-0-4-12.us-west-2.compute.internal/user/hdfs/.Trash
2020-05-07 10:00:00,071 INFO org.apache.hadoop.fs.TrashPolicyDefault: Created trash checkpoint: /user/hdfs/.Trash/200507100000
2020-05-07 10:00:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:00:58,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:08,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:48,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:01:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:18,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:02:58,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:28,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:03:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:04:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:05:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:05:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:06:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:08,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:18,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:07:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:08:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks. | |
2020-05-07 10:09:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:09:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:09:28,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:09:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:09:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:09:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:10:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:10:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:10:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:10:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:10:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:10:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:11:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:11:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:11:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:11:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:11:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:11:58,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:12:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:12:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:12:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:12:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:12:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:12:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:13:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:13:18,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:13:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:13:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:13:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:13:58,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:14:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:14:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:14:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:14:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:14:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:14:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:15:08,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:15:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:15:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:15:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:15:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:15:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:16:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:16:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:16:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:16:38,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:16:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:16:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:17:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:17:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:17:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:17:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:17:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:17:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:18:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:18:18,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:18:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:18:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:18:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:18:58,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:19:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:19:18,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:19:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:19:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:19:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:19:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:20:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:20:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:20:28,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:20:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:20:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:20:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:21:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:21:18,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:21:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:21:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:21:48,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:21:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:22:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:22:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:22:28,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:22:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:22:48,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:22:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:23:08,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:23:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:23:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:23:38,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:23:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:23:58,685 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:24:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:24:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:24:28,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:24:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:24:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:25:08,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:25:18,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:25:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:25:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:25:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:25:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:26:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:26:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:26:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:26:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:26:48,671 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:26:58,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:27:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:27:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:27:28,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:27:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:27:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:27:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:14,977 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 containing 123 blocks is assigned to NN [ID: 2, IP: 10.0.4.12]
2020-05-07 10:28:14,990 INFO BlockStateChange: BLOCK* processReport success: from DatanodeRegistration(10.0.4.12:50010, datanodeUuid=a7438e0b-c413-4d38-888d-ab4392b95d31, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-4230c663-6049-437f-b406-77ff12af092d;nsid=911;c=1588843062940) storage: DatanodeStorage[DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9,DISK,NORMAL], blocks: 2000, hasStaleStorages: false, processing time: 1 ms. (buckets,bucketsMatching,blocks,toRemove,toInvalidate,toCorrupt,toUC,toAdd,safeBlocksIfSafeMode)=(1000,1000,2000,0,0,0,0,0,0)
2020-05-07 10:28:14,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BRTrackingService: Block report from 10.0.4.12:50010 has completed
2020-05-07 10:28:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:38,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:48,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:28:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:18,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:38,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:48,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:29:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:08,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:38,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:48,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:30:58,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:08,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:28,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:38,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:48,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:31:58,669 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:08,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:18,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:28,673 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:38,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:48,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:32:58,668 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:08,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:18,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:28,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:32,008 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/flax is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:33,075 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Logs/README.md"
2020-05-07 10:33:33,094 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10123 State = UNDER_CONSTRUCTION for /Projects/flax/Logs/README.md
2020-05-07 10:33:33,214 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10123 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/flax/Logs/README.md
2020-05-07 10:33:33,217 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10123 State = COMMITTED size 227 byte
2020-05-07 10:33:33,619 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:33,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Logs/README.md"
2020-05-07 10:33:33,741 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Resources/README.md"
2020-05-07 10:33:33,746 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:33,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Resources/README.md"
2020-05-07 10:33:38,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:39,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Experiments/README.md"
2020-05-07 10:33:39,183 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:39,187 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Experiments/README.md"
2020-05-07 10:33:42,735 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Jupyter/README.md"
2020-05-07 10:33:42,739 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Jupyter/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:42,743 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Jupyter/README.md"
2020-05-07 10:33:45,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Models/README.md"
2020-05-07 10:33:45,986 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/Models/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:45,990 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/Models/README.md"
2020-05-07 10:33:46,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/flax_Training_Datasets/README.md"
2020-05-07 10:33:46,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/flax_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:46,463 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/flax_Training_Datasets/README.md"
2020-05-07 10:33:47,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/DataValidation/README.md"
2020-05-07 10:33:47,035 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/flax/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1073063760_31
2020-05-07 10:33:47,040 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/flax/DataValidation/README.md"
2020-05-07 10:33:48,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:33:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:08,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:18,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:28,665 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:38,674 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:48,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:53,853 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/demo_featurestore_harry001 is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:54,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Logs/README.md"
2020-05-07 10:34:54,174 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10124 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Logs/README.md
2020-05-07 10:34:54,183 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10124 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Logs/README.md
2020-05-07 10:34:54,185 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10124 State = COMMITTED size 227 byte
2020-05-07 10:34:54,587 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Logs/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:54,589 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Logs/README.md"
2020-05-07 10:34:54,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Resources/README.md"
2020-05-07 10:34:54,696 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:54,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Resources/README.md"
2020-05-07 10:34:58,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:34:59,877 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Experiments/README.md"
2020-05-07 10:34:59,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Experiments/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:34:59,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Experiments/README.md"
2020-05-07 10:35:03,051 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md"
2020-05-07 10:35:03,055 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:35:03,059 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md"
2020-05-07 10:35:04,129 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/README.md"
2020-05-07 10:35:04,134 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:35:04,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/demo_featurestore_harry001_Training_Datasets/README.md"
2020-05-07 10:35:05,163 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/DataValidation/README.md"
2020-05-07 10:35:05,167 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/DataValidation/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-1797656710_28
2020-05-07 10:35:05,171 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/DataValidation/README.md"
2020-05-07 10:35:05,548 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:05,548 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:05,548 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10125 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar | |
2020-05-07 10:35:05,616 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10125 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar | |
2020-05-07 10:35:05,619 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10125 State = COMMITTED size 6522467 byte | |
2020-05-07 10:35:06,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,029 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar" | |
2020-05-07 10:35:06,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar" | |
2020-05-07 10:35:06,057 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/attendances.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:06,072 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:06,072 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:06,072 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10126 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/TestJob/data/games.csv | |
2020-05-07 10:35:06,080 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10126 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/TestJob/data/games.csv
2020-05-07 10:35:06,084 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10126 State = COMMITTED size 76451 byte
2020-05-07 10:35:06,485 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/games.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:06,498 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:06,498 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:06,498 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10127 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/TestJob/data/players.csv
2020-05-07 10:35:06,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10127 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/TestJob/data/players.csv
2020-05-07 10:35:06,510 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10127 State = COMMITTED size 212910 byte
2020-05-07 10:35:06,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/players.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:06,922 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/season_scores.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:06,935 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/data/teams.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:06,955 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar"
2020-05-07 10:35:06,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/hops-examples-featurestore-tour-1.3.0-SNAPSHOT.jar"
2020-05-07 10:35:06,968 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/attendances.csv"
2020-05-07 10:35:06,971 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/attendances.csv"
2020-05-07 10:35:06,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/games.csv"
2020-05-07 10:35:06,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/games.csv"
2020-05-07 10:35:06,980 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/players.csv"
2020-05-07 10:35:06,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/players.csv"
2020-05-07 10:35:06,986 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/season_scores.csv"
2020-05-07 10:35:06,989 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/season_scores.csv"
2020-05-07 10:35:06,992 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/teams.csv"
2020-05-07 10:35:06,995 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/data/teams.csv"
2020-05-07 10:35:07,016 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/FeatureStoreQuickStart.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:07,036 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:07,036 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:07,036 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10128 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb
2020-05-07 10:35:07,045 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10128 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb
2020-05-07 10:35:07,047 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10128 State = COMMITTED size 747622 byte
2020-05-07 10:35:07,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:07,464 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:07,464 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:07,464 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10129 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb
2020-05-07 10:35:07,471 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10129 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb
2020-05-07 10:35:07,473 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10129 State = COMMITTED size 122995 byte
2020-05-07 10:35:07,875 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:07,896 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/aws/S3-FeatureStore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:07,907 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:07,907 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:07,908 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10130 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb
2020-05-07 10:35:07,914 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10130 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb
2020-05-07 10:35:07,919 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10130 State = COMMITTED size 462660 byte
2020-05-07 10:35:08,318 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:08,336 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:08,336 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:08,336 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10131 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv
2020-05-07 10:35:08,342 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10131 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv
2020-05-07 10:35:08,345 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10131 State = COMMITTED size 113183 byte
2020-05-07 10:35:08,670 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:35:08,746 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:08,766 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FatureStore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:08,777 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore-Setup.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:08,787 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:08,798 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:08,798 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:08,798 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10132 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb
2020-05-07 10:35:08,806 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10132 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb
2020-05-07 10:35:08,809 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10132 State = COMMITTED size 582544 byte
2020-05-07 10:35:09,210 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,221 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/databricks/FeatureStoreQuickStartDatabricks.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,243 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/datasets/TitanicTrainingDatasetPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,261 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/delta/DeltaOnHops.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,279 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:09,279 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:09,279 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10133 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb
2020-05-07 10:35:09,286 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10133 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb
2020-05-07 10:35:09,288 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10133 State = COMMITTED size 68350 byte
2020-05-07 10:35:09,690 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,710 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageDatasetFeaturestore.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,720 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageFeatureGroup.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:09,743 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:09,743 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:09,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10134 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png
2020-05-07 10:35:09,750 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10134 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png
2020-05-07 10:35:09,752 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10134 State = COMMITTED size 417928 byte
2020-05-07 10:35:10,154 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:10,169 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/concepts.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:10,183 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/create_tags.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:10,201 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:10,201 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:10,201 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10135 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png
2020-05-07 10:35:10,215 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10135 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png
2020-05-07 10:35:10,220 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10135 State = COMMITTED size 523229 byte
2020-05-07 10:35:10,619 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:10,632 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:10,633 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:10,633 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10136 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png
2020-05-07 10:35:10,642 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10136 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png
2020-05-07 10:35:10,646 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10136 State = COMMITTED size 203952 byte
2020-05-07 10:35:11,046 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:11,058 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,058 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,059 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10137 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png
2020-05-07 10:35:11,066 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10137 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png
2020-05-07 10:35:11,069 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10137 State = COMMITTED size 440893 byte
2020-05-07 10:35:11,469 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28
2020-05-07 10:35:11,482 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,482 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:11,482 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10138 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png
2020-05-07 10:35:11,489 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10138 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png
2020-05-07 10:35:11,491 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10138 State = COMMITTED size 354375 byte | |
2020-05-07 10:35:11,893 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:11,904 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:11,904 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:11,904 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10139 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png | |
2020-05-07 10:35:11,911 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10139 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png | |
2020-05-07 10:35:11,914 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10139 State = COMMITTED size 95014 byte | |
2020-05-07 10:35:12,314 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,325 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_2.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,336 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_3.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,346 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_4.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,356 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:12,356 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:12,356 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10140 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png | |
2020-05-07 10:35:12,363 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10140 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png | |
2020-05-07 10:35:12,367 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10140 State = COMMITTED size 96959 byte | |
2020-05-07 10:35:12,768 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,781 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/model.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,796 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/near_real_time.jpg is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,808 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/overview.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,821 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm1.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,835 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm2.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:12,848 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:12,848 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:12,849 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10141 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png | |
2020-05-07 10:35:12,856 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10141 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png | |
2020-05-07 10:35:12,860 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10141 State = COMMITTED size 70895 byte | |
2020-05-07 10:35:13,260 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,272 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm4.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,283 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm5.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,293 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm6.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,303 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/petastorm7.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,314 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:13,314 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:13,315 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10142 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png | |
2020-05-07 10:35:13,322 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10142 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png | |
2020-05-07 10:35:13,324 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10142 State = COMMITTED size 94773 byte | |
2020-05-07 10:35:13,726 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,738 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/select_fs.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:13,749 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:13,749 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:13,750 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10143 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png | |
2020-05-07 10:35:13,759 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10143 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png | |
2020-05-07 10:35:13,762 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10143 State = COMMITTED size 72783 byte | |
2020-05-07 10:35:14,163 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,175 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/to_admin.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,186 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:14,186 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:14,186 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10144 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png | |
2020-05-07 10:35:14,193 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10144 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png | |
2020-05-07 10:35:14,196 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10144 State = COMMITTED size 425340 byte | |
2020-05-07 10:35:14,597 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,617 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourPython.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,627 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourScala.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,646 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormHelloWorld.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,656 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_CreateDataset.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:14,667 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:14,667 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:14,667 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10145 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb | |
2020-05-07 10:35:14,673 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10145 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb | |
2020-05-07 10:35:14,676 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10145 State = COMMITTED size 79254 byte | |
2020-05-07 10:35:15,078 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:15,092 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_Tensorflow.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:15,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/query_planner/FeaturestoreQueryPlanner.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:15,148 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:15,148 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:35:15,148 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10146 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb | |
2020-05-07 10:35:15,161 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10146 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb | |
2020-05-07 10:35:15,165 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10146 State = COMMITTED size 669353 byte | |
2020-05-07 10:35:15,565 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb is closed by HopsFS_DFSClient_NONMAPREDUCE_-1796722656_28 | |
2020-05-07 10:35:15,585 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md" | |
2020-05-07 10:35:15,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/README.md" | |
2020-05-07 10:35:15,591 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeatureStoreQuickStart.ipynb" | |
2020-05-07 10:35:15,594 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeatureStoreQuickStart.ipynb" | |
2020-05-07 10:35:15,597 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:15,601 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:15,604 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb" | |
2020-05-07 10:35:15,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/FeaturestoreTourScala.ipynb" | |
2020-05-07 10:35:15,695 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb" | |
2020-05-07 10:35:15,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/visualizations/Feature_Visualizations.ipynb" | |
2020-05-07 10:35:15,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/query_planner/FeaturestoreQueryPlanner.ipynb" | |
2020-05-07 10:35:15,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/query_planner/FeaturestoreQueryPlanner.ipynb" | |
2020-05-07 10:35:15,725 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormHelloWorld.ipynb" | |
2020-05-07 10:35:15,729 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormHelloWorld.ipynb" | |
2020-05-07 10:35:15,733 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_CreateDataset.ipynb" | |
2020-05-07 10:35:15,737 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_CreateDataset.ipynb" | |
2020-05-07 10:35:15,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb" | |
2020-05-07 10:35:15,743 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_PyTorch.ipynb" | |
2020-05-07 10:35:15,747 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_Tensorflow.ipynb" | |
2020-05-07 10:35:15,750 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/petastorm/PetastormMNIST_Tensorflow.ipynb" | |
2020-05-07 10:35:15,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:15,758 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourPython.ipynb" | |
2020-05-07 10:35:15,762 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourScala.ipynb" | |
2020-05-07 10:35:15,765 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/online_featurestore/OnlineFeaturestoreTourScala.ipynb" | |
2020-05-07 10:35:15,770 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png" | |
2020-05-07 10:35:15,774 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/admin_fs_tags.png" | |
2020-05-07 10:35:15,777 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/concepts.png" | |
2020-05-07 10:35:15,780 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/concepts.png" | |
2020-05-07 10:35:15,783 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/create_tags.png" | |
2020-05-07 10:35:15,786 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/create_tags.png" | |
2020-05-07 10:35:15,788 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png" | |
2020-05-07 10:35:15,792 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/delta_dataset.png" | |
2020-05-07 10:35:15,795 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png" | |
2020-05-07 10:35:15,798 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/featurestore_incremental_pull.png" | |
2020-05-07 10:35:15,801 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png"
2020-05-07 10:35:15,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/fg_stats_1.png"
2020-05-07 10:35:15,807 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png"
2020-05-07 10:35:15,811 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/hudi_dataset.png"
2020-05-07 10:35:15,814 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png"
2020-05-07 10:35:15,817 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_1.png"
2020-05-07 10:35:15,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_2.png"
2020-05-07 10:35:15,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_2.png"
2020-05-07 10:35:15,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_3.png"
2020-05-07 10:35:15,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_3.png"
2020-05-07 10:35:15,835 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_4.png"
2020-05-07 10:35:15,839 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/image_dataset_tutorial_4.png"
2020-05-07 10:35:15,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png"
2020-05-07 10:35:15,846 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/incr_load.png"
2020-05-07 10:35:15,849 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/model.png"
2020-05-07 10:35:15,852 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/model.png"
2020-05-07 10:35:15,855 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/near_real_time.jpg"
2020-05-07 10:35:15,858 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/near_real_time.jpg"
2020-05-07 10:35:15,861 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/overview.png"
2020-05-07 10:35:15,864 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/overview.png"
2020-05-07 10:35:15,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm1.png"
2020-05-07 10:35:15,870 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm1.png"
2020-05-07 10:35:15,873 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm2.png"
2020-05-07 10:35:15,876 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm2.png"
2020-05-07 10:35:15,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png"
2020-05-07 10:35:15,883 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm3.png"
2020-05-07 10:35:15,886 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm4.png"
2020-05-07 10:35:15,888 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm4.png"
2020-05-07 10:35:15,891 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm5.png"
2020-05-07 10:35:15,894 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm5.png"
2020-05-07 10:35:15,896 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm6.png"
2020-05-07 10:35:15,899 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm6.png"
2020-05-07 10:35:15,902 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm7.png"
2020-05-07 10:35:15,905 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/petastorm7.png"
2020-05-07 10:35:15,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png"
2020-05-07 10:35:15,910 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/query_optimizer.png"
2020-05-07 10:35:15,913 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/select_fs.png"
2020-05-07 10:35:15,916 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/select_fs.png"
2020-05-07 10:35:15,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png"
2020-05-07 10:35:15,923 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/share_featurestore.png"
2020-05-07 10:35:15,926 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/to_admin.png"
2020-05-07 10:35:15,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/to_admin.png"
2020-05-07 10:35:15,933 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png"
2020-05-07 10:35:15,936 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/images/upsert_illustration.png"
2020-05-07 10:35:15,941 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageDatasetFeaturestore.ipynb"
2020-05-07 10:35:15,944 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageDatasetFeaturestore.ipynb"
2020-05-07 10:35:15,948 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageFeatureGroup.ipynb"
2020-05-07 10:35:15,951 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/image_datasets/ImageFeatureGroup.ipynb"
2020-05-07 10:35:15,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb"
2020-05-07 10:35:15,959 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/hudi/HudiOnHops.ipynb"
2020-05-07 10:35:15,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/delta/DeltaOnHops.ipynb"
2020-05-07 10:35:15,966 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/delta/DeltaOnHops.ipynb"
2020-05-07 10:35:15,970 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/datasets/TitanicTrainingDatasetPython.ipynb"
2020-05-07 10:35:15,973 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/datasets/TitanicTrainingDatasetPython.ipynb"
2020-05-07 10:35:15,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FatureStore.ipynb"
2020-05-07 10:35:15,980 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FatureStore.ipynb"
2020-05-07 10:35:15,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore-Setup.ipynb"
2020-05-07 10:35:15,986 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore-Setup.ipynb"
2020-05-07 10:35:15,988 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore.ipynb"
2020-05-07 10:35:15,991 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/Databricks-FeatureStore.ipynb"
2020-05-07 10:35:15,994 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb"
2020-05-07 10:35:15,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/DatabricksFeaturestoreTourPython.ipynb"
2020-05-07 10:35:16,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/FeatureStoreQuickStartDatabricks.ipynb"
2020-05-07 10:35:16,003 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/databricks/FeatureStoreQuickStartDatabricks.ipynb"
2020-05-07 10:35:16,007 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/S3-FeatureStore.ipynb"
2020-05-07 10:35:16,010 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/S3-FeatureStore.ipynb"
2020-05-07 10:35:16,014 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb"
2020-05-07 10:35:16,017 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/SageMakerFeaturestoreTourPython.ipynb"
2020-05-07 10:35:16,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv"
2020-05-07 10:35:16,034 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/Jupyter/aws/data/Sacramentorealestatetransactions.csv"
2020-05-07 10:35:16,104 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/Projects/demo_featurestore_harry001/TestJob/README.md"
2020-05-07 10:35:16,108 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/TestJob/README.md is closed by HopsFS_DFSClient_NONMAPREDUCE_-459498988_28
2020-05-07 10:35:17,553 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10147 State = UNDER_CONSTRUCTION for /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks
2020-05-07 10:35:17,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10147 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks
2020-05-07 10:35:17,564 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10147 State = COMMITTED size 1494 byte
2020-05-07 10:35:17,965 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks is closed by HopsFS_DFSClient_NONMAPREDUCE_-756788838_56
2020-05-07 10:35:17,967 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks"
2020-05-07 10:35:17,970 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__tstore.jks"
2020-05-07 10:35:17,986 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10148 State = UNDER_CONSTRUCTION for /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks
2020-05-07 10:35:17,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10148 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks
2020-05-07 10:35:17,995 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10148 State = COMMITTED size 3318 byte
2020-05-07 10:35:18,395 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks is closed by HopsFS_DFSClient_NONMAPREDUCE_-756788838_56
2020-05-07 10:35:18,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks"
2020-05-07 10:35:18,400 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__kstore.jks"
2020-05-07 10:35:18,416 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10149 State = UNDER_CONSTRUCTION for /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key
2020-05-07 10:35:18,422 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10149 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key
2020-05-07 10:35:18,425 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10149 State = COMMITTED size 64 byte
2020-05-07 10:35:18,667 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:35:18,827 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key is closed by HopsFS_DFSClient_NONMAPREDUCE_-756788838_56
2020-05-07 10:35:18,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key"
2020-05-07 10:35:18,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/hdfs/kafkacerts/demo_featurestore_harry001__harry000/application_1588844087764_0001/demo_featurestore_harry001__harry000__cert.key"
2020-05-07 10:35:27,005 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No component was locked in the path using sub tree flag. Path: "/user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress"
2020-05-07 10:35:27,059 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:27,059 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:35:27,059 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10150 State = UNDER_CONSTRUCTION for /user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress
2020-05-07 10:35:27,268 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/applicationHistory/application_1588844087764_0001_1.snappy.inprogress for HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:35:28,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:35:38,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:35:48,721 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:35:58,685 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:35:58,768 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/games_features_1/_temporary/0/_temporary/attempt_20200507103556_0066_m_000000_267/part-00000-75891584-a213-4cb9-b9bc-13eec19f6a9f-c000.snappy.orc is closed by HopsFS_DFSClient_attempt_20200507103556_0066_m_000000_267_-358058680_65
2020-05-07 10:35:59,617 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /apps/hive/warehouse/demo_featurestore_harry001_featurestore.db/games_features_1/_SUCCESS is closed by HopsFS_DFSClient_NONMAPREDUCE_-397473362_14
2020-05-07 10:36:08,688 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:36:14,356 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/hoodie.properties is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:14,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/20200507103614.inflight is closed by HopsFS_DFSClient_NONMAPREDUCE_861260045_14
2020-05-07 10:36:17,141 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:17,141 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:17,141 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10151 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_2
2020-05-07 10:36:17,175 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:17,175 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:17,175 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10152 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1
2020-05-07 10:36:17,270 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_2 for HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:17,291 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10151 State = UNDER_CONSTRUCTION size 93 byte
2020-05-07 10:36:17,298 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_2 is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:17,437 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:17,488 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 for HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:17,505 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10152 State = UNDER_CONSTRUCTION size 93 byte
2020-05-07 10:36:17,519 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:17,526 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.unprotectedRenameTo: failed to rename /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata_1 to /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/.hoodie_partition_metadata because destination exists
2020-05-07 10:36:17,541 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:17,544 INFO org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks: BLOCK* InvalidateBlocks: add bid= 10152 State = COMPLETE to [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010
2020-05-07 10:36:17,544 INFO BlockStateChange: BLOCK* addToInvalidates: bid= 10152 State = COMPLETE 10.0.4.12:50010
2020-05-07 10:36:17,565 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:18,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,244 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10153 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.parquet
2020-05-07 10:36:18,271 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,272 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,272 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10154 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.parquet
2020-05-07 10:36:18,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10153 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.parquet
2020-05-07 10:36:18,278 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,278 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:18,279 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10155 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.parquet
2020-05-07 10:36:18,282 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10153 State = COMMITTED size 434842 byte
2020-05-07 10:36:18,376 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10154 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.parquet
2020-05-07 10:36:18,380 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10154 State = COMMITTED size 434849 byte
2020-05-07 10:36:18,405 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10155 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.parquet
2020-05-07 10:36:18,410 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10155 State = COMMITTED size 434819 byte
2020-05-07 10:36:18,676 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2020-05-07 10:36:18,681 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/fc1fe50c-3322-4c91-b8bb-00f7c3bb1667-0_1-134-537_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:18,780 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:18,788 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/37ab9806-893f-4127-b4c9-fb6546eb3d16-0_2-134-538_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:18,817 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/38a7db2e-1619-4b67-8a65-8ac2202d0892-0_0-134-536_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:18,880 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:18,880 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:18,881 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10156 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.parquet | |
2020-05-07 10:36:18,965 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10156 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.parquet | |
2020-05-07 10:36:18,998 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10156 State = COMMITTED size 434862 byte | |
2020-05-07 10:36:19,016 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:19,142 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:19,191 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,191 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,192 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10157 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.parquet | |
2020-05-07 10:36:19,212 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10157 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.parquet | |
2020-05-07 10:36:19,215 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10157 State = COMMITTED size 434904 byte | |
2020-05-07 10:36:19,229 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,230 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,230 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10158 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.parquet | |
2020-05-07 10:36:19,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10158 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.parquet | |
2020-05-07 10:36:19,245 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10158 State = COMMITTED size 434895 byte | |
2020-05-07 10:36:19,378 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/82851757-3e4a-4953-a6fa-0ecab8f772a3-0_3-134-539_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:19,431 INFO BlockStateChange: BLOCK* BlockManager: ask 10.0.4.12:50010 to delete [blk_10152_1001] | |
2020-05-07 10:36:19,451 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:19,491 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,491 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,491 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10159 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.parquet | |
2020-05-07 10:36:19,514 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10159 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.parquet | |
2020-05-07 10:36:19,521 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10159 State = COMMITTED size 434925 byte | |
2020-05-07 10:36:19,616 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9acb654f-47f1-448e-aaf5-393063f4652c-0_4-134-540_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:19,647 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0a072f10-eaa4-47fe-b1b8-2a26d10426b9-0_5-134-541_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:19,701 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:19,783 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:19,853 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,854 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,854 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,854 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10160 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.parquet | |
2020-05-07 10:36:19,854 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:19,854 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10161 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.parquet | |
2020-05-07 10:36:19,872 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10160 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.parquet | |
2020-05-07 10:36:19,874 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10161 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.parquet | |
2020-05-07 10:36:19,878 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10160 State = COMMITTED size 434931 byte | |
2020-05-07 10:36:19,893 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10161 State = COMMITTED size 434912 byte | |
2020-05-07 10:36:19,920 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/3d98fb81-795d-4fee-a603-919a22ce116e-0_6-134-542_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:19,986 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:20,026 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,026 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,026 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10162 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.parquet | |
2020-05-07 10:36:20,042 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10162 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.parquet | |
2020-05-07 10:36:20,047 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10162 State = COMMITTED size 434919 byte | |
2020-05-07 10:36:20,279 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/cca3a42e-ec0c-44ce-955f-6ef1c696393b-0_7-134-543_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:20,280 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7c2b085a-afe5-4820-95b7-355da12baf0d-0_8-134-544_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:20,445 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/ed45a508-9e2e-47c6-a6f7-c7fe77bb9eb4-0_10-134-546_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:20,453 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/4b76f43e-f989-4606-acd0-62c93bdd7ace-0_9-134-545_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:20,458 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:20,511 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,511 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,511 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10163 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed45a508-9e2e-47c6-a6f7-c7fe77bb9eb4-0_10-134-546_20200507103614.parquet | |
2020-05-07 10:36:20,532 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,532 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,532 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10164 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.parquet | |
2020-05-07 10:36:20,558 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10163 State = UNDER_CONSTRUCTION size 0 byte | |
2020-05-07 10:36:20,567 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed45a508-9e2e-47c6-a6f7-c7fe77bb9eb4-0_10-134-546_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:20,585 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10164 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.parquet | |
2020-05-07 10:36:20,616 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10164 State = COMMITTED size 435048 byte | |
2020-05-07 10:36:20,623 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:20,679 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,679 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,679 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10165 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.parquet | |
2020-05-07 10:36:20,680 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74 | |
2020-05-07 10:36:20,712 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10165 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.parquet | |
2020-05-07 10:36:20,722 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,722 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:20,722 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10166 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.parquet | |
2020-05-07 10:36:20,723 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10165 State = COMMITTED size 435026 byte | |
2020-05-07 10:36:20,744 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10166 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.parquet | |
2020-05-07 10:36:20,750 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10166 State = COMMITTED size 435037 byte | |
2020-05-07 10:36:20,997 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/9f43e40c-1684-4ef0-9970-28d71f903e52-0_11-134-547_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:21,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:21,065 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,065 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:21,066 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10167 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.parquet
2020-05-07 10:36:21,074 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10167 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.parquet
2020-05-07 10:36:21,077 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10167 State = COMMITTED size 435028 byte
2020-05-07 10:36:21,118 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/89b783e7-43c3-47a7-8eb1-b5924da32afc-0_12-134-548_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:21,149 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f3792b85-5e39-47e8-bd9c-7e53f7874ea8-0_13-134-549_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:21,204 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,204 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,205 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10168 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.parquet
2020-05-07 10:36:21,219 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10168 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.parquet
2020-05-07 10:36:21,221 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/c8bf45ae-0ef2-4ddf-ad77-f965a92a0733-0_16-134-552_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,223 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10168 State = COMMITTED size 435046 byte
2020-05-07 10:36:21,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,244 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,244 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10169 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/c8bf45ae-0ef2-4ddf-ad77-f965a92a0733-0_16-134-552_20200507103614.parquet
2020-05-07 10:36:21,253 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10169 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:21,266 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/c8bf45ae-0ef2-4ddf-ad77-f965a92a0733-0_16-134-552_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,329 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/6d065511-629e-4db2-a864-42198e1e2e35-0_17-134-553_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,354 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,354 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,354 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10170 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/6d065511-629e-4db2-a864-42198e1e2e35-0_17-134-553_20200507103614.parquet
2020-05-07 10:36:21,362 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10170 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:21,366 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/6d065511-629e-4db2-a864-42198e1e2e35-0_17-134-553_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,420 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,439 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,440 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,440 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10171 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.parquet
2020-05-07 10:36:21,448 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10171 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.parquet
2020-05-07 10:36:21,451 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10171 State = COMMITTED size 435106 byte
2020-05-07 10:36:21,478 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/17cb9a03-df7a-43ec-921a-3d58673e2964-0_14-134-550_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:21,535 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/7d723ee8-1ca5-4e91-86d8-a819dddf791e-0_19-134-555_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:21,559 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,559 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,559 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10172 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7d723ee8-1ca5-4e91-86d8-a819dddf791e-0_19-134-555_20200507103614.parquet
2020-05-07 10:36:21,568 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10172 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:21,572 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7d723ee8-1ca5-4e91-86d8-a819dddf791e-0_19-134-555_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:21,624 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/a01ca614-84e3-4d80-ae19-7315fcea7b14-0_15-134-551_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:21,625 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:21,662 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,662 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,662 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10173 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.parquet
2020-05-07 10:36:21,680 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10173 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.parquet
2020-05-07 10:36:21,685 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10173 State = COMMITTED size 435089 byte
2020-05-07 10:36:21,698 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:21,732 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,732 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,733 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10174 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.parquet
2020-05-07 10:36:21,745 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10174 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.parquet
2020-05-07 10:36:21,749 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10174 State = COMMITTED size 435162 byte
2020-05-07 10:36:21,852 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/57156339-7924-4945-a18b-bf17e896b084-0_18-134-554_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,920 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:21,952 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,952 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:21,953 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10175 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.parquet
2020-05-07 10:36:21,965 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10175 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.parquet
2020-05-07 10:36:21,974 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10175 State = COMMITTED size 435114 byte
2020-05-07 10:36:22,086 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/0b7f0ab1-200a-420b-81da-c5b82640b199-0_20-134-556_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:22,148 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:22,150 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/08add3f3-483c-47f0-92a7-583e0aa1bbeb-0_21-134-557_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:22,174 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,174 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,174 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10176 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.parquet
2020-05-07 10:36:22,183 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10176 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.parquet
2020-05-07 10:36:22,188 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10176 State = COMMITTED size 435189 byte
2020-05-07 10:36:22,223 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:22,250 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,250 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,250 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10177 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.parquet
2020-05-07 10:36:22,258 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10177 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.parquet
2020-05-07 10:36:22,262 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10177 State = COMMITTED size 435190 byte
2020-05-07 10:36:22,373 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ff88e70f-fb59-45da-bf2c-ac95d495f27c-0_22-134-558_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:22,453 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:22,485 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,485 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,485 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10178 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.parquet
2020-05-07 10:36:22,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10178 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.parquet
2020-05-07 10:36:22,505 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10178 State = COMMITTED size 435117 byte
2020-05-07 10:36:22,587 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/f851f67f-2ebe-466b-92cb-5b05caec6daa-0_23-134-559_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:22,646 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/44665e12-befe-4dd1-ab53-af4e465bfadf-0_26-134-562_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:22,672 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/5fa07bb8-18e8-4c83-b0c0-697c53e31378-0_24-134-560_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:22,677 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,677 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,677 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10179 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44665e12-befe-4dd1-ab53-af4e465bfadf-0_26-134-562_20200507103614.parquet
2020-05-07 10:36:22,696 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10179 State = UNDER_CONSTRUCTION size 0 byte
2020-05-07 10:36:22,724 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/44665e12-befe-4dd1-ab53-af4e465bfadf-0_26-134-562_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:22,836 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:22,869 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65
2020-05-07 10:36:22,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,896 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,896 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10180 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.parquet
2020-05-07 10:36:22,928 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/2fd55a0a-20f0-4309-9961-45d8c058092f-0_25-134-561_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:22,931 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,931 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:22,931 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10181 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.parquet
2020-05-07 10:36:22,933 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10180 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.parquet
2020-05-07 10:36:22,955 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10181 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.parquet
2020-05-07 10:36:22,965 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10180 State = COMMITTED size 435202 byte
2020-05-07 10:36:23,040 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10181 State = COMMITTED size 435299 byte
2020-05-07 10:36:23,048 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_2132051116_74
2020-05-07 10:36:23,067 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,067 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2020-05-07 10:36:23,067 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate BlkInfoUnderConstruction bid= 10182 State = UNDER_CONSTRUCTION for /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.parquet
2020-05-07 10:36:23,075 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* BlkInfoUnderConstruction bid= 10182 State = COMMITTED is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/ed8d4b1b-467a-4a44-8d50-0f98373dfa69-0_29-134-565_20200507103614.parquet
2020-05-07 10:36:23,078 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: [DISK]DS-5f59be93-a406-4fa1-9c4d-06c7c92ebeb9:NORMAL:10.0.4.12:50010 is added to BlkInfoUnderConstruction bid= 10182 State = COMMITTED size 435239 byte
2020-05-07 10:36:23,340 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/7a2e9a96-fe9a-4d20-802c-bf646910dc8d-0_27-134-563_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43
2020-05-07 10:36:23,373 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/1/eaefb650-3148-4de7-86cb-46fea890718d-0_28-134-564_20200507103614.parquet is closed by HopsFS_DFSClient_NONMAPREDUCE_-2130769148_65 | |
2020-05-07 10:36:23,413 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /Projects/demo_featurestore_harry001/Resources/games_features_hudi_tour_1/.hoodie/.temp/20200507103614/1/73df5707-0fd2-4567-80d7-7d8497864118-0_30-134-566_20200507103614.marker is closed by HopsFS_DFSClient_NONMAPREDUCE_1280169176_43 | |
2020-05-07 10:36:23,449 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2020-05-07 10:36:23,449 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DB], storagePolicy=BlockStoragePolicy{DB:14, storageTypes=[DB], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy | |
2