@kawamon
Created March 11, 2014 14:20
[root@huetest hadoop-hdfs]# sudo -u hdfs hdfs namenode -upgrade
14/03/11 02:22:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = huetest/127.0.0.1
STARTUP_MSG: args = [-upgrade]
STARTUP_MSG: version = 2.2.0-cdh5.0.0-beta-2
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/avro.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/hue-plugins-3.0.0-cdh5.0.0-beta-1.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/slf4j-log4j12.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/.//parquet-encoding.jar:/usr/lib/hadoop/.//hadoop-annotations-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//parquet-hadoop.jar:/usr/lib/hadoop/.//parquet-pig.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//hadoop-auth-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/.//parquet-pig-bundle.jar:/usr/lib/hadoop/.//hadoop-common-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/.//parquet-column.jar:/usr/lib/hadoop/.//parquet-scrooge.jar:/usr/lib/hadoop/.//parquet-format.jar:/usr/lib/hadoop/.//parquet-avro.jar:/usr/lib/hadoop/.//parquet-common.jar:/usr/lib/hadoop/.//parquet-test-hadoop2.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-common-2.2.0-cdh5.0.0-beta-2-tests.jar:/usr/lib/hadoop/.//parquet-generator.jar:/usr/lib/hadoop/.//parquet-thrift.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle.jar:/usr/lib/hadoop/.//parquet-cascading.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.2.0-cdh5.0.0-beta-2-tests.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/lib/jline-0.9.94.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.2.0-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/servlet-api-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jets3t-0.9.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-digester-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-json-1.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.2.0-mr1-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jline-0.9.94.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jettison-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-lang-2.6.jar:/usr/lib/hadoop-0.20-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-6.1.26.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/junit-4.8.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-compiler.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-api-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/httpcore-4.2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/guava-11.0.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/httpclient-4.2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-examples.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-test.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-ant-2.2.0-mr1-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-core-2.2.0-mr1-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-test-2.2.0-mr1-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-examples-2.2.0-mr1-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-core.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-tools-2.2.0-mr1-cdh5.0.0-beta-2.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-tools.jar
STARTUP_MSG: build = git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on 2014-02-07T18:47Z
STARTUP_MSG: java = 1.7.0_45
************************************************************/
14/03/11 02:22:55 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/03/11 02:22:55 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
14/03/11 02:22:55 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
14/03/11 02:22:55 INFO impl.MetricsSystemImpl: NameNode metrics system started
14/03/11 02:22:56 INFO hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
14/03/11 02:22:56 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
14/03/11 02:22:56 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
14/03/11 02:22:56 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
14/03/11 02:22:56 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
14/03/11 02:22:56 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
14/03/11 02:22:56 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
14/03/11 02:22:56 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
14/03/11 02:22:56 INFO http.HttpServer2: Added filter 'SPNEGO' (class=org.apache.hadoop.hdfs.web.AuthFilter)
14/03/11 02:22:56 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
14/03/11 02:22:56 INFO http.HttpServer2: Jetty bound to port 50070
14/03/11 02:22:56 INFO mortbay.log: jetty-6.1.26
14/03/11 02:22:57 WARN server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
14/03/11 02:22:57 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
14/03/11 02:22:57 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
14/03/11 02:22:57 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
14/03/11 02:22:57 INFO namenode.FSNamesystem: fsLock is fair:true
14/03/11 02:22:57 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/03/11 02:22:57 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/03/11 02:22:57 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/03/11 02:22:57 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/03/11 02:22:57 INFO util.GSet: Computing capacity for map BlocksMap
14/03/11 02:22:57 INFO util.GSet: VM type = 64-bit
14/03/11 02:22:57 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/03/11 02:22:57 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/03/11 02:22:57 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/03/11 02:22:57 INFO blockmanagement.BlockManager: defaultReplication = 1
14/03/11 02:22:57 INFO blockmanagement.BlockManager: maxReplication = 512
14/03/11 02:22:57 INFO blockmanagement.BlockManager: minReplication = 1
14/03/11 02:22:57 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/03/11 02:22:57 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/03/11 02:22:57 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/03/11 02:22:57 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/03/11 02:22:57 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/03/11 02:22:57 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
14/03/11 02:22:57 INFO namenode.FSNamesystem: supergroup = supergroup
14/03/11 02:22:57 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/03/11 02:22:57 INFO namenode.FSNamesystem: HA Enabled: false
14/03/11 02:22:57 INFO namenode.FSNamesystem: Append Enabled: true
14/03/11 02:22:57 INFO util.GSet: Computing capacity for map INodeMap
14/03/11 02:22:57 INFO util.GSet: VM type = 64-bit
14/03/11 02:22:57 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/03/11 02:22:57 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/03/11 02:22:57 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/03/11 02:22:57 INFO util.GSet: Computing capacity for map cachedBlocks
14/03/11 02:22:57 INFO util.GSet: VM type = 64-bit
14/03/11 02:22:57 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/03/11 02:22:57 INFO util.GSet: capacity = 2^18 = 262144 entries
14/03/11 02:22:57 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/03/11 02:22:57 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/03/11 02:22:57 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
14/03/11 02:22:57 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/03/11 02:22:57 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/03/11 02:22:57 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/03/11 02:22:57 INFO util.GSet: VM type = 64-bit
14/03/11 02:22:57 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/03/11 02:22:57 INFO util.GSet: capacity = 2^15 = 32768 entries
14/03/11 02:22:58 INFO common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename 4773@huetest
14/03/11 02:22:58 INFO common.Storage: Using clusterid: CID-8e5e6b46-55f0-49d5-be4d-6277174325c0
14/03/11 02:22:58 INFO namenode.FileJournalManager: Recovering unfinalized segments in /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current
14/03/11 02:22:58 INFO namenode.FileJournalManager: Finalizing edits file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_inprogress_0000000000000003286 -> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_0000000000000003286-0000000000000003286
14/03/11 02:22:58 INFO namenode.FSImage: Loading image file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage_0000000000000003285 using no compression
14/03/11 02:22:58 INFO namenode.FSImage: Number of files = 172
14/03/11 02:22:58 INFO namenode.FSImage: Number of files under construction = 0
14/03/11 02:22:58 INFO namenode.FSImage: Image file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage_0000000000000003285 of size 19130 bytes loaded in 0 seconds.
14/03/11 02:22:58 INFO namenode.FSImage: Loaded image for txid 3285 from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage_0000000000000003285
14/03/11 02:22:58 INFO namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@2b1ecc13 expecting start txid #3286
14/03/11 02:22:58 INFO namenode.FSImage: Start loading edits file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_0000000000000003286-0000000000000003286
14/03/11 02:22:58 INFO namenode.EditLogInputStream: Fast-forwarding stream '/var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_0000000000000003286-0000000000000003286' to transaction ID 3286
14/03/11 02:22:58 INFO namenode.FSImage: Edits file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_0000000000000003286-0000000000000003286 of size 1048576 edits # 1 loaded in 0 seconds
14/03/11 02:22:58 INFO namenode.FSImage: Starting upgrade of local storage directories.
old LV = -47; old CTime = 0.
new LV = -51; new CTime = 1394518978593
14/03/11 02:22:58 INFO namenode.NNUpgradeUtil: Starting upgrade of storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
14/03/11 02:22:58 INFO namenode.FSImage: Saving image file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage.ckpt_0000000000000003286 using no compression
14/03/11 02:22:58 INFO namenode.FSImage: Image file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage.ckpt_0000000000000003286 of size 19150 bytes saved in 0 seconds.
14/03/11 02:22:58 INFO namenode.FSImageTransactionalStorageInspector: No version file in /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
14/03/11 02:22:58 INFO namenode.NNUpgradeUtil: Performing upgrade of storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
14/03/11 02:22:58 INFO namenode.FSEditLog: Starting log segment at 3287
14/03/11 02:22:58 INFO namenode.NameCache: initialized with 0 entries 0 lookups
14/03/11 02:22:58 INFO namenode.FSNamesystem: Finished loading FSImage in 735 msecs
14/03/11 02:22:58 INFO namenode.NameNode: RPC server is binding to huetest:8020
14/03/11 02:22:59 INFO ipc.Server: Starting Socket Reader #1 for port 8020
14/03/11 02:22:59 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
14/03/11 02:22:59 INFO namenode.FSNamesystem: Number of blocks under construction: 0
14/03/11 02:22:59 INFO namenode.FSNamesystem: Number of blocks under construction: 0
14/03/11 02:22:59 INFO hdfs.StateChange: STATE* Safe mode ON.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:22:59 INFO ipc.Server: IPC Server Responder: starting
14/03/11 02:22:59 INFO ipc.Server: IPC Server listener on 8020: starting
14/03/11 02:22:59 INFO namenode.NameNode: NameNode RPC up at: huetest/127.0.0.1:8020
14/03/11 02:22:59 INFO namenode.FSNamesystem: Starting services required for active state
14/03/11 02:22:59 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 300000 milliseconds
14/03/11 02:22:59 INFO blockmanagement.CacheReplicationMonitor: Rescanning because of pending operations
14/03/11 02:22:59 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 13 millisecond(s).
14/03/11 02:23:02 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:02 INFO ipc.Server: IPC Server handler 2 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54864 Call#446 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Log not rolled. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:10 INFO ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 127.0.0.1:54865 Call#63 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Log not rolled. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:13 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:13 INFO ipc.Server: IPC Server handler 3 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54866 Call#450 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:23 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:23 INFO ipc.Server: IPC Server handler 2 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54866 Call#454 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:33 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:33 INFO ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54867 Call#458 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:43 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:43 INFO ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54868 Call#462 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:53 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:23:53 INFO ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54869 Call#466 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:03 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:03 INFO ipc.Server: IPC Server handler 3 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54870 Call#470 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Log not rolled. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:10 INFO ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 127.0.0.1:54871 Call#65 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Log not rolled. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:13 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:13 INFO ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 127.0.0.1:54870 Call#474 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/system. Name node is in safe mode.
The reported blocks 0 needs additional 96 blocks to reach the threshold 0.9990 of total blocks 96.
Safe mode will be turned off automatically
14/03/11 02:24:23 ERROR security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /var/lib/hadoop-hdfs/cache/mapred/mapred/sys
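A note on the repeated SafeModeException block above (added context, not part of the original log): the NameNode stays in safe mode until the number of reported blocks reaches dfs.namenode.safemode.threshold-pct (0.999 here) of the 96 total blocks, and with no DataNodes reporting yet it still needs all 96. A minimal sketch of that arithmetic, with a hypothetical helper name (not the actual Hadoop source):

```python
import math

def blocks_needed(reported, total, threshold_pct):
    """Blocks still required before the NameNode can leave safe mode.

    Mirrors the message 'The reported blocks R needs additional N blocks
    to reach the threshold T of total blocks B' (illustrative only).
    """
    required = math.ceil(total * threshold_pct)  # minimum reported blocks
    return max(0, required - reported)

# Values from the log: 0 blocks reported, 96 total, threshold 0.999
print(blocks_needed(0, 96, 0.999))  # 96
```

Safe mode lifts automatically once the DataNodes register and send their block reports; `hdfs dfsadmin -safemode get` shows the current state, and after the upgraded cluster is verified, `hdfs dfsadmin -finalizeUpgrade` completes the upgrade.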