Created December 5, 2015 08:14
2015-12-05 08:12:30,309 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ec2-52-24-174-146.us-west-2.compute.amazonaws.com/172.31.22.102
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_80
************************************************************/
2015-12-05 08:12:30,503 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-05 08:12:30,513 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2015-12-05 08:12:30,514 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-05 08:12:30,514 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-05 08:12:30,948 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2015-12-05 08:12:30,956 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2015-12-05 08:12:30,964 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2015-12-05 08:12:30,965 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2015-12-05 08:12:30,998 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2015-12-05 08:12:30,998 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2015-12-05 08:12:30,998 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2015-12-05 08:12:30,998 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2015-12-05 08:12:30,999 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2015-12-05 08:12:31,035 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
2015-12-05 08:12:31,035 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2015-12-05 08:12:31,035 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false
2015-12-05 08:12:31,042 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2015-12-05 08:12:31,042 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2015-12-05 08:12:31,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2015-12-05 08:12:31,305 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2015-12-05 08:12:31,305 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-05 08:12:31,318 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/ubuntu/hdfstmp/dfs/name/current/fsimage
2015-12-05 08:12:31,318 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2015-12-05 08:12:31,321 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2015-12-05 08:12:31,321 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/ubuntu/hdfstmp/dfs/name/current/fsimage of size 112 bytes loaded in 0 seconds.
2015-12-05 08:12:31,321 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /home/ubuntu/hdfstmp/dfs/name/current/edits
2015-12-05 08:12:31,322 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /home/ubuntu/hdfstmp/dfs/name/current/edits, reached end of edit log Number of transactions found: 0. Bytes read: 4
2015-12-05 08:12:31,322 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/home/ubuntu/hdfstmp/dfs/name/current/edits) ...
2015-12-05 08:12:31,322 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/home/ubuntu/hdfstmp/dfs/name/current/edits):
2015-12-05 08:12:31,322 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Padding position = -1 (-1 means padding not found)
2015-12-05 08:12:31,323 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit log length = 4
2015-12-05 08:12:31,323 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Read length = 4
2015-12-05 08:12:31,323 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Corruption length = 0
2015-12-05 08:12:31,323 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2015-12-05 08:12:31,324 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2015-12-05 08:12:31,324 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /home/ubuntu/hdfstmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2015-12-05 08:12:31,325 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/ubuntu/hdfstmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
2015-12-05 08:12:31,380 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/ubuntu/hdfstmp/dfs/name/current/edits
2015-12-05 08:12:31,380 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/ubuntu/hdfstmp/dfs/name/current/edits
2015-12-05 08:12:31,396 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-05 08:12:31,396 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 367 msecs
2015-12-05 08:12:31,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.9990000128746033
2015-12-05 08:12:31,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-05 08:12:31,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 30000
2015-12-05 08:12:31,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 7 msec
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-12-05 08:12:31,404 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-12-05 08:12:31,418 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2015-12-05 08:12:31,432 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 2 msec
2015-12-05 08:12:31,432 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 2 msec processing time, 2 msec clock time, 1 cycles
2015-12-05 08:12:31,432 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2015-12-05 08:12:31,432 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2015-12-05 08:12:31,438 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2015-12-05 08:12:31,455 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2015-12-05 08:12:31,456 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
	at java.lang.Thread.run(Thread.java:745)
2015-12-05 08:12:31,457 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2015-12-05 08:12:31,457 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/ubuntu/hdfstmp/dfs/name/current/edits
2015-12-05 08:12:31,457 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/ubuntu/hdfstmp/dfs/name/current/edits
2015-12-05 08:12:31,467 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to ec2-52-24-174-146.us-west-2.compute.amazonaws.com.com/54.201.82.69:8020 : Cannot assign requested address
	at org.apache.hadoop.ipc.Server.bind(Server.java:267)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:341)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1539)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:569)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:530)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:324)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Caused by: java.net.BindException: Cannot assign requested address
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:463)
	at sun.nio.ch.Net.bind(Net.java:455)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:265)
	... 8 more
2015-12-05 08:12:31,474 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ec2-52-24-174-146.us-west-2.compute.amazonaws.com/172.31.22.102
************************************************************/
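A likely reading of the `java.net.BindException` above: the failing address contains a doubled `.com.com` in the hostname, which resolves to 54.201.82.69, while the STARTUP_MSG shows the host's own address is 172.31.22.102. The OS refuses to bind a listener to an IP that is not assigned to any local interface, which is exactly the "Cannot assign requested address" errno. A minimal Python sketch reproducing that OS-level behavior (not Hadoop's code; it assumes 192.0.2.1, an RFC 5737 test address, is not assigned to the local machine):

```python
import socket

def try_bind(address, port=0):
    """Return None if the OS lets us bind to (address, port),
    otherwise return the OS error message (e.g. the string behind
    'Cannot assign requested address')."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((address, port))
        return None
    except OSError as exc:
        return exc.strerror
    finally:
        sock.close()

# Loopback is always a local interface, so this bind succeeds.
print(try_bind("127.0.0.1"))
# 192.0.2.1 is (by assumption) not a local address, so the bind
# fails the same way the NameNode's RPC server bind did.
print(try_bind("192.0.2.1"))
```

If this diagnosis is right, correcting the hostname in `fs.default.name` so it resolves to an address the EC2 instance actually owns would let the bind on port 8020 succeed.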