
@miguno
Created November 9, 2011 09:10
anubhav
2011-11-08 20:34:36,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = dinesh-Studio-1555/172.16.9.93
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-11-08 20:34:41,568 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted.
2011-11-08 20:34:41,568 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2011-11-08 20:34:41,644 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2011-11-08 20:34:41,648 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2011-11-08 20:34:41,651 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2011-11-08 20:34:46,723 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-11-08 20:34:46,797 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2011-11-08 20:34:46,797 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2011-11-08 20:34:46,797 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2011-11-08 20:34:46,797 INFO org.mortbay.log: jetty-6.1.14
2011-11-08 20:34:47,129 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2011-11-08 20:34:47,135 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2011-11-08 20:34:52,157 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2011-11-08 20:34:52,162 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-11-08 20:34:52,164 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2011-11-08 20:34:52,164 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2011-11-08 20:34:52,164 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2011-11-08 20:34:52,167 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2011-11-08 20:34:52,167 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(dinesh-Studio-1555:50010, storageID=, infoPort=50075, ipcPort=50020)
2011-11-08 20:34:52,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-952196365-172.16.9.93-50010-1320764692169 is assigned to data-node 172.16.9.93:50010
2011-11-08 20:34:52,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.9.93:50010, storageID=DS-952196365-172.16.9.93-50010-1320764692169, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2011-11-08 20:34:52,184 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2011-11-08 20:34:52,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 9 msecs
2011-11-08 20:34:52,205 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2011-11-08 20:51:18,633 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 2 msecs