@bowyern
Created October 23, 2020 14:13
2020-10-21 23:39:49,788 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = fstore/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.2.0.0-RC4
STARTUP_MSG: classpath = TRIMMED
STARTUP_MSG: build = git@github.com:hopshadoop/hops.git -r 5e3672f34a246afc247b6b89176465874b9dd48e; compiled by 'jenkins' on 2020-10-02T09:51Z
STARTUP_MSG: java = 1.8.0_265
************************************************************/
2020-10-21 23:39:49,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-10-21 23:39:49,893 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-10-21 23:39:49,995 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2020-10-21 23:39:50,037 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-10-21 23:39:50,038 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-10-21 23:39:50,348 INFO io.hops.resolvingcache.Cache: starting Resolving Cache [InMemoryCache]
2020-10-21 23:39:50,390 INFO io.hops.metadata.ndb.ClusterjConnector: Database connect string: 192.168.100.249:1186
2020-10-21 23:39:50,390 INFO io.hops.metadata.ndb.ClusterjConnector: Database name: hops
2020-10-21 23:39:50,390 INFO io.hops.metadata.ndb.ClusterjConnector: Max Transactions: 1024
2020-10-21 23:39:50,391 INFO io.hops.metadata.ndb.DBSessionProvider: Database connect string: 192.168.100.249:1186
2020-10-21 23:39:50,391 INFO io.hops.metadata.ndb.DBSessionProvider: Database name: hops
2020-10-21 23:39:50,391 INFO io.hops.metadata.ndb.DBSessionProvider: Max Transactions: 1024
2020-10-21 23:39:51,683 INFO io.hops.security.UsersGroups: UsersGroups Initialized.
2020-10-21 23:39:51,886 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-10-21 23:39:51,918 INFO org.eclipse.jetty.util.log: Logging initialized @3274ms
2020-10-21 23:39:52,049 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-10-21 23:39:52,052 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-10-21 23:39:52,059 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-10-21 23:39:52,061 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-10-21 23:39:52,061 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-10-21 23:39:52,061 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-10-21 23:39:52,149 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-10-21 23:39:52,151 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-10-21 23:39:52,154 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2020-10-21 23:39:52,155 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-05T17:11:56Z, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2020-10-21 23:39:52,195 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@649725e3{/logs,file:///srv/hops/hadoop-3.2.0.0-RC4/logs/,AVAILABLE}
2020-10-21 23:39:52,195 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@4c168660{/static,file:///srv/hops/hadoop-3.2.0.0-RC4/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2020-10-21 23:39:52,303 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@4940809c{/,file:///srv/hops/hadoop-3.2.0.0-RC4/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2020-10-21 23:39:52,306 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@7eb01b12{HTTP/1.1,[http/1.1]}{0.0.0.0:50070}
2020-10-21 23:39:52,307 INFO org.eclipse.jetty.server.Server: Started @3664ms
2020-10-21 23:39:52,359 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-10-21 23:39:52,427 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-10-21 23:39:52,427 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-10-21 23:39:52,429 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-10-21 23:39:52,429 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 Oct 21 23:39:52
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 50
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerBatchSize = 500
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: misReplicatedNoOfBatchs = 20
2020-10-21 23:39:52,434 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerNbOfBatchs = 20
2020-10-21 23:39:52,440 INFO com.zaxxer.hikari.HikariDataSource: HikariCP pool HikariPool-0 is starting.
2020-10-21 23:39:52,698 WARN io.hops.common.IDsGeneratorFactory: Called setConfiguration more than once.
2020-10-21 23:39:52,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2020-10-21 23:39:52,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: superGroup = hdfs
2020-10-21 23:39:52,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-10-21 23:39:52,704 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-10-21 23:39:52,844 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Added new root inode
2020-10-21 23:39:52,844 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? true
2020-10-21 23:39:52,844 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-10-21 23:39:52,844 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 1039755
2020-10-21 23:39:52,844 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: The maximum number of xattrs per inode is set to 32
2020-10-21 23:39:52,844 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2020-10-21 23:39:52,861 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-10-21 23:39:52,861 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-10-21 23:39:52,861 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-10-21 23:39:52,866 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-10-21 23:39:53,194 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to 0.0.0.0:8020
2020-10-21 23:39:53,201 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 12000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2020-10-21 23:39:53,213 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-10-21 23:39:53,213 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port 8020
2020-10-21 23:39:53,213 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port 8020
2020-10-21 23:39:53,558 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2020-10-21 23:39:53,589 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 I can be the leader but I have weak locks. Retry with stronger lock
2020-10-21 23:39:53,590 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 periodic update. Stronger locks requested in next round
2020-10-21 23:39:53,593 INFO io.hops.leaderElection.LETransaction: LE Status: id 1 I am the new LEADER.
2020-10-21 23:39:53,690 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-10-21 23:39:54,722 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-10-21 23:39:54,728 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-10-21 23:39:54,729 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-10-21 23:39:54,729 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-10-21 23:39:54,739 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-10-21 23:39:54,747 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 2 secs
2020-10-21 23:39:54,749 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-10-21 23:39:54,752 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-10-21 23:39:54,752 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-10-21 23:39:54,773 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-10-21 23:39:54,841 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-10-21 23:39:54,841 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-10-21 23:39:54,865 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Leader Node RPC up at: fstore/127.0.1.1:8020
2020-10-21 23:39:55,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-10-21 23:39:55,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Catching up to latest edits from old active before taking over writer role in edits logs
2020-10-21 23:39:55,222 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Marking all datanodes as stale
2020-10-21 23:39:55,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Reprocessing replication and invalidation queues
2020-10-21 23:39:55,222 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-10-21 23:39:55,232 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-10-21 23:39:55,343 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: processMisReplicated read 0/10000 in the Ids range [0 - 10000] (max inodeId when the process started: 1)
2020-10-21 23:39:55,357 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2020-10-21 23:39:55,357 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-10-21 23:39:55,357 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-10-21 23:39:55,357 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-10-21 23:39:55,357 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-10-21 23:39:55,357 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 131 msec
2020-10-21 23:39:56,079 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-10-21 23:41:44,479 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2020-10-21 23:41:44,484 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fstore/127.0.1.1
************************************************************/
2020-10-21 23:41:48,382 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = fstore/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.2.0.0-RC4
STARTUP_MSG: classpath = <TRIMMED>
STARTUP_MSG: build = git@github.com:hopshadoop/hops.git -r 5e3672f34a246afc247b6b89176465874b9dd48e; compiled by 'jenkins' on 2020-10-02T09:51Z
STARTUP_MSG: java = 1.8.0_265
************************************************************/
2020-10-21 23:41:48,387 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-10-21 23:41:48,454 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-10-21 23:41:48,524 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2020-10-21 23:41:48,555 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-10-21 23:41:48,556 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-10-21 23:41:48,679 INFO io.hops.resolvingcache.Cache: starting Resolving Cache [InMemoryCache]
2020-10-21 23:41:48,715 INFO io.hops.metadata.ndb.ClusterjConnector: Database connect string: 192.168.100.249:1186
2020-10-21 23:41:48,715 INFO io.hops.metadata.ndb.ClusterjConnector: Database name: hops
2020-10-21 23:41:48,715 INFO io.hops.metadata.ndb.ClusterjConnector: Max Transactions: 1024
2020-10-21 23:41:48,716 INFO io.hops.metadata.ndb.DBSessionProvider: Database connect string: 192.168.100.249:1186
2020-10-21 23:41:48,716 INFO io.hops.metadata.ndb.DBSessionProvider: Database name: hops
2020-10-21 23:41:48,716 INFO io.hops.metadata.ndb.DBSessionProvider: Max Transactions: 1024
2020-10-21 23:41:49,809 INFO io.hops.security.UsersGroups: UsersGroups Initialized.
2020-10-21 23:41:49,905 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-10-21 23:41:49,917 INFO org.eclipse.jetty.util.log: Logging initialized @2193ms
2020-10-21 23:41:50,020 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-10-21 23:41:50,024 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-10-21 23:41:50,034 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-10-21 23:41:50,036 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-10-21 23:41:50,036 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-10-21 23:41:50,036 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-10-21 23:41:50,063 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-10-21 23:41:50,064 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-10-21 23:41:50,067 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2020-10-21 23:41:50,068 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-05T17:11:56Z, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2020-10-21 23:41:50,091 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@107ed6fc{/logs,file:///srv/hops/hadoop-3.2.0.0-RC4/logs/,AVAILABLE}
2020-10-21 23:41:50,091 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@186978a6{/static,file:///srv/hops/hadoop-3.2.0.0-RC4/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2020-10-21 23:41:50,174 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@5990e6c5{/,file:///srv/hops/hadoop-3.2.0.0-RC4/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2020-10-21 23:41:50,179 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@d78795{HTTP/1.1,[http/1.1]}{0.0.0.0:50070}
2020-10-21 23:41:50,179 INFO org.eclipse.jetty.server.Server: Started @2456ms
2020-10-21 23:41:50,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-10-21 23:41:50,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-10-21 23:41:50,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-10-21 23:41:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-10-21 23:41:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 Oct 21 23:41:50
2020-10-21 23:41:50,250 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 50
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerBatchSize = 500
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: misReplicatedNoOfBatchs = 20
2020-10-21 23:41:50,251 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: slicerNbOfBatchs = 20
2020-10-21 23:41:50,256 INFO com.zaxxer.hikari.HikariDataSource: HikariCP pool HikariPool-0 is starting.
2020-10-21 23:41:50,498 WARN io.hops.common.IDsGeneratorFactory: Called setConfiguration more than once.
2020-10-21 23:41:50,501 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2020-10-21 23:41:50,501 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: superGroup = hdfs
2020-10-21 23:41:50,501 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-10-21 23:41:50,503 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-10-21 23:41:50,558 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? true
2020-10-21 23:41:50,558 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-10-21 23:41:50,558 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 1039755
2020-10-21 23:41:50,558 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: The maximum number of xattrs per inode is set to 32
2020-10-21 23:41:50,558 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2020-10-21 23:41:50,564 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-10-21 23:41:50,564 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-10-21 23:41:50,564 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-10-21 23:41:50,569 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-10-21 23:41:50,667 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to 0.0.0.0:8020
2020-10-21 23:41:50,672 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 12000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2020-10-21 23:41:50,680 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-10-21 23:41:50,680 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #2 for port 8020
2020-10-21 23:41:50,680 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #3 for port 8020
2020-10-21 23:41:50,814 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2020-10-21 23:41:50,828 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I am a NON_LEADER process
2020-10-21 23:41:52,843 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I can be the leader but I have weak locks. Retry with stronger lock
2020-10-21 23:41:52,843 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 periodic update. Stronger locks requested in next round
2020-10-21 23:41:52,845 INFO io.hops.leaderElection.LETransaction: LE Status: id 2 I am the new LEADER.
2020-10-21 23:41:52,930 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-10-21 23:41:53,953 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-10-21 23:41:53,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-10-21 23:41:53,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-10-21 23:41:53,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-10-21 23:41:53,964 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-10-21 23:41:53,979 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 3 secs
2020-10-21 23:41:53,981 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-10-21 23:41:53,983 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-10-21 23:41:53,983 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: clearing the safe blocks table, this may take some time.
2020-10-21 23:41:53,988 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-10-21 23:41:54,018 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-10-21 23:41:54,018 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-10-21 23:41:54,027 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Leader Node RPC up at: fstore/127.0.1.1:8020
2020-10-21 23:41:54,157 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-10-21 23:41:54,157 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Catching up to latest edits from old active before taking over writer role in edits logs
2020-10-21 23:41:54,157 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Marking all datanodes as stale
2020-10-21 23:41:54,157 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Reprocessing replication and invalidation queues
2020-10-21 23:41:54,157 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-10-21 23:41:54,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-10-21 23:41:54,213 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: processMisReplicated read 0/10000 in the Ids range [0 - 10000] (max inodeId when the process started: 7)
2020-10-21 23:41:54,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2020-10-21 23:41:54,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-10-21 23:41:54,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-10-21 23:41:54,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-10-21 23:41:54,218 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-10-21 23:41:54,218 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 58 msec
2020-10-21 23:41:54,660 INFO org.apache.hadoop.fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 60 minutes.
2020-10-21 23:42:25,620 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) storage 3dce914b-0353-4f3f-bd39-3c955847cc2b
2020-10-21 23:42:25,621 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-10-21 23:42:25,622 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.100.249:50010
2020-10-21 23:42:25,645 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) storage 3dce914b-0353-4f3f-bd39-3c955847cc2b
2020-10-21 23:42:25,645 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: 192.168.100.249:50010 is replaced by DatanodeRegistration(127.0.0.1:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) with the same storageID 3dce914b-0353-4f3f-bd39-3c955847cc2b
2020-10-21 23:42:25,646 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/192.168.100.249:50010
2020-10-21 23:42:25,646 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2020-10-21 23:42:25,675 ERROR org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) is attempting to report storage ID 3dce914b-0353-4f3f-bd39-3c955847cc2b. Node 127.0.0.1:50010 is expected to serve this storage.
2020-10-21 23:42:25,675 INFO org.apache.hadoop.ipc.Server: IPC Server handler 13 on 8020, call Call#5 Retry#0 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.getNextNamenodeToSendBlockReport from 127.0.0.1:54344
org.apache.hadoop.hdfs.protocol.UnregisteredNodeException: Data node DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) is attempting to report storage ID 3dce914b-0353-4f3f-bd39-3c955847cc2b. Node 127.0.0.1:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanode(DatanodeManager.java:511)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getNextNamenodeToSendBlockReport(NameNode.java:1337)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getNextNamenodeToSendBlockReport(NameNodeRpcServer.java:1256)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.getNextNamenodeToSendBlockReport(DatanodeProtocolServerSideTranslatorPB.java:332)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:35625)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:868)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:814)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1821)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2900)
2020-10-21 23:42:25,696 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-10-21 23:42:26,691 ERROR org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) is attempting to report storage ID 3dce914b-0353-4f3f-bd39-3c955847cc2b. Node 127.0.0.1:50010 is expected to serve this storage.
2020-10-21 23:42:26,691 INFO org.apache.hadoop.ipc.Server: IPC Server handler 15 on 8020, call Call#7 Retry#0 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.getNextNamenodeToSendBlockReport from 127.0.0.1:54344
org.apache.hadoop.hdfs.protocol.UnregisteredNodeException: Data node DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) is attempting to report storage ID 3dce914b-0353-4f3f-bd39-3c955847cc2b. Node 127.0.0.1:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanode(DatanodeManager.java:511)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getNextNamenodeToSendBlockReport(NameNode.java:1337)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getNextNamenodeToSendBlockReport(NameNodeRpcServer.java:1256)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.getNextNamenodeToSendBlockReport(DatanodeProtocolServerSideTranslatorPB.java:332)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:35625)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:868)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:814)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1821)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2900)
2020-10-21 23:42:27,709 ERROR org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) is attempting to report storage ID 3dce914b-0353-4f3f-bd39-3c955847cc2b. Node 127.0.0.1:50010 is expected to serve this storage.
2020-10-21 23:42:27,709 INFO org.apache.hadoop.ipc.Server: IPC Server handler 11 on 8020, call Call#8 Retry#0 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.getNextNamenodeToSendBlockReport from 127.0.0.1:54344
org.apache.hadoop.hdfs.protocol.UnregisteredNodeException: Data node DatanodeRegistration(192.168.100.249:50010, datanodeUuid=3dce914b-0353-4f3f-bd39-3c955847cc2b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-50;cid=CID-0dcfe208-8599-4cd4-b816-a48677cb81d3;nsid=911;c=1603319801273) is attempting to report storage ID 3dce914b-0353-4f3f-bd39-3c955847cc2b. Node 127.0.0.1:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanode(DatanodeManager.java:511)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getNextNamenodeToSendBlockReport(NameNode.java:1337)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getNextNamenodeToSendBlockReport(NameNodeRpcServer.java:1256)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.getNextNamenodeToSendBlockReport(DatanodeProtocolServerSideTranslatorPB.java:332)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:35625)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:868)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:814)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1821)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2900)
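A note on the repeated UnregisteredNodeException above: the DataNode first registered under its LAN address (192.168.100.249:50010) and was then re-registered over loopback (127.0.0.1:50010) with the same storage ID, so the NameNode rejects further reports arriving from the LAN address. The STARTUP_MSG line "host = fstore/127.0.1.1" points at the usual cause: the machine's host name resolves to a loopback address in /etc/hosts, so the node identifies itself inconsistently. A minimal sketch of the kind of /etc/hosts entry this typically needs, assuming 192.168.100.249 is the machine's real LAN IP (hypothetical values, adjust for your environment):

    # /etc/hosts
    # Map the host name to its LAN IP instead of 127.0.1.1 so the
    # DataNode always registers under one stable address.
    127.0.0.1        localhost
    192.168.100.249  fstore

After correcting name resolution, restarting the DataNode and NameNode should let the node re-register under a single address.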