@rajcspsg
Created October 7, 2019 04:04
docker-hadoop DataNode start issue: every datanode task fails to resolve the `namenode` hostname and loops forever waiting for namenode:50070.
[vagrant@master docker-hadoop-master]$ docker service logs hadoop_datanode
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring core
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting hadoop.proxyuser.hue.hosts=*
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting fs.defaultFS=hdfs://namenode:9000
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting hadoop.http.staticuser.user=root
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting hadoop.proxyuser.hue.groups=*
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring hdfs
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting dfs.webhdfs.enabled=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting dfs.permissions.enabled=false
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring yarn
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.timeline-service.enabled=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.timeline-service.generic-application-history.enabled=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.log-aggregation-enable=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.hostname=resourcemanager
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.timeline-service.hostname=historyserver
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.address=resourcemanager:8032
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.map.output.compress=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.nodemanager.resource.memory-mb=16384
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.resourcemanager.recovery.enabled=true
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.nodemanager.resource.cpu-vcores=8
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring httpfs
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring kms
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring mapred
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.map.java.opts=-Xmx3072m
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.reduce.memory.mb=8192
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.reduce.java.opts=-Xmx6144m
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.map.memory.mb=4096
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapred.child.java.opts=-Xmx4096m
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.framework.name=yarn
hadoop_datanode.0.bwruqadoq28r@master.com | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.bwruqadoq28r@master.com | Configuring for multihomed network
hadoop_datanode.0.bwruqadoq28r@master.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.bwruqadoq28r@master.com | [1/100] check for namenode:50070...
hadoop_datanode.0.bwruqadoq28r@master.com | [1/100] namenode:50070 is not available yet
hadoop_datanode.0.bwruqadoq28r@master.com | [1/100] try in 5s once again ...
hadoop_datanode.0.bwruqadoq28r@master.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.bwruqadoq28r@master.com | [2/100] check for namenode:50070...
hadoop_datanode.0.bwruqadoq28r@master.com | [2/100] namenode:50070 is not available yet
hadoop_datanode.0.bwruqadoq28r@master.com | [2/100] try in 5s once again ...
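The `[n/100] check for namenode:50070...` lines come from the image's entrypoint readiness loop, which polls the NameNode before starting the DataNode. A minimal sketch of that kind of TCP probe (a reconstruction for illustration, not the actual entrypoint code; host and port are taken from the log above):

```python
import socket
import time

def wait_for(host, port, attempts=100, delay=5.0):
    """Retry a TCP connect until the service answers, mirroring the
    [n/100] readiness loop in the datanode logs above."""
    for i in range(1, attempts + 1):
        try:
            # A successful connect means the port is accepting connections.
            with socket.create_connection((host, port), timeout=2):
                print(f"[{i}/{attempts}] {host}:{port} is available")
                return True
        except OSError:
            # Covers both DNS failure and connection refused/timeout.
            print(f"[{i}/{attempts}] {host}:{port} is not available yet")
            time.sleep(delay)
    return False
```

Note that on Hadoop 3.x the NameNode web UI default port moved from 50070 to 9870, so against the hadoop-3.1.2 image shown in these logs a probe of namenode:50070 may never succeed even once DNS resolves; the entrypoint's target port is worth checking.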
(The restart loop repeats the same output for every subsequent task attempt: nm128y78xaq9, nb2mz1252yxe, wu4aund0z4zh and wtykq1q8gwof on master.com, and znvx2o75fcr4 and xz6nvjd0o64s on worker2.com, all print the identical configuration block, followed by the same "namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable" error and the same "[n/100] check for namenode:50070..." retry loop.)
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.timeline-service.generic-application-history.enabled=true
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.log-aggregation-enable=true
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.hostname=resourcemanager
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.timeline-service.hostname=historyserver
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.address=resourcemanager:8032
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.map.output.compress=true
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.nodemanager.resource.memory-mb=16384
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.resourcemanager.recovery.enabled=true
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.nodemanager.resource.cpu-vcores=8
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | Configuring httpfs
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | Configuring kms
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | Configuring mapred
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.map.java.opts=-Xmx3072m
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.reduce.memory.mb=8192
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.reduce.java.opts=-Xmx6144m
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.map.memory.mb=4096
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapred.child.java.opts=-Xmx4096m
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.framework.name=yarn
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | Configuring for multihomed network
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | [1/100] check for namenode:50070...
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | [1/100] namenode:50070 is not available yet
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | [1/100] try in 5s once again ...
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | [2/100] check for namenode:50070...
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | [2/100] namenode:50070 is not available yet
hadoop_datanode.0.xz6nvjd0o64s@worker2.com | [2/100] try in 5s once again ...
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring core
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting hadoop.proxyuser.hue.hosts=*
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting fs.defaultFS=hdfs://namenode:9000
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting hadoop.http.staticuser.user=root
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting hadoop.proxyuser.hue.groups=*
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring hdfs
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting dfs.webhdfs.enabled=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting dfs.permissions.enabled=false
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring yarn
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.timeline-service.enabled=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.timeline-service.generic-application-history.enabled=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.log-aggregation-enable=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.hostname=resourcemanager
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.timeline-service.hostname=historyserver
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.address=resourcemanager:8032
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.map.output.compress=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.nodemanager.resource.memory-mb=16384
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.resourcemanager.recovery.enabled=true
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.nodemanager.resource.cpu-vcores=8
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring httpfs
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring kms
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring mapred
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.map.java.opts=-Xmx3072m
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.reduce.memory.mb=8192
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.reduce.java.opts=-Xmx6144m
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.map.memory.mb=4096
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapred.child.java.opts=-Xmx4096m
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.framework.name=yarn
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | Configuring for multihomed network
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | [1/100] check for namenode:50070...
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | [1/100] namenode:50070 is not available yet
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | [1/100] try in 5s once again ...
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | [2/100] check for namenode:50070...
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | [2/100] namenode:50070 is not available yet
hadoop_datanode.0.qs8kv84n3ijy@worker2.com | [2/100] try in 5s once again ...
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring core
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting hadoop.proxyuser.hue.hosts=*
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting fs.defaultFS=hdfs://namenode:9000
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting hadoop.http.staticuser.user=root
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting hadoop.proxyuser.hue.groups=*
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring hdfs
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting dfs.webhdfs.enabled=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting dfs.permissions.enabled=false
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring yarn
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.timeline-service.enabled=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.timeline-service.generic-application-history.enabled=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.log-aggregation-enable=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.hostname=resourcemanager
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.timeline-service.hostname=historyserver
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.address=resourcemanager:8032
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.map.output.compress=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.nodemanager.resource.memory-mb=16384
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.resourcemanager.recovery.enabled=true
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.nodemanager.resource.cpu-vcores=8
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring httpfs
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring kms
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring mapred
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.map.java.opts=-Xmx3072m
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.reduce.memory.mb=8192
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.reduce.java.opts=-Xmx6144m
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.map.memory.mb=4096
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapred.child.java.opts=-Xmx4096m
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.framework.name=yarn
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.ttbz9ab9cfzl@worker2.com | Configuring for multihomed network
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring core
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting hadoop.proxyuser.hue.hosts=*
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting fs.defaultFS=hdfs://namenode:9000
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting hadoop.http.staticuser.user=root
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting hadoop.proxyuser.hue.groups=*
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring hdfs
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting dfs.webhdfs.enabled=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting dfs.permissions.enabled=false
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring yarn
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.timeline-service.enabled=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.timeline-service.generic-application-history.enabled=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.log-aggregation-enable=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.hostname=resourcemanager
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.timeline-service.hostname=historyserver
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.address=resourcemanager:8032
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.map.output.compress=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.nodemanager.resource.memory-mb=16384
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.resourcemanager.recovery.enabled=true
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.nodemanager.resource.cpu-vcores=8
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring httpfs
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring kms
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring mapred
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.map.java.opts=-Xmx3072m
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.reduce.memory.mb=8192
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.reduce.java.opts=-Xmx6144m
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.map.memory.mb=4096
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapred.child.java.opts=-Xmx4096m
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.framework.name=yarn
hadoop_datanode.0.10xk1n5zqd68@worker2.com | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.1.2/
hadoop_datanode.0.10xk1n5zqd68@worker2.com | Configuring for multihomed network
hadoop_datanode.0.10xk1n5zqd68@worker2.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.10xk1n5zqd68@worker2.com | [1/100] check for namenode:50070...
hadoop_datanode.0.10xk1n5zqd68@worker2.com | [1/100] namenode:50070 is not available yet
hadoop_datanode.0.10xk1n5zqd68@worker2.com | [1/100] try in 5s once again ...
hadoop_datanode.0.10xk1n5zqd68@worker2.com | namenode: forward host lookup failed: Host name lookup failure : Resource temporarily unavailable
hadoop_datanode.0.10xk1n5zqd68@worker2.com | [2/100] check for namenode:50070...
hadoop_datanode.0.10xk1n5zqd68@worker2.com | [2/100] namenode:50070 is not available yet
hadoop_datanode.0.10xk1n5zqd68@worker2.com | [2/100] try in 5s once again ...
error from daemon in stream: Error grabbing logs: rpc error: code = Unknown desc = warning: incomplete log stream. some logs could not be retrieved for the following reasons: node g6mug58tld4aikobdv3ic9p9y is not available, node c8uhfvyhhlmuxp2p3ei3ymrj5 is not available
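The repeating `[n/100] check for namenode:50070...` lines above come from the image's startup wait loop: the datanode entrypoint probes the namenode's web port up to 100 times, sleeping 5 s between attempts, and the `forward host lookup failed` lines show that DNS resolution of `namenode` on the overlay network is what's failing, not the port itself. The real entrypoint is a shell script; the following is only a minimal Python sketch of the same check-and-retry pattern (function name, attempt count, and delay mirror the log output but are illustrative, not the actual implementation):

```python
import socket
import time

def wait_for(host, port, attempts=100, delay=5.0):
    """Retry a TCP connection to host:port, mimicking the
    '[n/100] check for namenode:50070...' loop in the logs.
    Returns True as soon as the port accepts, False after
    all attempts fail (DNS errors and refused connections
    both surface as OSError)."""
    for n in range(1, attempts + 1):
        print(f"[{n}/{attempts}] check for {host}:{port}...")
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            print(f"[{n}/{attempts}] {host}:{port} is not available yet")
            if n < attempts:
                print(f"[{n}/{attempts}] try in {int(delay)}s once again ...")
                time.sleep(delay)
    return False
```

Because the loop never distinguishes a name-lookup failure from a closed port, a datanode on a node that cannot resolve the `namenode` service (as in the "node ... is not available" daemon error above) exhausts all 100 attempts and the task is rescheduled, producing the repeated config blocks in this log.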