@nathan815
Last active November 6, 2023 23:50
Updated docker-compose.yml for big-data-europe/docker-hadoop
version: "2.1"
services:
  namenode:
    build: ./namenode
    container_name: namenode
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
    env_file:
      - ./hadoop.env
    ports:
      - "9870:9870"
  resourcemanager:
    build: ./resourcemanager
    container_name: resourcemanager
    restart: on-failure
    depends_on:
      - namenode
      - datanode1
      - datanode2
      - datanode3
    env_file:
      - ./hadoop.env
    ports:
      - "8089:8088"
  historyserver:
    build: ./historyserver
    container_name: historyserver
    depends_on:
      - namenode
      - datanode1
      - datanode2
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
    ports:
      - "8188:8188"
  nodemanager1:
    build: ./nodemanager
    container_name: nodemanager1
    depends_on:
      - namenode
      - datanode1
      - datanode2
    env_file:
      - ./hadoop.env
    ports:
      - "8042:8042"
  datanode1:
    build: ./datanode
    container_name: datanode1
    depends_on:
      - namenode
    volumes:
      - hadoop_datanode1:/hadoop/dfs/data
    env_file:
      - ./hadoop.env
  datanode2:
    build: ./datanode
    container_name: datanode2
    depends_on:
      - namenode
    volumes:
      - hadoop_datanode2:/hadoop/dfs/data
    env_file:
      - ./hadoop.env
  datanode3:
    build: ./datanode
    container_name: datanode3
    depends_on:
      - namenode
    volumes:
      - hadoop_datanode3:/hadoop/dfs/data
    env_file:
      - ./hadoop.env

volumes:
  hadoop_namenode:
  hadoop_datanode1:
  hadoop_datanode2:
  hadoop_datanode3:
  hadoop_historyserver:
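
To try the file out, one reasonable sequence (a sketch, assuming this replaces docker-compose.yml in a clone of the upstream big-data-europe/docker-hadoop repository) is:

git clone https://github.com/big-data-europe/docker-hadoop
cd docker-hadoop
# drop in the docker-compose.yml above, then build and start everything
docker-compose up -d --build
docker-compose ps    # all seven containers should eventually show "Up"

With the port mappings above, the NameNode UI is then at http://localhost:9870, the ResourceManager UI at http://localhost:8089, the history server at http://localhost:8188, and the NodeManager at http://localhost:8042.
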
@pmsoltani

pmsoltani commented Sep 3, 2020

🙏
I've used it with this tutorial:
How to set up a Hadoop cluster in Docker

@Fyroze

Fyroze commented Sep 15, 2020

Hi,

For some reason the containers are exiting a few seconds after creation. Is there something I am missing?

CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS                           PORTS   NAMES
a630b40ef2a9   docker-hadoop_resourcemanager   "/entrypoint.sh /run…"   22 seconds ago   Restarting (126) 4 seconds ago           resourcemanager
2db515b7a7d6   docker-hadoop_historyserver     "/entrypoint.sh /run…"   22 seconds ago   Exited (126) 19 seconds ago              historyserver
50e680cfecf4   docker-hadoop_nodemanager1      "/entrypoint.sh /run…"   22 seconds ago   Exited (126) 19 seconds ago              nodemanager1
25edc5a50f0e   docker-hadoop_datanode3         "/entrypoint.sh /run…"   24 seconds ago   Exited (126) 20 seconds ago              datanode3
48356ca3ca74   docker-hadoop_datanode2         "/entrypoint.sh /run…"   24 seconds ago   Exited (126) 20 seconds ago              datanode2
868e08b14f99   docker-hadoop_datanode1         "/entrypoint.sh /run…"   24 seconds ago   Exited (126) 20 seconds ago              datanode1
6587316b15cf   docker-hadoop_namenode          "/entrypoint.sh /run…"   24 seconds ago   Exited (126) 23 seconds ago              namenode
c380a3aa233d   debian                          "nsenter -t 1 -m -u …"   41 minutes ago   Up 41 minutes                            compassionate_carson

@nathan815
Author

Hi @Fyroze, you can try looking at the logs of all containers with:

docker-compose logs
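
If the combined output is hard to read, the same command also accepts a single service name (and -f to follow it live), for example:

docker-compose logs -f namenode
docker-compose logs --tail=50 datanode1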

@Fyroze

Fyroze commented Sep 17, 2020

Thank you Nathan.

The following error is seen in all the logs for the NN, DN, RM, etc. Could it be due to copying the files to Windows and running them from Windows?
/entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory

PS C:\Users\Tasmiya\Documents\GitHub\docker-hadoop> docker-compose logs
Attaching to historyserver, resourcemanager, nodemanager1, datanode1, datanode2, datanode3, namenode
datanode3 | Configuring core
datanode3 | - Setting hadoop.proxyuser.hue.hosts=*
datanode3 | - Setting fs.defaultFS=hdfs://namenode:9000
datanode3 | - Setting hadoop.http.staticuser.user=root
datanode3 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
datanode3 | - Setting hadoop.proxyuser.hue.groups=*
datanode3 | Configuring hdfs
datanode3 | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
datanode3 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
datanode3 | - Setting dfs.webhdfs.enabled=true
datanode3 | - Setting dfs.permissions.enabled=false
datanode3 | Configuring yarn
datanode3 | - Setting yarn.timeline-service.enabled=true
datanode3 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
datanode3 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
datanode3 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
datanode3 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
datanode3 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
datanode3 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
datanode3 | - Setting yarn.timeline-service.generic-application-history.enabled=true
datanode3 | - Setting yarn.log-aggregation-enable=true
datanode3 | - Setting yarn.resourcemanager.hostname=resourcemanager
datanode3 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
datanode3 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
datanode3 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
datanode3 | - Setting yarn.timeline-service.hostname=historyserver
datanode3 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
datanode3 | - Setting yarn.resourcemanager.address=resourcemanager:8032
datanode3 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
datanode3 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
datanode3 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
datanode3 | - Setting mapreduce.map.output.compress=true
datanode3 | - Setting yarn.nodemanager.resource.memory-mb=16384
datanode3 | - Setting yarn.resourcemanager.recovery.enabled=true
datanode3 | - Setting yarn.nodemanager.resource.cpu-vcores=8
datanode3 | Configuring httpfs
datanode3 | Configuring kms
datanode3 | Configuring mapred
datanode3 | - Setting mapreduce.map.java.opts=-Xmx3072m
datanode3 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
datanode3 | - Setting mapreduce.reduce.memory.mb=8192
datanode3 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode3 | - Setting mapreduce.map.memory.mb=4096
datanode3 | - Setting mapred.child.java.opts=-Xmx4096m
datanode3 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode3 | - Setting mapreduce.framework.name=yarn
datanode3 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode3 | Configuring for multihomed network
datanode3 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
datanode3 | /entrypoint.sh: line 117: /run.sh: Success
datanode3 | Configuring core
datanode3 | - Setting hadoop.proxyuser.hue.hosts=*
datanode3 | - Setting fs.defaultFS=hdfs://namenode:9000
datanode3 | - Setting hadoop.http.staticuser.user=root
datanode3 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
datanode3 | - Setting hadoop.proxyuser.hue.groups=*
datanode3 | Configuring hdfs
datanode3 | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
datanode3 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
datanode3 | - Setting dfs.webhdfs.enabled=true
datanode3 | - Setting dfs.permissions.enabled=false
datanode3 | Configuring yarn
datanode3 | - Setting yarn.timeline-service.enabled=true
datanode3 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
datanode3 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
datanode3 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
datanode3 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
datanode3 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
datanode3 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
datanode3 | - Setting yarn.timeline-service.generic-application-history.enabled=true
datanode3 | - Setting yarn.log-aggregation-enable=true
datanode3 | - Setting yarn.resourcemanager.hostname=resourcemanager
datanode3 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
datanode3 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
datanode3 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
datanode3 | - Setting yarn.timeline-service.hostname=historyserver
datanode3 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
datanode3 | - Setting yarn.resourcemanager.address=resourcemanager:8032
datanode3 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
datanode3 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
datanode3 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
datanode3 | - Setting mapreduce.map.output.compress=true
datanode3 | - Setting yarn.nodemanager.resource.memory-mb=16384
datanode3 | - Setting yarn.resourcemanager.recovery.enabled=true
datanode3 | - Setting yarn.nodemanager.resource.cpu-vcores=8
datanode3 | Configuring httpfs
datanode3 | Configuring kms
datanode3 | Configuring mapred
datanode3 | - Setting mapreduce.map.java.opts=-Xmx3072m
datanode3 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
datanode3 | - Setting mapreduce.reduce.memory.mb=8192
datanode3 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode3 | - Setting mapreduce.map.memory.mb=4096
datanode3 | - Setting mapred.child.java.opts=-Xmx4096m
datanode3 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode3 | - Setting mapreduce.framework.name=yarn
datanode3 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode3 | Configuring for multihomed network
datanode3 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
datanode3 | /entrypoint.sh: line 117: /run.sh: Success
datanode1 | Configuring core
datanode1 | - Setting hadoop.proxyuser.hue.hosts=*
datanode1 | - Setting fs.defaultFS=hdfs://namenode:9000
datanode1 | - Setting hadoop.http.staticuser.user=root
datanode1 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
datanode1 | - Setting hadoop.proxyuser.hue.groups=*
datanode1 | Configuring hdfs
datanode1 | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
datanode1 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
datanode1 | - Setting dfs.webhdfs.enabled=true
datanode1 | - Setting dfs.permissions.enabled=false
datanode1 | Configuring yarn
datanode1 | - Setting yarn.timeline-service.enabled=true
datanode1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
datanode1 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
datanode1 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
datanode1 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
datanode1 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
datanode1 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
datanode1 | - Setting yarn.timeline-service.generic-application-history.enabled=true
datanode1 | - Setting yarn.log-aggregation-enable=true
datanode1 | - Setting yarn.resourcemanager.hostname=resourcemanager
datanode1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
datanode1 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
datanode1 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
datanode1 | - Setting yarn.timeline-service.hostname=historyserver
datanode1 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
datanode1 | - Setting yarn.resourcemanager.address=resourcemanager:8032
datanode1 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
datanode1 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
datanode1 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
datanode1 | - Setting mapreduce.map.output.compress=true
datanode1 | - Setting yarn.nodemanager.resource.memory-mb=16384
datanode1 | - Setting yarn.resourcemanager.recovery.enabled=true
datanode1 | - Setting yarn.nodemanager.resource.cpu-vcores=8
datanode1 | Configuring httpfs
datanode1 | Configuring kms
datanode1 | Configuring mapred
datanode1 | - Setting mapreduce.map.java.opts=-Xmx3072m
datanode1 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
datanode1 | - Setting mapreduce.reduce.memory.mb=8192
datanode1 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode1 | - Setting mapreduce.map.memory.mb=4096
datanode1 | - Setting mapred.child.java.opts=-Xmx4096m
datanode1 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode1 | - Setting mapreduce.framework.name=yarn
datanode1 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode1 | Configuring for multihomed network
datanode1 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
datanode1 | /entrypoint.sh: line 117: /run.sh: Success
datanode1 | Configuring core
datanode1 | - Setting hadoop.proxyuser.hue.hosts=*
datanode1 | - Setting fs.defaultFS=hdfs://namenode:9000
datanode1 | - Setting hadoop.http.staticuser.user=root
datanode1 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
datanode1 | - Setting hadoop.proxyuser.hue.groups=*
datanode1 | Configuring hdfs
datanode1 | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
datanode1 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
datanode1 | - Setting dfs.webhdfs.enabled=true
datanode1 | - Setting dfs.permissions.enabled=false
datanode1 | Configuring yarn
datanode1 | - Setting yarn.timeline-service.enabled=true
datanode1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
datanode1 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
datanode1 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
datanode1 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
datanode1 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
datanode1 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
datanode1 | - Setting yarn.timeline-service.generic-application-history.enabled=true
datanode1 | - Setting yarn.log-aggregation-enable=true
datanode1 | - Setting yarn.resourcemanager.hostname=resourcemanager
datanode1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
datanode1 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
datanode1 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
datanode1 | - Setting yarn.timeline-service.hostname=historyserver
datanode1 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
datanode1 | - Setting yarn.resourcemanager.address=resourcemanager:8032
datanode1 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
datanode1 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
datanode1 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
datanode1 | - Setting mapreduce.map.output.compress=true
datanode1 | - Setting yarn.nodemanager.resource.memory-mb=16384
datanode1 | - Setting yarn.resourcemanager.recovery.enabled=true
datanode1 | - Setting yarn.nodemanager.resource.cpu-vcores=8
datanode1 | Configuring httpfs
datanode1 | Configuring kms
datanode1 | Configuring mapred
datanode1 | - Setting mapreduce.map.java.opts=-Xmx3072m
datanode1 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
datanode1 | - Setting mapreduce.reduce.memory.mb=8192
datanode1 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode1 | - Setting mapreduce.map.memory.mb=4096
datanode1 | - Setting mapred.child.java.opts=-Xmx4096m
datanode1 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode1 | - Setting mapreduce.framework.name=yarn
datanode1 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode1 | Configuring for multihomed network
datanode1 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
datanode1 | /entrypoint.sh: line 117: /run.sh: Success
datanode2 | Configuring core
datanode2 | - Setting hadoop.proxyuser.hue.hosts=*
datanode2 | - Setting fs.defaultFS=hdfs://namenode:9000
datanode2 | - Setting hadoop.http.staticuser.user=root
datanode2 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
datanode2 | - Setting hadoop.proxyuser.hue.groups=*
datanode2 | Configuring hdfs
datanode2 | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
datanode2 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
datanode2 | - Setting dfs.webhdfs.enabled=true
datanode2 | - Setting dfs.permissions.enabled=false
datanode2 | Configuring yarn
datanode2 | - Setting yarn.timeline-service.enabled=true
datanode2 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
datanode2 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
datanode2 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
datanode2 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
datanode2 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
datanode2 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
datanode2 | - Setting yarn.timeline-service.generic-application-history.enabled=true
datanode2 | - Setting yarn.log-aggregation-enable=true
datanode2 | - Setting yarn.resourcemanager.hostname=resourcemanager
datanode2 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
datanode2 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
datanode2 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
datanode2 | - Setting yarn.timeline-service.hostname=historyserver
datanode2 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
datanode2 | - Setting yarn.resourcemanager.address=resourcemanager:8032
datanode2 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
datanode2 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
datanode2 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
datanode2 | - Setting mapreduce.map.output.compress=true
datanode2 | - Setting yarn.nodemanager.resource.memory-mb=16384
datanode2 | - Setting yarn.resourcemanager.recovery.enabled=true
datanode2 | - Setting yarn.nodemanager.resource.cpu-vcores=8
datanode2 | Configuring httpfs
datanode2 | Configuring kms
datanode2 | Configuring mapred
datanode2 | - Setting mapreduce.map.java.opts=-Xmx3072m
datanode2 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
datanode2 | - Setting mapreduce.reduce.memory.mb=8192
datanode2 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode2 | - Setting mapreduce.map.memory.mb=4096
datanode2 | - Setting mapred.child.java.opts=-Xmx4096m
datanode2 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode2 | - Setting mapreduce.framework.name=yarn
datanode2 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode2 | Configuring for multihomed network
datanode2 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
datanode2 | /entrypoint.sh: line 117: /run.sh: Success
datanode2 | Configuring core
datanode2 | - Setting hadoop.proxyuser.hue.hosts=*
datanode2 | - Setting fs.defaultFS=hdfs://namenode:9000
datanode2 | - Setting hadoop.http.staticuser.user=root
datanode2 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
datanode2 | - Setting hadoop.proxyuser.hue.groups=*
datanode2 | Configuring hdfs
datanode2 | - Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
datanode2 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
datanode2 | - Setting dfs.webhdfs.enabled=true
datanode2 | - Setting dfs.permissions.enabled=false
datanode2 | Configuring yarn
datanode2 | - Setting yarn.timeline-service.enabled=true
datanode2 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
datanode2 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
datanode2 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
datanode2 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
datanode2 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
datanode2 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
datanode2 | - Setting yarn.timeline-service.generic-application-history.enabled=true
datanode2 | - Setting yarn.log-aggregation-enable=true
datanode2 | - Setting yarn.resourcemanager.hostname=resourcemanager
datanode2 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
datanode2 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
datanode2 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
datanode2 | - Setting yarn.timeline-service.hostname=historyserver
datanode2 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
datanode2 | - Setting yarn.resourcemanager.address=resourcemanager:8032
datanode2 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
datanode2 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
datanode2 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
datanode2 | - Setting mapreduce.map.output.compress=true
datanode2 | - Setting yarn.nodemanager.resource.memory-mb=16384
datanode2 | - Setting yarn.resourcemanager.recovery.enabled=true
datanode2 | - Setting yarn.nodemanager.resource.cpu-vcores=8
datanode2 | Configuring httpfs
datanode2 | Configuring kms
datanode2 | Configuring mapred
datanode2 | - Setting mapreduce.map.java.opts=-Xmx3072m
datanode2 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
datanode2 | - Setting mapreduce.reduce.memory.mb=8192
datanode2 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode2 | - Setting mapreduce.map.memory.mb=4096
datanode2 | - Setting mapred.child.java.opts=-Xmx4096m
datanode2 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode2 | - Setting mapreduce.framework.name=yarn
datanode2 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
datanode2 | Configuring for multihomed network
datanode2 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
datanode2 | /entrypoint.sh: line 117: /run.sh: Success
historyserver | Configuring core
historyserver | - Setting hadoop.proxyuser.hue.hosts=*
historyserver | - Setting fs.defaultFS=hdfs://namenode:9000
historyserver | - Setting hadoop.http.staticuser.user=root
historyserver | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
historyserver | - Setting hadoop.proxyuser.hue.groups=*
historyserver | Configuring hdfs
historyserver | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
historyserver | - Setting dfs.webhdfs.enabled=true
historyserver | - Setting dfs.permissions.enabled=false
historyserver | Configuring yarn
historyserver | - Setting yarn.timeline-service.enabled=true
historyserver | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
historyserver | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
historyserver | - Setting yarn.timeline-service.leveldb-timeline-store.path=/hadoop/yarn/timeline
historyserver | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
historyserver | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
historyserver | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
historyserver | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
historyserver | - Setting yarn.timeline-service.generic-application-history.enabled=true
historyserver | - Setting yarn.log-aggregation-enable=true
historyserver | - Setting yarn.resourcemanager.hostname=resourcemanager
historyserver | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
historyserver | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
historyserver | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
historyserver | - Setting yarn.timeline-service.hostname=historyserver
historyserver | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
historyserver | - Setting yarn.resourcemanager.address=resourcemanager:8032
historyserver | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
historyserver | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
historyserver | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
historyserver | - Setting mapreduce.map.output.compress=true
historyserver | - Setting yarn.nodemanager.resource.memory-mb=16384
historyserver | - Setting yarn.resourcemanager.recovery.enabled=true
historyserver | - Setting yarn.nodemanager.resource.cpu-vcores=8
historyserver | Configuring httpfs
historyserver | Configuring kms
historyserver | Configuring mapred
historyserver | - Setting mapreduce.map.java.opts=-Xmx3072m
historyserver | - Setting mapreduce.reduce.java.opts=-Xmx6144m
historyserver | - Setting mapreduce.reduce.memory.mb=8192
historyserver | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
historyserver | - Setting mapreduce.map.memory.mb=4096
historyserver | - Setting mapred.child.java.opts=-Xmx4096m
historyserver | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
historyserver | - Setting mapreduce.framework.name=yarn
historyserver | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
historyserver | Configuring for multihomed network
historyserver | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
historyserver | /entrypoint.sh: line 117: /run.sh: Success
historyserver | Configuring core
historyserver | - Setting hadoop.proxyuser.hue.hosts=*
historyserver | - Setting fs.defaultFS=hdfs://namenode:9000
historyserver | - Setting hadoop.http.staticuser.user=root
historyserver | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
historyserver | - Setting hadoop.proxyuser.hue.groups=*
historyserver | Configuring hdfs
historyserver | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
historyserver | - Setting dfs.webhdfs.enabled=true
historyserver | - Setting dfs.permissions.enabled=false
historyserver | Configuring yarn
historyserver | - Setting yarn.timeline-service.enabled=true
historyserver | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
historyserver | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
historyserver | - Setting yarn.timeline-service.leveldb-timeline-store.path=/hadoop/yarn/timeline
historyserver | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
historyserver | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
historyserver | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
historyserver | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
historyserver | - Setting yarn.timeline-service.generic-application-history.enabled=true
historyserver | - Setting yarn.log-aggregation-enable=true
historyserver | - Setting yarn.resourcemanager.hostname=resourcemanager
historyserver | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
historyserver | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
historyserver | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
historyserver | - Setting yarn.timeline-service.hostname=historyserver
historyserver | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
historyserver | - Setting yarn.resourcemanager.address=resourcemanager:8032
historyserver | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
historyserver | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
historyserver | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
historyserver | - Setting mapreduce.map.output.compress=true
historyserver | - Setting yarn.nodemanager.resource.memory-mb=16384
historyserver | - Setting yarn.resourcemanager.recovery.enabled=true
historyserver | - Setting yarn.nodemanager.resource.cpu-vcores=8
historyserver | Configuring httpfs
historyserver | Configuring kms
historyserver | Configuring mapred
historyserver | - Setting mapreduce.map.java.opts=-Xmx3072m
historyserver | - Setting mapreduce.reduce.java.opts=-Xmx6144m
historyserver | - Setting mapreduce.reduce.memory.mb=8192
historyserver | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
historyserver | - Setting mapreduce.map.memory.mb=4096
historyserver | - Setting mapred.child.java.opts=-Xmx4096m
historyserver | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
historyserver | - Setting mapreduce.framework.name=yarn
historyserver | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
historyserver | Configuring for multihomed network
historyserver | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
historyserver | /entrypoint.sh: line 117: /run.sh: Success
namenode | Configuring core
namenode | - Setting hadoop.proxyuser.hue.hosts=*
namenode | - Setting fs.defaultFS=hdfs://namenode:9000
namenode | - Setting hadoop.http.staticuser.user=root
namenode | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
namenode | - Setting hadoop.proxyuser.hue.groups=*
namenode | Configuring hdfs
namenode | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
namenode | - Setting dfs.webhdfs.enabled=true
namenode | - Setting dfs.permissions.enabled=false
namenode | - Setting dfs.namenode.name.dir=file:///hadoop/dfs/name
namenode | Configuring yarn
namenode | - Setting yarn.timeline-service.enabled=true
namenode | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
namenode | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
namenode | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
namenode | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
namenode | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
namenode | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
namenode | - Setting yarn.timeline-service.generic-application-history.enabled=true
namenode | - Setting yarn.log-aggregation-enable=true
namenode | - Setting yarn.resourcemanager.hostname=resourcemanager
namenode | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
namenode | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
namenode | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
namenode | - Setting yarn.timeline-service.hostname=historyserver
namenode | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
namenode | - Setting yarn.resourcemanager.address=resourcemanager:8032
namenode | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
namenode | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
namenode | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
namenode | - Setting mapreduce.map.output.compress=true
namenode | - Setting yarn.nodemanager.resource.memory-mb=16384
namenode | - Setting yarn.resourcemanager.recovery.enabled=true
namenode | - Setting yarn.nodemanager.resource.cpu-vcores=8
namenode | Configuring httpfs
namenode | Configuring kms
namenode | Configuring mapred
namenode | - Setting mapreduce.map.java.opts=-Xmx3072m
namenode | - Setting mapreduce.reduce.java.opts=-Xmx6144m
namenode | - Setting mapreduce.reduce.memory.mb=8192
namenode | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
namenode | - Setting mapreduce.map.memory.mb=4096
namenode | - Setting mapred.child.java.opts=-Xmx4096m
namenode | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
namenode | - Setting mapreduce.framework.name=yarn
namenode | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
namenode | Configuring for multihomed network
namenode | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
namenode | /entrypoint.sh: line 117: /run.sh: Success
namenode | Configuring core
namenode | - Setting hadoop.proxyuser.hue.hosts=*
namenode | - Setting fs.defaultFS=hdfs://namenode:9000
namenode | - Setting hadoop.http.staticuser.user=root
namenode | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
namenode | - Setting hadoop.proxyuser.hue.groups=*
namenode | Configuring hdfs
namenode | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
namenode | - Setting dfs.webhdfs.enabled=true
namenode | - Setting dfs.permissions.enabled=false
namenode | - Setting dfs.namenode.name.dir=file:///hadoop/dfs/name
namenode | Configuring yarn
namenode | - Setting yarn.timeline-service.enabled=true
namenode | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
namenode | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
namenode | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
namenode | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
namenode | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
namenode | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
namenode | - Setting yarn.timeline-service.generic-application-history.enabled=true
namenode | - Setting yarn.log-aggregation-enable=true
namenode | - Setting yarn.resourcemanager.hostname=resourcemanager
namenode | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
namenode | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
namenode | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
namenode | - Setting yarn.timeline-service.hostname=historyserver
namenode | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
namenode | - Setting yarn.resourcemanager.address=resourcemanager:8032
namenode | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
namenode | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
namenode | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
namenode | - Setting mapreduce.map.output.compress=true
namenode | - Setting yarn.nodemanager.resource.memory-mb=16384
namenode | - Setting yarn.resourcemanager.recovery.enabled=true
namenode | - Setting yarn.nodemanager.resource.cpu-vcores=8
namenode | Configuring httpfs
namenode | Configuring kms
namenode | Configuring mapred
namenode | - Setting mapreduce.map.java.opts=-Xmx3072m
namenode | - Setting mapreduce.reduce.java.opts=-Xmx6144m
namenode | - Setting mapreduce.reduce.memory.mb=8192
namenode | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
namenode | - Setting mapreduce.map.memory.mb=4096
namenode | - Setting mapred.child.java.opts=-Xmx4096m
namenode | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
namenode | - Setting mapreduce.framework.name=yarn
namenode | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
namenode | Configuring for multihomed network
namenode | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
namenode | /entrypoint.sh: line 117: /run.sh: Success
nodemanager1 | Configuring core
nodemanager1 | - Setting hadoop.proxyuser.hue.hosts=*
nodemanager1 | - Setting fs.defaultFS=hdfs://namenode:9000
nodemanager1 | - Setting hadoop.http.staticuser.user=root
nodemanager1 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
nodemanager1 | - Setting hadoop.proxyuser.hue.groups=*
nodemanager1 | Configuring hdfs
nodemanager1 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
nodemanager1 | - Setting dfs.webhdfs.enabled=true
nodemanager1 | - Setting dfs.permissions.enabled=false
nodemanager1 | Configuring yarn
nodemanager1 | - Setting yarn.timeline-service.enabled=true
nodemanager1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
nodemanager1 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
nodemanager1 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
nodemanager1 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
nodemanager1 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
nodemanager1 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
nodemanager1 | - Setting yarn.timeline-service.generic-application-history.enabled=true
nodemanager1 | - Setting yarn.log-aggregation-enable=true
nodemanager1 | - Setting yarn.resourcemanager.hostname=resourcemanager
nodemanager1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
nodemanager1 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
nodemanager1 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
nodemanager1 | - Setting yarn.timeline-service.hostname=historyserver
nodemanager1 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
nodemanager1 | - Setting yarn.resourcemanager.address=resourcemanager:8032
nodemanager1 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
nodemanager1 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
nodemanager1 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
nodemanager1 | - Setting mapreduce.map.output.compress=true
nodemanager1 | - Setting yarn.nodemanager.resource.memory-mb=16384
nodemanager1 | - Setting yarn.resourcemanager.recovery.enabled=true
nodemanager1 | - Setting yarn.nodemanager.resource.cpu-vcores=8
nodemanager1 | Configuring httpfs
nodemanager1 | Configuring kms
nodemanager1 | Configuring mapred
nodemanager1 | - Setting mapreduce.map.java.opts=-Xmx3072m
nodemanager1 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
nodemanager1 | - Setting mapreduce.reduce.memory.mb=8192
nodemanager1 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
nodemanager1 | - Setting mapreduce.map.memory.mb=4096
nodemanager1 | - Setting mapred.child.java.opts=-Xmx4096m
nodemanager1 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
nodemanager1 | - Setting mapreduce.framework.name=yarn
nodemanager1 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
nodemanager1 | Configuring for multihomed network
nodemanager1 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
nodemanager1 | /entrypoint.sh: line 117: /run.sh: Success
nodemanager1 | Configuring core
nodemanager1 | - Setting hadoop.proxyuser.hue.hosts=*
nodemanager1 | - Setting fs.defaultFS=hdfs://namenode:9000
nodemanager1 | - Setting hadoop.http.staticuser.user=root
nodemanager1 | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
nodemanager1 | - Setting hadoop.proxyuser.hue.groups=*
nodemanager1 | Configuring hdfs
nodemanager1 | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
nodemanager1 | - Setting dfs.webhdfs.enabled=true
nodemanager1 | - Setting dfs.permissions.enabled=false
nodemanager1 | Configuring yarn
nodemanager1 | - Setting yarn.timeline-service.enabled=true
nodemanager1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
nodemanager1 | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
nodemanager1 | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
nodemanager1 | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
nodemanager1 | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
nodemanager1 | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
nodemanager1 | - Setting yarn.timeline-service.generic-application-history.enabled=true
nodemanager1 | - Setting yarn.log-aggregation-enable=true
nodemanager1 | - Setting yarn.resourcemanager.hostname=resourcemanager
nodemanager1 | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
nodemanager1 | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
nodemanager1 | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
nodemanager1 | - Setting yarn.timeline-service.hostname=historyserver
nodemanager1 | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
nodemanager1 | - Setting yarn.resourcemanager.address=resourcemanager:8032
nodemanager1 | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
nodemanager1 | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
nodemanager1 | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
nodemanager1 | - Setting mapreduce.map.output.compress=true
nodemanager1 | - Setting yarn.nodemanager.resource.memory-mb=16384
nodemanager1 | - Setting yarn.resourcemanager.recovery.enabled=true
nodemanager1 | - Setting yarn.nodemanager.resource.cpu-vcores=8
nodemanager1 | Configuring httpfs
nodemanager1 | Configuring kms
nodemanager1 | Configuring mapred
nodemanager1 | - Setting mapreduce.map.java.opts=-Xmx3072m
nodemanager1 | - Setting mapreduce.reduce.java.opts=-Xmx6144m
nodemanager1 | - Setting mapreduce.reduce.memory.mb=8192
nodemanager1 | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
nodemanager1 | - Setting mapreduce.map.memory.mb=4096
nodemanager1 | - Setting mapred.child.java.opts=-Xmx4096m
nodemanager1 | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
nodemanager1 | - Setting mapreduce.framework.name=yarn
nodemanager1 | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
nodemanager1 | Configuring for multihomed network
nodemanager1 | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
nodemanager1 | /entrypoint.sh: line 117: /run.sh: Success
resourcemanager | Configuring core
resourcemanager | - Setting hadoop.proxyuser.hue.hosts=*
resourcemanager | - Setting fs.defaultFS=hdfs://namenode:9000
resourcemanager | - Setting hadoop.http.staticuser.user=root
resourcemanager | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting hadoop.proxyuser.hue.groups=*
resourcemanager | Configuring hdfs
resourcemanager | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
resourcemanager | - Setting dfs.webhdfs.enabled=true
resourcemanager | - Setting dfs.permissions.enabled=false
resourcemanager | Configuring yarn
resourcemanager | - Setting yarn.timeline-service.enabled=true
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
resourcemanager | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
resourcemanager | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
resourcemanager | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
resourcemanager | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
resourcemanager | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
resourcemanager | - Setting yarn.timeline-service.generic-application-history.enabled=true
resourcemanager | - Setting yarn.log-aggregation-enable=true
resourcemanager | - Setting yarn.resourcemanager.hostname=resourcemanager
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
resourcemanager | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
resourcemanager | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
resourcemanager | - Setting yarn.timeline-service.hostname=historyserver
resourcemanager | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
resourcemanager | - Setting yarn.resourcemanager.address=resourcemanager:8032
resourcemanager | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
resourcemanager | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
resourcemanager | - Setting mapreduce.map.output.compress=true
resourcemanager | - Setting yarn.nodemanager.resource.memory-mb=16384
resourcemanager | - Setting yarn.resourcemanager.recovery.enabled=true
resourcemanager | - Setting yarn.nodemanager.resource.cpu-vcores=8
resourcemanager | Configuring httpfs
resourcemanager | Configuring kms
resourcemanager | Configuring mapred
resourcemanager | - Setting mapreduce.map.java.opts=-Xmx3072m
resourcemanager | - Setting mapreduce.reduce.java.opts=-Xmx6144m
resourcemanager | - Setting mapreduce.reduce.memory.mb=8192
resourcemanager | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.map.memory.mb=4096
resourcemanager | - Setting mapred.child.java.opts=-Xmx4096m
resourcemanager | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.framework.name=yarn
resourcemanager | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | Configuring for multihomed network
resourcemanager | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
resourcemanager | /entrypoint.sh: line 117: /run.sh: Success
resourcemanager | Configuring core
resourcemanager | - Setting hadoop.proxyuser.hue.hosts=*
resourcemanager | - Setting fs.defaultFS=hdfs://namenode:9000
resourcemanager | - Setting hadoop.http.staticuser.user=root
resourcemanager | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting hadoop.proxyuser.hue.groups=*
resourcemanager | Configuring hdfs
resourcemanager | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
resourcemanager | - Setting dfs.webhdfs.enabled=true
resourcemanager | - Setting dfs.permissions.enabled=false
resourcemanager | Configuring yarn
resourcemanager | - Setting yarn.timeline-service.enabled=true
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
resourcemanager | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
resourcemanager | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
resourcemanager | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
resourcemanager | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
resourcemanager | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
resourcemanager | - Setting yarn.timeline-service.generic-application-history.enabled=true
resourcemanager | - Setting yarn.log-aggregation-enable=true
resourcemanager | - Setting yarn.resourcemanager.hostname=resourcemanager
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
resourcemanager | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
resourcemanager | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
resourcemanager | - Setting yarn.timeline-service.hostname=historyserver
resourcemanager | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
resourcemanager | - Setting yarn.resourcemanager.address=resourcemanager:8032
resourcemanager | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
resourcemanager | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
resourcemanager | - Setting mapreduce.map.output.compress=true
resourcemanager | - Setting yarn.nodemanager.resource.memory-mb=16384
resourcemanager | - Setting yarn.resourcemanager.recovery.enabled=true
resourcemanager | - Setting yarn.nodemanager.resource.cpu-vcores=8
resourcemanager | Configuring httpfs
resourcemanager | Configuring kms
resourcemanager | Configuring mapred
resourcemanager | - Setting mapreduce.map.java.opts=-Xmx3072m
resourcemanager | - Setting mapreduce.reduce.java.opts=-Xmx6144m
resourcemanager | - Setting mapreduce.reduce.memory.mb=8192
resourcemanager | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.map.memory.mb=4096
resourcemanager | - Setting mapred.child.java.opts=-Xmx4096m
resourcemanager | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.framework.name=yarn
resourcemanager | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | Configuring for multihomed network
resourcemanager | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
resourcemanager | /entrypoint.sh: line 117: /run.sh: Success
resourcemanager | Configuring core
resourcemanager | - Setting hadoop.proxyuser.hue.hosts=*
resourcemanager | - Setting fs.defaultFS=hdfs://namenode:9000
resourcemanager | - Setting hadoop.http.staticuser.user=root
resourcemanager | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting hadoop.proxyuser.hue.groups=*
resourcemanager | Configuring hdfs
resourcemanager | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
resourcemanager | - Setting dfs.webhdfs.enabled=true
resourcemanager | - Setting dfs.permissions.enabled=false
resourcemanager | Configuring yarn
resourcemanager | - Setting yarn.timeline-service.enabled=true
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
resourcemanager | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
resourcemanager | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
resourcemanager | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
resourcemanager | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
resourcemanager | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
resourcemanager | - Setting yarn.timeline-service.generic-application-history.enabled=true
resourcemanager | - Setting yarn.log-aggregation-enable=true
resourcemanager | - Setting yarn.resourcemanager.hostname=resourcemanager
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
resourcemanager | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
resourcemanager | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
resourcemanager | - Setting yarn.timeline-service.hostname=historyserver
resourcemanager | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
resourcemanager | - Setting yarn.resourcemanager.address=resourcemanager:8032
resourcemanager | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
resourcemanager | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
resourcemanager | - Setting mapreduce.map.output.compress=true
resourcemanager | - Setting yarn.nodemanager.resource.memory-mb=16384
resourcemanager | - Setting yarn.resourcemanager.recovery.enabled=true
resourcemanager | - Setting yarn.nodemanager.resource.cpu-vcores=8
resourcemanager | Configuring httpfs
resourcemanager | Configuring kms
resourcemanager | Configuring mapred
resourcemanager | - Setting mapreduce.map.java.opts=-Xmx3072m
resourcemanager | - Setting mapreduce.reduce.java.opts=-Xmx6144m
resourcemanager | - Setting mapreduce.reduce.memory.mb=8192
resourcemanager | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.map.memory.mb=4096
resourcemanager | - Setting mapred.child.java.opts=-Xmx4096m
resourcemanager | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.framework.name=yarn
resourcemanager | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | Configuring for multihomed network
resourcemanager | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
resourcemanager | /entrypoint.sh: line 117: /run.sh: Success
resourcemanager | Configuring core
resourcemanager | - Setting hadoop.proxyuser.hue.hosts=*
resourcemanager | - Setting fs.defaultFS=hdfs://namenode:9000
resourcemanager | - Setting hadoop.http.staticuser.user=root
resourcemanager | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting hadoop.proxyuser.hue.groups=*
resourcemanager | Configuring hdfs
resourcemanager | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
resourcemanager | - Setting dfs.webhdfs.enabled=true
resourcemanager | - Setting dfs.permissions.enabled=false
resourcemanager | Configuring yarn
resourcemanager | - Setting yarn.timeline-service.enabled=true
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
resourcemanager | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
resourcemanager | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
resourcemanager | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
resourcemanager | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
resourcemanager | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
resourcemanager | - Setting yarn.timeline-service.generic-application-history.enabled=true
resourcemanager | - Setting yarn.log-aggregation-enable=true
resourcemanager | - Setting yarn.resourcemanager.hostname=resourcemanager
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
resourcemanager | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
resourcemanager | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
resourcemanager | - Setting yarn.timeline-service.hostname=historyserver
resourcemanager | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
resourcemanager | - Setting yarn.resourcemanager.address=resourcemanager:8032
resourcemanager | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
resourcemanager | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
resourcemanager | - Setting mapreduce.map.output.compress=true
resourcemanager | - Setting yarn.nodemanager.resource.memory-mb=16384
resourcemanager | - Setting yarn.resourcemanager.recovery.enabled=true
resourcemanager | - Setting yarn.nodemanager.resource.cpu-vcores=8
resourcemanager | Configuring httpfs
resourcemanager | Configuring kms
resourcemanager | Configuring mapred
resourcemanager | - Setting mapreduce.map.java.opts=-Xmx3072m
resourcemanager | - Setting mapreduce.reduce.java.opts=-Xmx6144m
resourcemanager | - Setting mapreduce.reduce.memory.mb=8192
resourcemanager | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.map.memory.mb=4096
resourcemanager | - Setting mapred.child.java.opts=-Xmx4096m
resourcemanager | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.framework.name=yarn
resourcemanager | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | Configuring for multihomed network
resourcemanager | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
resourcemanager | /entrypoint.sh: line 117: /run.sh: Success
resourcemanager | Configuring core
resourcemanager | - Setting hadoop.proxyuser.hue.hosts=*
resourcemanager | - Setting fs.defaultFS=hdfs://namenode:9000
resourcemanager | - Setting hadoop.http.staticuser.user=root
resourcemanager | - Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting hadoop.proxyuser.hue.groups=*
resourcemanager | Configuring hdfs
resourcemanager | - Setting dfs.namenode.datanode.registration.ip-hostname-check=false
resourcemanager | - Setting dfs.webhdfs.enabled=true
resourcemanager | - Setting dfs.permissions.enabled=false
resourcemanager | Configuring yarn
resourcemanager | - Setting yarn.timeline-service.enabled=true
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
resourcemanager | - Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
resourcemanager | - Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
resourcemanager | - Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
resourcemanager | - Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
resourcemanager | - Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
resourcemanager | - Setting yarn.timeline-service.generic-application-history.enabled=true
resourcemanager | - Setting yarn.log-aggregation-enable=true
resourcemanager | - Setting yarn.resourcemanager.hostname=resourcemanager
resourcemanager | - Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
resourcemanager | - Setting yarn.nodemanager.aux-services=mapreduce_shuffle
resourcemanager | - Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
resourcemanager | - Setting yarn.timeline-service.hostname=historyserver
resourcemanager | - Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
resourcemanager | - Setting yarn.resourcemanager.address=resourcemanager:8032
resourcemanager | - Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
resourcemanager | - Setting yarn.nodemanager.remote-app-log-dir=/app-logs
resourcemanager | - Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
resourcemanager | - Setting mapreduce.map.output.compress=true
resourcemanager | - Setting yarn.nodemanager.resource.memory-mb=16384
resourcemanager | - Setting yarn.resourcemanager.recovery.enabled=true
resourcemanager | - Setting yarn.nodemanager.resource.cpu-vcores=8
resourcemanager | Configuring httpfs
resourcemanager | Configuring kms
resourcemanager | Configuring mapred
resourcemanager | - Setting mapreduce.map.java.opts=-Xmx3072m
resourcemanager | - Setting mapreduce.reduce.java.opts=-Xmx6144m
resourcemanager | - Setting mapreduce.reduce.memory.mb=8192
resourcemanager | - Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.map.memory.mb=4096
resourcemanager | - Setting mapred.child.java.opts=-Xmx4096m
resourcemanager | - Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | - Setting mapreduce.framework.name=yarn
resourcemanager | - Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
resourcemanager | Configuring for multihomed network
resourcemanager | /entrypoint.sh: /run.sh: /bin/bash^M: bad interpreter: No such file or directory
resourcemanager | /entrypoint.sh: line 117: /run.sh: Success

@nathan815
Author

@Fyroze Ah, I have seen that error before. Yep, it is due to Windows (CRLF) line endings in the file; the ^M in the error message is the stray carriage return. You could either use an IDE/text editor or a tool like dos2unix to convert the line endings to Unix style :)
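For example, a minimal sketch assuming dos2unix is installed and you run it from the root of your docker-hadoop checkout (exact file locations may differ):

# convert every shell script in the checkout to Unix (LF) line endings
find . -name "*.sh" -exec dos2unix {} +

# alternatively, have Git keep LF endings and re-checkout the working tree
git config core.autocrlf input
git rm --cached -r .
git reset --hard

If the images were already built from these directories, rebuild them afterwards so the converted scripts are copied back into the images, e.g. docker-compose build --no-cache followed by docker-compose up -d.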

@sasdahab

Hello, I have the same problem as Fyroze, and even after applying the dos2unix command, the problem persists.
I'm working on Windows 10.
Could you help me?

Best regards

@Fyroze

Fyroze commented Nov 25, 2020 via email

@sasdahab

It works! Thank you very much @Fyroze!!

@sasdahab

Apologies @Fyroze, it doesn't work with three datanodes, and I get the same log errors. But with one datanode, it works fine.
Do you have any idea of the cause of the problem?

Best regards

@Fyroze

Fyroze commented Nov 25, 2020 via email

@sasdahab

Ok, thank you!
