beta-hdfs docs

API Reference

The DC/OS HDFS Service implements a REST API that may be accessed from outside the cluster. The <dcos_url> parameter referenced below indicates the base URL of the DC/OS cluster on which the HDFS Service is deployed.

REST API Authentication

REST API requests must be authenticated. This authentication is only applicable for interacting with the HDFS REST API directly. You do not need the token to access the HDFS nodes themselves.

If you are using Enterprise DC/OS, follow these instructions to create a service account and an authentication token. You can then configure your service to automatically refresh the authentication token when it expires. To get started more quickly, you can also get the authentication token without a service account, but you will need to manually refresh the token.

If you are using open source DC/OS, follow these instructions to pass your HTTP API token to the DC/OS endpoint.

Once you have the authentication token, you can store it in an environment variable and reference it in your REST API calls:

$ export auth_token=uSeR_t0k3n

The curl examples in this document assume that an auth token has been stored in an environment variable named auth_token.

If you are using Enterprise DC/OS, the security mode of your installation may also require the --cacert flag when making REST calls. Refer to Obtaining and passing the DC/OS certificate in cURL requests for information on how to use the --cacert flag. If your security mode is disabled, do not use the --cacert flag.
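
For example, a call against a cluster in permissive or strict mode might look like the following sketch, which assumes the cluster certificate has been downloaded to a local file named dcos-ca.crt (the file name is illustrative):

$ curl --cacert dcos-ca.crt -H "Authorization:token=$auth_token" https://<dcos_url>/service/hdfs/v1/plans/deploy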

Plan API

The Plan API provides endpoints for monitoring and controlling service installation and configuration updates.

View Deployment Plan

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/plans/deploy

Pause Installation

This request pauses the installation after the current node finishes installing; the service then waits for user input before continuing.

$ curl -X POST -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/plans/deploy/interrupt

Resume Installation

The REST API request below will resume installation at the next pending node.

$ curl -X PUT -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/plans/deploy/continue
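
Together, the interrupt and continue endpoints let you step through a deployment manually. A minimal sketch of that flow (the inspection step in the middle is illustrative):

$ curl -X POST -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/plans/deploy/interrupt
$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/plans/deploy   # inspect progress before resuming
$ curl -X PUT -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/plans/deploy/continue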

Connection API

The Connection API exposes the client configuration files needed to connect to the HDFS cluster:

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/endpoints/hdfs-site.xml
$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/endpoints/core-site.xml

You will see responses similar to the following. The hdfs-site.xml response:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.nameservice.id</name>
        <value>hdfs</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>hdfs</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.hdfs</name>
        <value>name-0-node,name-1-node</value>
    </property>

    <!-- namenode -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://journal-0-node.hdfs.autoip.dcos.thisdcos.directory:8485;journal-1-node.hdfs.autoip.dcos.thisdcos.directory:8485;journal-2-node.hdfs.autoip.dcos.thisdcos.directory:8485/hdfs</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/name-data</value>
    </property>
    <property>
        <name>dfs.namenode.safemode.threshold-pct</name>
        <value>0.9</value>
    </property>
    <property>
        <name>dfs.namenode.heartbeat.recheck-interval</name>
        <value>60000</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>20</value>
    </property>
    <property>
        <name>dfs.namenode.invalidate.work.pct.per.iteration</name>
        <value>0.95</value>
    </property>
    <property>
        <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
        <value>4</value>
    </property>
    <property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
    </property>


    <!-- name-0-node -->
    <property>
        <name>dfs.namenode.rpc-address.hdfs.name-0-node</name>
        <value>name-0-node.hdfs.autoip.dcos.thisdcos.directory:9001</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdfs.name-0-node</name>
        <value>name-0-node.hdfs.autoip.dcos.thisdcos.directory:9002</value>
    </property>
    <property>
        <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name>
        <value>0.0.0.0</value>
    </property>


    <!-- name-1-node -->
    <property>
        <name>dfs.namenode.rpc-address.hdfs.name-1-node</name>
        <value>name-1-node.hdfs.autoip.dcos.thisdcos.directory:9001</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdfs.name-1-node</name>
        <value>name-1-node.hdfs.autoip.dcos.thisdcos.directory:9002</value>
    </property>
    <property>
        <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name>
        <value>0.0.0.0</value>
    </property>

    <!-- journalnode -->
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/journal-data</value>
    </property>

    <!-- datanode -->
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:9003</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:9004</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:9005</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data-data</value>
    </property>
    <property>
        <name>dfs.datanode.balance.bandwidthPerSec</name>
        <value>41943040</value>
    </property>
    <property>
        <name>dfs.datanode.handler.count</name>
        <value>10</value>
    </property>

    <!-- HA -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master.mesos:2181</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/bin/true)</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>


    <property>
        <name>dfs.image.compress</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.image.compression.codec</name>
        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit.streams.cache.size</name>
        <value>1000</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name>
        <value>1000</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.hdfs</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.domain.socket.path</name>
        <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

</configuration>
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hdfs</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.httpfs.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.httpfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>ha.zookeeper.parent-znode</name>
        <value>/dcos-service-hdfs/hadoop-ha</value>
    </property>

</configuration>

The responses are valid hdfs-site.xml and core-site.xml files that clients can use to connect to the service.
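
For example, a host with a Hadoop client installed could fetch both files into its configuration directory and then address the cluster through the hdfs nameservice defined in them (the HADOOP_CONF_DIR location is illustrative and must already exist):

$ curl -s -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/endpoints/hdfs-site.xml > $HADOOP_CONF_DIR/hdfs-site.xml
$ curl -s -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/endpoints/core-site.xml > $HADOOP_CONF_DIR/core-site.xml
$ hadoop fs -ls hdfs://hdfs/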

Nodes API

The nodes API, served under /v1/pods, provides endpoints for retrieving information about nodes, restarting them, and replacing them.

List Nodes

A list of available node ids can be retrieved by sending a GET request to /v1/pods:

CLI Example

$ dcos hdfs pods list

HTTP Example

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods
[
    "data-0",
    "data-1",
    "data-2",
    "journal-0",
    "journal-1",
    "journal-2",
    "name-0",
    "name-1",
    "zkfc-0",
    "zkfc-1"
]
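
Because the response is a plain JSON array of node ids, it is straightforward to script against; for example, to fetch the info for every node in one pass (this assumes jq is installed):

$ for pod in $(curl -s -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods | jq -r '.[]'); do
>   curl -s -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/$pod/info
> done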

Node Info

You can retrieve node information by sending a GET request to /v1/pods/<node-id>/info:

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/<node-id>/info

CLI Example

$ dcos hdfs pods info journal-0

HTTP Example

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/journal-0/info
[{
	info: {
		name: "journal-0-node",
		taskId: {
			value: "journal-0-node__b31a70f4-73c5-4065-990c-76c0c704b8e4"
		},
		slaveId: {
			value: "0060634a-aa2b-4fcc-afa6-5569716b533a-S5"
		},
		resources: [{
			name: "cpus",
			type: "SCALAR",
			scalar: {
				value: 0.3
			},
			ranges: null,
			set: null,
			role: "hdfs-role",
			reservation: {
				principal: "hdfs-principal",
				labels: {
					labels: [{
						key: "resource_id",
						value: "4208f1ea-586f-4157-81fd-dfa0877e7472"
					}]
				}
			},
			disk: null,
			revocable: null,
			shared: null
		}, {
			name: "mem",
			type: "SCALAR",
			scalar: {
				value: 512
			},
			ranges: null,
			set: null,
			role: "hdfs-role",
			reservation: {
				principal: "hdfs-principal",
				labels: {
					labels: [{
						key: "resource_id",
						value: "a0be3c2c-3c7c-47ad-baa9-be81fb5d5f2e"
					}]
				}
			},
			disk: null,
			revocable: null,
			shared: null
		}, {
			name: "ports",
			type: "RANGES",
			scalar: null,
			ranges: {
				range: [{
					begin: 8480,
					end: 8480
				}, {
					begin: 8485,
					end: 8485
				}]
			},
			set: null,
			role: "hdfs-role",
			reservation: {
				principal: "hdfs-principal",
				labels: {
					labels: [{
						key: "resource_id",
						value: "d50b3deb-97c7-4960-89e5-ac4e508e4564"
					}]
				}
			},
			disk: null,
			revocable: null,
			shared: null
		}, {
			name: "disk",
			type: "SCALAR",
			scalar: {
				value: 5000
			},
			ranges: null,
			set: null,
			role: "hdfs-role",
			reservation: {
				principal: "hdfs-principal",
				labels: {
					labels: [{
						key: "resource_id",
						value: "3e624468-11fb-4fcf-9e67-ddb883b1718e"
					}]
				}
			},
			disk: {
				persistence: {
					id: "6bf7fcf1-ccdf-41a3-87ba-459162da1f03",
					principal: "hdfs-principal"
				},
				volume: {
					mode: "RW",
					containerPath: "journal-data",
					hostPath: null,
					image: null,
					source: null
				},
				source: null
			},
			revocable: null,
			shared: null
		}],
		executor: {
			type: null,
			executorId: {
				value: "journal__e42893b5-9d96-4dfb-8e85-8360d483a122"
			},
			frameworkId: null,
			command: {
				uris: [{
					value: "https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/executor.zip",
					executable: null,
					extract: null,
					cache: null,
					outputFile: null
				}, {
					value: "https://downloads.mesosphere.com/libmesos-bundle/libmesos-bundle-1.9-argus-1.1.x-2.tar.gz",
					executable: null,
					extract: null,
					cache: null,
					outputFile: null
				}, {
					value: "https://downloads.mesosphere.com/java/jre-8u112-linux-x64-jce-unlimited.tar.gz",
					executable: null,
					extract: null,
					cache: null,
					outputFile: null
				}, {
					value: "https://downloads.mesosphere.com/hdfs/assets/hadoop-2.6.0-cdh5.9.1-dcos.tar.gz",
					executable: null,
					extract: null,
					cache: null,
					outputFile: null
				}, {
					value: "https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/bootstrap.zip",
					executable: null,
					extract: null,
					cache: null,
					outputFile: null
				}, {
					value: "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/artifacts/template/25f791d8-4d42-458f-84fb-9d82842ffb3e/journal/node/core-site",
					executable: null,
					extract: false,
					cache: null,
					outputFile: "config-templates/core-site"
				}, {
					value: "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/artifacts/template/25f791d8-4d42-458f-84fb-9d82842ffb3e/journal/node/hdfs-site",
					executable: null,
					extract: false,
					cache: null,
					outputFile: "config-templates/hdfs-site"
				}, {
					value: "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/artifacts/template/25f791d8-4d42-458f-84fb-9d82842ffb3e/journal/node/hadoop-metrics2",
					executable: null,
					extract: false,
					cache: null,
					outputFile: "config-templates/hadoop-metrics2"
				}],
				environment: null,
				shell: null,
				value: "export LD_LIBRARY_PATH=$MESOS_SANDBOX/libmesos-bundle/lib:$LD_LIBRARY_PATH && export MESOS_NATIVE_JAVA_LIBRARY=$(ls $MESOS_SANDBOX/libmesos-bundle/lib/libmesos-*.so) && export JAVA_HOME=$(ls -d $MESOS_SANDBOX/jre*/) && ./executor/bin/executor",
				arguments: [],
				user: null
			},
			container: null,
			resources: [],
			name: "journal",
			source: null,
			data: null,
			discovery: null,
			shutdownGracePeriod: null,
			labels: null
		},
		command: {
			uris: [],
			environment: {
				variables: [{
					name: "PERMISSIONS_ENABLED",
					value: "false"
				}, {
					name: "DATA_NODE_BALANCE_BANDWIDTH_PER_SEC",
					value: "41943040"
				}, {
					name: "NAME_NODE_HANDLER_COUNT",
					value: "20"
				}, {
					name: "CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE",
					value: "1000"
				}, {
					name: "HADOOP_ROOT_LOGGER",
					value: "INFO,console"
				}, {
					name: "HA_FENCING_METHODS",
					value: "shell(/bin/true)"
				}, {
					name: "SERVICE_ZK_ROOT",
					value: "dcos-service-hdfs"
				}, {
					name: "HADOOP_PROXYUSER_HUE_GROUPS",
					value: "*"
				}, {
					name: "NAME_NODE_HEARTBEAT_RECHECK_INTERVAL",
					value: "60000"
				}, {
					name: "HADOOP_PROXYUSER_HUE_HOSTS",
					value: "*"
				}, {
					name: "CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS",
					value: "1000"
				}, {
					name: "JOURNAL_NODE_RPC_PORT",
					value: "8485"
				}, {
					name: "CLIENT_FAILOVER_PROXY_PROVIDER_HDFS",
					value: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
				}, {
					name: "DATA_NODE_HANDLER_COUNT",
					value: "10"
				}, {
					name: "HA_AUTOMATIC_FAILURE",
					value: "true"
				}, {
					name: "JOURNALNODE",
					value: "true"
				}, {
					name: "NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION",
					value: "4"
				}, {
					name: "HADOOP_PROXYUSER_HTTPFS_HOSTS",
					value: "*"
				}, {
					name: "POD_INSTANCE_INDEX",
					value: "0"
				}, {
					name: "DATA_NODE_IPC_PORT",
					value: "9005"
				}, {
					name: "JOURNAL_NODE_HTTP_PORT",
					value: "8480"
				}, {
					name: "NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK",
					value: "false"
				}, {
					name: "TASK_USER",
					value: "root"
				}, {
					name: "journal-0-node",
					value: "true"
				}, {
					name: "HADOOP_PROXYUSER_ROOT_GROUPS",
					value: "*"
				}, {
					name: "TASK_NAME",
					value: "journal-0-node"
				}, {
					name: "HADOOP_PROXYUSER_ROOT_HOSTS",
					value: "*"
				}, {
					name: "IMAGE_COMPRESS",
					value: "true"
				}, {
					name: "CLIENT_READ_SHORTCIRCUIT",
					value: "true"
				}, {
					name: "FRAMEWORK_NAME",
					value: "hdfs"
				}, {
					name: "IMAGE_COMPRESSION_CODEC",
					value: "org.apache.hadoop.io.compress.SnappyCodec"
				}, {
					name: "NAME_NODE_SAFEMODE_THRESHOLD_PCT",
					value: "0.9"
				}, {
					name: "NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION",
					value: "0.95"
				}, {
					name: "HADOOP_PROXYUSER_HTTPFS_GROUPS",
					value: "*"
				}, {
					name: "CLIENT_READ_SHORTCIRCUIT_PATH",
					value: "/var/lib/hadoop-hdfs/dn_socket"
				}, {
					name: "DATA_NODE_HTTP_PORT",
					value: "9004"
				}, {
					name: "DATA_NODE_RPC_PORT",
					value: "9003"
				}, {
					name: "NAME_NODE_HTTP_PORT",
					value: "9002"
				}, {
					name: "NAME_NODE_RPC_PORT",
					value: "9001"
				}, {
					name: "CONFIG_TEMPLATE_CORE_SITE",
					value: "config-templates/core-site,hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml"
				}, {
					name: "CONFIG_TEMPLATE_HDFS_SITE",
					value: "config-templates/hdfs-site,hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml"
				}, {
					name: "CONFIG_TEMPLATE_HADOOP_METRICS2",
					value: "config-templates/hadoop-metrics2,hadoop-2.6.0-cdh5.9.1/etc/hadoop/hadoop-metrics2.properties"
				}, {
					name: "PORT_JOURNAL_RPC",
					value: "8485"
				}, {
					name: "PORT_JOURNAL_HTTP",
					value: "8480"
				}]
			},
			shell: null,
			value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs journalnode",
			arguments: [],
			user: null
		},
		container: null,
		healthCheck: null,
		killPolicy: null,
		data: null,
		labels: {
			labels: [{
				key: "goal_state",
				value: "RUNNING"
			}, {
				key: "offer_attributes",
				value: ""
			}, {
				key: "task_type",
				value: "journal"
			}, {
				key: "index",
				value: "0"
			}, {
				key: "offer_hostname",
				value: "10.0.1.23"
			}, {
				key: "target_configuration",
				value: "4bdb3f97-96b0-4e78-8d47-f39edc33f6e3"
			}]
		},
		discovery: null
	},
	status: {
		taskId: {
			value: "journal-0-node__b31a70f4-73c5-4065-990c-76c0c704b8e4"
		},
		state: "TASK_RUNNING",
		message: "Reconciliation: Latest task state",
		source: "SOURCE_MASTER",
		reason: "REASON_RECONCILIATION",
		data: null,
		slaveId: {
			value: "0060634a-aa2b-4fcc-afa6-5569716b533a-S5"
		},
		executorId: null,
		timestamp: 1486694618.923135,
		uuid: null,
		healthy: null,
		labels: null,
		containerStatus: {
			containerId: {
				value: "a4c8433f-2648-4ba7-a8b8-5fe5df20e8af",
				parent: null
			},
			networkInfos: [{
				ipAddresses: [{
					protocol: null,
					ipAddress: "10.0.1.23"
				}],
				name: null,
				groups: [],
				labels: null,
				portMappings: []
			}],
			cgroupInfo: null,
			executorPid: 5594
		},
		unreachableTime: null
	}
}]
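
The response is a JSON array of info/status pairs, so individual fields can be extracted directly; for example, to read the task state (this assumes jq is installed):

$ curl -s -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/journal-0/info | jq '.[0].status.state'
"TASK_RUNNING"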

Replace a Node

The replace endpoint can be used to replace a node with an instance running on another agent node.

CLI Example

$ dcos hdfs pods replace <node-id>

HTTP Example

$ curl -X POST -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/<node-id>/replace

If the operation succeeds, a 200 OK is returned.

Restart a Node

The restart endpoint can be used to restart a node in place on the same agent node.

CLI Example

$ dcos hdfs pods restart <node-id>

HTTP Example

$ curl -X POST -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/<node-id>/restart

If the operation succeeds, a 200 OK is returned.
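
During maintenance it is prudent to wait for a restarted node to return to a running state before touching the next one. A minimal sketch of such a check (the polling loop is illustrative; it assumes jq is installed):

$ curl -X POST -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/journal-0/restart
$ until curl -s -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/pods/journal-0/info | jq -e '.[0].status.state == "TASK_RUNNING"' > /dev/null; do sleep 5; done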

Configuration API

The configuration API provides an endpoint to view current and previous configurations of the cluster.

View Target Config

You can view the current target configuration by sending a GET request to /v1/configurations/target.

CLI Example

$ dcos hdfs config target

HTTP Example

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/configurations/target
{
	name: "hdfs",
	role: "hdfs-role",
	principal: "hdfs-principal",
	api-port: 10002,
	web-url: null,
	zookeeper: "master.mesos:2181",
	pod-specs: [{
		type: "journal",
		user: null,
		count: 3,
		container: null,
		uris: [
			"https://downloads.mesosphere.com/hdfs/assets/hadoop-2.6.0-cdh5.9.1-dcos.tar.gz",
			"https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/bootstrap.zip"
		],
		task-specs: [{
			name: "node",
			goal: "RUNNING",
			resource-set: {
				id: "node-resource-set",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "PortsSpec",
					name: "ports",
					value: {
						type: "RANGES",
						scalar: null,
						ranges: {
							range: [{
								begin: 8485,
								end: 8485
							}, {
								begin: 8480,
								end: 8480
							}]
						},
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					port-specs: [{
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 8485,
									end: 8485
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "journal-rpc",
						envKey: null
					}, {
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 8480,
									end: 8480
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "journal-http",
						envKey: null
					}],
					envKey: null
				}],
				volume-specifications: [{
					@type: "DefaultVolumeSpec",
					type: "ROOT",
					container-path: "journal-data",
					name: "disk",
					value: {
						type: "SCALAR",
						scalar: {
							value: 5000
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: "DISK_SIZE"
				}],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs journalnode",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNALNODE: "true",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root"
				}
			},
			health-check-spec: null,
			readiness-check-spec: null,
			config-files: [{
				name: "core-site",
				relative-path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template-content: "<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name> 
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hadoop-metrics2",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hadoop-metrics2.properties",
				template - content: "# Autogenerated by the Mesos Framework, DO NOT EDIT *.sink.statsd.class=org.apache.hadoop.metrics2.sink.StatsDSink journalnode.sink.statsd.period=10 journalnode.sink.statsd.server.host={{STATSD_UDP_HOST}} journalnode.sink.statsd.server.port={{STATSD_UDP_PORT}} journalnode.sink.statsd.skip.hostname=false"
			}]
		}],
		placement-rule: {
			@type: "AndRule",
			rules: [{
				@type: "TaskTypeRule",
				type: "journal",
				converter: {
					@type: "TaskTypeLabelConverter"
				},
				behavior: "AVOID"
			}, {
				@type: "TaskTypeRule",
				type: "name",
				converter: {
					@type: "TaskTypeLabelConverter"
				},
				behavior: "AVOID"
			}]
		}
	}, {
		type: "name",
		user: null,
		count: 2,
		container: null,
		uris: [
			"https://downloads.mesosphere.com/hdfs/assets/hadoop-2.6.0-cdh5.9.1-dcos.tar.gz",
			"https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/bootstrap.zip"
		],
		task-specs: [{
			name: "node",
			goal: "RUNNING",
			resource-set: {
				id: "name-resources",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "PortsSpec",
					name: "ports",
					value: {
						type: "RANGES",
						scalar: null,
						ranges: {
							range: [{
								begin: 9001,
								end: 9001
							}, {
								begin: 9002,
								end: 9002
							}]
						},
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					port-specs: [{
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9001,
									end: 9001
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "name-rpc",
						envKey: null
					}, {
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9002,
									end: 9002
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "name-http",
						envKey: null
					}],
					envKey: null
				}],
				volume-specifications: [{
					@type: "DefaultVolumeSpec",
					type: "ROOT",
					container-path: "name-data",
					name: "disk",
					value: {
						type: "SCALAR",
						scalar: {
							value: 5000
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: "DISK_SIZE"
				}],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs namenode",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					FRAMEWORK_NAME: "",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAMENODE: "true",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root"
				}
			},
			health-check-spec: null,
			readiness-check-spec: {
				command: "./hadoop-2.6.0-cdh5.9.1/bin/hdfs haadmin -getServiceState name-$POD_INSTANCE_INDEX-node",
				delay: 0,
				interval: 5,
				timeout: 60
			},
			config-files: [{
				name: "core-site",
				relative-path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template-content: "<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name> 
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hadoop-metrics2",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hadoop-metrics2.properties",
				template - content: "# Autogenerated by the Mesos Framework, DO NOT EDIT *.sink.statsd.class=org.apache.hadoop.metrics2.sink.StatsDSink namenode.sink.statsd.period=10 namenode.sink.statsd.server.host={{STATSD_UDP_HOST}} namenode.sink.statsd.server.port={{STATSD_UDP_PORT}} namenode.sink.statsd.skip.hostname=false"
			}]
		}, {
			name: "format",
			goal: "FINISHED",
			resource-set: {
				id: "name-resources",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "PortsSpec",
					name: "ports",
					value: {
						type: "RANGES",
						scalar: null,
						ranges: {
							range: [{
								begin: 9001,
								end: 9001
							}, {
								begin: 9002,
								end: 9002
							}]
						},
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					port-specs: [{
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9001,
									end: 9001
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "name-rpc",
						envKey: null
					}, {
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9002,
									end: 9002
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "name-http",
						envKey: null
					}],
					envKey: null
				}],
				volume-specifications: [{
					@type: "DefaultVolumeSpec",
					type: "ROOT",
					container-path: "name-data",
					name: "disk",
					value: {
						type: "SCALAR",
						scalar: {
							value: 5000
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: "DISK_SIZE"
				}],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs namenode -format",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					FRAMEWORK_NAME: "",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAMENODE: "true",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root"
				}
			},
			health-check-spec: null,
			readiness-check-spec: null,
			config-files: [{
				name: "core-site",
				relative-path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template-content: "<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name> 
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}]
		}, {
			name: "bootstrap",
			goal: "FINISHED",
			resource-set: {
				id: "name-resources",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "PortsSpec",
					name: "ports",
					value: {
						type: "RANGES",
						scalar: null,
						ranges: {
							range: [{
								begin: 9001,
								end: 9001
							}, {
								begin: 9002,
								end: 9002
							}]
						},
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					port-specs: [{
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9001,
									end: 9001
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "name-rpc",
						envKey: null
					}, {
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9002,
									end: 9002
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "name-http",
						envKey: null
					}],
					envKey: null
				}],
				volume-specifications: [{
					@type: "DefaultVolumeSpec",
					type: "ROOT",
					container - path: "name-data",
					name: "disk",
					value: {
						type: "SCALAR",
						scalar: {
							value: 5000
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: "DISK_SIZE"
				}],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs namenode -bootstrapStandby",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					FRAMEWORK_NAME: "",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAMENODE: "true",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root"
				}
			},
			health-check-spec: null,
			readiness-check-spec: null,
			config-files: [{
				name: "core-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-bootstrap-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name> 
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}]
		}],
		placement-rule: {
			@type: "AndRule",
			rules: [{
				@type: "TaskTypeRule",
				type: "name",
				converter: {
					@type: "TaskTypeLabelConverter"
				},
				behavior: "AVOID"
			}, {
				@type: "TaskTypeRule",
				type: "journal",
				converter: {
					@type: "TaskTypeLabelConverter"
				},
				behavior: "AVOID"
			}]
		}
	}, {
		type: "zkfc",
		user: null,
		count: 2,
		container: null,
		uris: [
			"https://downloads.mesosphere.com/hdfs/assets/hadoop-2.6.0-cdh5.9.1-dcos.tar.gz",
			"https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/bootstrap.zip"
		],
		task-specs: [{
			name: "node",
			goal: "RUNNING",
			resource-set: {
				id: "zkfc-resources",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}],
				volume-specifications: [],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs zkfc",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root",
					ZKFC: "true"
				}
			},
			health-check-spec: null,
			readiness-check-spec: null,
			config-files: [{
				name: "core-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name> 
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}]
		}, {
			name: "format",
			goal: "FINISHED",
			resource-set: {
				id: "zkfc-resources",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}],
				volume-specifications: [],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs zkfc -formatZK",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root",
					ZKFC: "true"
				}
			},
			health-check-spec: null,
			readiness-check-spec: null,
			config-files: [{
				name: "core-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name> 
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}]
		}],
		placement-rule: {
			@type: "AndRule",
			rules: [{
				@type: "TaskTypeRule",
				type: "zkfc",
				converter: {
					@type: "TaskTypeLabelConverter"
				},
				behavior: "AVOID"
			}, {
				@type: "TaskTypeRule",
				type: "name",
				converter: {
					@type: "TaskTypeLabelConverter"
				},
				behavior: "COLOCATE"
			}]
		}
	}, {
		type: "data",
		user: null,
		count: 3,
		container: null,
		uris: [
			"https://downloads.mesosphere.com/hdfs/assets/hadoop-2.6.0-cdh5.9.1-dcos.tar.gz",
			"https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/bootstrap.zip"
		],
		task-specs: [{
			name: "node",
			goal: "RUNNING",
			resource-set: {
				id: "node-resource-set",
				resource-specifications: [{
					@type: "DefaultResourceSpec",
					name: "cpus",
					value: {
						type: "SCALAR",
						scalar: {
							value: 0.3
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "DefaultResourceSpec",
					name: "mem",
					value: {
						type: "SCALAR",
						scalar: {
							value: 512
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: null
				}, {
					@type: "PortsSpec",
					name: "ports",
					value: {
						type: "RANGES",
						scalar: null,
						ranges: {
							range: [{
								begin: 9003,
								end: 9003
							}, {
								begin: 9004,
								end: 9004
							}, {
								begin: 9005,
								end: 9005
							}]
						},
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					port-specs: [{
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9003,
									end: 9003
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "data-rpc",
						envKey: null
					}, {
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9004,
									end: 9004
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "data-http",
						envKey: null
					}, {
						@type: "PortSpec",
						name: "ports",
						value: {
							type: "RANGES",
							scalar: null,
							ranges: {
								range: [{
									begin: 9005,
									end: 9005
								}]
							},
							set: null,
							text: null
						},
						role: "hdfs-role",
						principal: "hdfs-principal",
						port - name: "data-ipc",
						envKey: null
					}],
					envKey: null
				}],
				volume-specifications: [{
					@type: "DefaultVolumeSpec",
					type: "ROOT",
					container - path: "data-data",
					name: "disk",
					value: {
						type: "SCALAR",
						scalar: {
							value: 5000
						},
						ranges: null,
						set: null,
						text: null
					},
					role: "hdfs-role",
					principal: "hdfs-principal",
					envKey: "DISK_SIZE"
				}],
				role: "hdfs-role",
				principal: "hdfs-principal"
			},
			command-spec: {
				value: "./bootstrap && mkdir -p /var/lib/hadoop-hdfs && chown root /var/lib/hadoop-hdfs && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs datanode ",
				environment: {
					CLIENT_FAILOVER_PROXY_PROVIDER_HDFS: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
					CLIENT_READ_SHORTCIRCUIT: "true",
					CLIENT_READ_SHORTCIRCUIT_PATH: "/var/lib/hadoop-hdfs/dn_socket",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE: "1000",
					CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS: "1000",
					DATANODE: "true",
					DATA_NODE_BALANCE_BANDWIDTH_PER_SEC: "41943040",
					DATA_NODE_HANDLER_COUNT: "10",
					DATA_NODE_HTTP_PORT: "9004",
					DATA_NODE_IPC_PORT: "9005",
					DATA_NODE_RPC_PORT: "9003",
					FRAMEWORK_NAME: "",
					HADOOP_PROXYUSER_HTTPFS_GROUPS: "*",
					HADOOP_PROXYUSER_HTTPFS_HOSTS: "*",
					HADOOP_PROXYUSER_HUE_GROUPS: "*",
					HADOOP_PROXYUSER_HUE_HOSTS: "*",
					HADOOP_PROXYUSER_ROOT_GROUPS: "*",
					HADOOP_PROXYUSER_ROOT_HOSTS: "*",
					HADOOP_ROOT_LOGGER: "INFO,console",
					HA_AUTOMATIC_FAILURE: "true",
					HA_FENCING_METHODS: "shell(/bin/true)",
					IMAGE_COMPRESS: "true",
					IMAGE_COMPRESSION_CODEC: "org.apache.hadoop.io.compress.SnappyCodec",
					JOURNAL_NODE_HTTP_PORT: "8480",
					JOURNAL_NODE_RPC_PORT: "8485",
					NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK: "false",
					NAME_NODE_HANDLER_COUNT: "20",
					NAME_NODE_HEARTBEAT_RECHECK_INTERVAL: "60000",
					NAME_NODE_HTTP_PORT: "9002",
					NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION: "0.95",
					NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION: "4",
					NAME_NODE_RPC_PORT: "9001",
					NAME_NODE_SAFEMODE_THRESHOLD_PCT: "0.9",
					PERMISSIONS_ENABLED: "false",
					SERVICE_ZK_ROOT: "dcos-service-hdfs",
					TASK_USER: "root"
				}
			},
			health-check-spec: null,
			readiness-check-spec: null,
			config-files: [{
				name: "core-site",
				relative - path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml",
				template - content: "<?xml version="
				1.0 " encoding="
				UTF - 8 " standalone="
				no "?> <?xml-stylesheet type="
				text / xsl " href="
				configuration.xsl "?><configuration> <property> <name>fs.default.name</name> <value>hdfs://hdfs</value> </property> <property> <name>hadoop.proxyuser.hue.hosts</name> <value>{{HADOOP_PROXYUSER_HUE_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.hue.groups</name> <value>{{HADOOP_PROXYUSER_HUE_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.root.hosts</name> <value>{{HADOOP_PROXYUSER_ROOT_HOSTS}}</value> </property> <property> <name>hadoop.proxyuser.root.groups</name> <value>{{HADOOP_PROXYUSER_ROOT_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.groups</name> <value>{{HADOOP_PROXYUSER_HTTPFS_GROUPS}}</value> </property> <property> <name>hadoop.proxyuser.httpfs.hosts</name> <value>{{HADOOP_PROXYUSER_HTTPFS_HOSTS}}</value> </property> <property> <name>ha.zookeeper.parent-znode</name> <value>/{{SERVICE_ZK_ROOT}}/hadoop-ha</value> </property> {{#SECURE_MODE}} <property> <!-- The ZKFC nodes use this property to verify they are connecting to the namenode with the expected principal. --> <name>hadoop.security.service.user.name.key.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>hadoop.security.authentication</name> <value>kerberos</value> </property> <property> <name>hadoop.security.authorization</name> <value>true</value> </property> {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hdfs-site",
				relative-path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml",
				template-content: "<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>dfs.nameservice.id</name> <value>hdfs</value> </property> <property> <name>dfs.nameservices</name> <value>hdfs</value> </property> <property> <name>dfs.ha.namenodes.hdfs</name> <value>name-0-node,name-1-node</value> </property> <!-- namenode --> <property> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://journal-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}};journal-2-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{JOURNAL_NODE_RPC_PORT}}/hdfs</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>{{MESOS_SANDBOX}}/name-data</value> </property> <property> <name>dfs.namenode.safemode.threshold-pct</name> <value>{{NAME_NODE_SAFEMODE_THRESHOLD_PCT}}</value> </property> <property> <name>dfs.namenode.heartbeat.recheck-interval</name> <value>{{NAME_NODE_HEARTBEAT_RECHECK_INTERVAL}}</value> </property> <property> <name>dfs.namenode.handler.count</name> <value>{{NAME_NODE_HANDLER_COUNT}}</value> </property> <property> <name>dfs.namenode.invalidate.work.pct.per.iteration</name> <value>{{NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.replication.work.multiplier.per.iteration</name> <value>{{NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION}}</value> </property> <property> <name>dfs.namenode.datanode.registration.ip-hostname-check</name> <value>{{NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK}}</value> </property> <!-- name-0-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-0-node</name> <value>name-0-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name> <value>0.0.0.0</value> </property> <!-- name-1-node --> <property> <name>dfs.namenode.rpc-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <property> <name>dfs.namenode.http-address.hdfs.name-1-node</name> <value>name-1-node.{{FRAMEWORK_NAME}}.autoip.dcos.thisdcos.directory:{{NAME_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name> <value>0.0.0.0</value> </property> <!-- journalnode --> <property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:{{JOURNAL_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:{{JOURNAL_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.journalnode.edits.dir</name> <value>{{MESOS_SANDBOX}}/journal-data</value> </property> <!-- datanode --> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:{{DATA_NODE_RPC_PORT}}</value> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:{{DATA_NODE_HTTP_PORT}}</value> </property> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:{{DATA_NODE_IPC_PORT}}</value> </property> <property> <name>dfs.datanode.data.dir</name>
<value>{{MESOS_SANDBOX}}/data-data</value> </property> <property> <name>dfs.datanode.balance.bandwidthPerSec</name> <value>41943040</value> </property> <property> <name>dfs.datanode.handler.count</name> <value>{{DATA_NODE_HANDLER_COUNT}}</value> </property> <!-- HA --> <property> <name>ha.zookeeper.quorum</name> <value>master.mesos:2181</value> </property> <property> <name>dfs.ha.fencing.methods</name> <value>{{HA_FENCING_METHODS}}</value> </property> <property> <name>dfs.ha.automatic-failover.enabled</name> <value>{{HA_AUTOMATIC_FAILURE}}</value> </property> {{#NAMENODE}} <property> <name>dfs.ha.namenode.id</name> <value>name-{{POD_INSTANCE_INDEX}}-node</value> </property> {{/NAMENODE}} <property> <name>dfs.image.compress</name> <value>{{IMAGE_COMPRESS}}</value> </property> <property> <name>dfs.image.compression.codec</name> <value>{{IMAGE_COMPRESSION_CODEC}}</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>{{CLIENT_READ_SHORTCIRCUIT}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE}}</value> </property> <property> <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name> <value>{{CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS}}</value> </property> <property> <name>dfs.client.failover.proxy.provider.hdfs</name> <value>{{CLIENT_FAILOVER_PROXY_PROVIDER_HDFS}}</value> </property> <property> <name>dfs.domain.socket.path</name> <value>{{CLIENT_READ_SHORTCIRCUIT_PATH}}</value> </property> <property> <name>dfs.permissions.enabled</name> <value>{{PERMISSIONS_ENABLED}}</value> </property> {{#SECURE_MODE}} <property> <name>ignore.secure.ports.for.testing</name> <value>true</value> </property> <!-- Security Configuration --> <property> <name>hadoop.security.auth_to_local</name> <value> RULE:[2:$1@$0](.*)s/.*/{{TASK_USER}}/ RULE:[1:$1@$0](.*)s/.*/{{TASK_USER}}/ </value> </property> <property> <name>dfs.block.access.token.enable</name> <value>true</value> </property> <property> <name>dfs.namenode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.principal.pattern</name> <value>{{KERBEROS_PRIMARY}}/*@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.cluster.administrators</name> <value>core,root,hdfs,nobody</value> </property> <property> <name>dfs.web.authentication.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.web.authentication.kerberos.keytab</name> <value>keytabs/{{KERBEROS_PRIMARY}}.{{TASK_NAME}}.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> {{#DATANODE}} <!-- DataNode Security Configuration --> <property> <name>dfs.datanode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.datanode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/data-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.datanode.data.dir.perm</name> <value>700</value> </property> {{/DATANODE}} {{#NAMENODE}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> 
<value>keytabs/{{KERBEROS_PRIMARY}}.name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/name-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/NAMENODE}} {{#ZKFC}} <!-- NameNode Security Configuration --> <property> <name>dfs.namenode.keytab.file</name> <value>keytabs/{{KERBEROS_PRIMARY}}.zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.namenode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.namenode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/zkfc-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/ZKFC}} {{#JOURNALNODE}} <!-- JournalNode Security Configuration --> <property> <name>dfs.journalnode.keytab.file</name> <value>keytabs/hdfs.journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos.keytab</value> </property> <property> <name>dfs.journalnode.kerberos.principal</name> <value>{{KERBEROS_PRIMARY}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> <property> <name>dfs.journalnode.kerberos.internal.spnego.principal</name> <value>{{KERBEROS_PRIMARY_HTTP}}/journal-{{POD_INSTANCE_INDEX}}-node.{{FRAMEWORK_NAME}}.mesos@{{KERBEROS_REALM}}</value> </property> {{/JOURNALNODE}} {{/SECURE_MODE}} </configuration> "
			}, {
				name: "hadoop-metrics2",
				relative-path: "hadoop-2.6.0-cdh5.9.1/etc/hadoop/hadoop-metrics2.properties",
				template-content: "# Autogenerated by the Mesos Framework, DO NOT EDIT *.sink.statsd.class=org.apache.hadoop.metrics2.sink.StatsDSink datanode.sink.statsd.period=10 datanode.sink.statsd.server.host={{STATSD_UDP_HOST}} datanode.sink.statsd.server.port={{STATSD_UDP_PORT}} datanode.sink.statsd.skip.hostname=false"
			}]
		}],
		placement-rule: {
			@type: "TaskTypeRule",
			type: "data",
			converter: {
				@type: "TaskTypeLabelConverter"
			},
			behavior: "AVOID"
		}
	}],
	replacement-failure-policy: {
		permanentFailureTimoutMins: null,
		minReplaceDelayMins: 0
	}
}

List Configs

You can list all configuration IDs by sending a GET request to /v1/configurations.

CLI Example

$ dcos hdfs config list

HTTP Example

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/configurations
[
    "9a8d4308-ab9d-4121-b460-696ec3368ad6"
]

View Specified Config

You can view a specific configuration by sending a GET request to /v1/configurations/<config-id>.

CLI Example

$ dcos hdfs config show 9a8d4308-ab9d-4121-b460-696ec3368ad6

HTTP Example

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs/v1/configurations/9a8d4308-ab9d-4121-b460-696ec3368ad6
{
    ... same format as target config above ...
}

Service Status Info

Send a GET request to the /v1/state/properties/suppressed endpoint to learn if HDFS is in a suppressed state and not receiving offers. If a service does not need offers, Mesos can "suppress" it so that other services are not starved for resources. You can use this request to troubleshoot: if you think HDFS should be receiving resource offers, but is not, you can use this API call to see if HDFS is suppressed.

curl -H "Authorization: token=$auth_token" "<dcos_url>/service/hdfs/v1/state/properties/suppressed"

Changing Configuration at Runtime

You can customize your cluster in-place when it is up and running.

The HDFS scheduler runs as a Marathon process and can be reconfigured by changing values for the service from the DC/OS dashboard. These are the general steps to follow:

  1. Go to the DC/OS dashboard.
  2. Click the Services tab, then the name of the HDFS service to be updated.
  3. Within the HDFS instance details view, click the menu in the upper right, then choose Edit.
  4. Edit the value you want to change. For example, to increase the number of data nodes, edit the value for DATA_COUNT.
  5. Click REVIEW & RUN to apply any changes and cleanly reload the HDFS scheduler. The HDFS cluster itself will persist across the change; you can monitor the resulting rolling restart as shown below.

Configuration Deployment Strategy

Configuration updates are rolled out through execution of update plans. You can configure the way these plans are executed.

Configuration Update Plans

This configuration update strategy is analogous to the installation procedure above. If the configuration update is accepted, there will be no errors in the generated plan, and a rolling restart will be performed on all nodes to apply the updated configuration. However, the default strategy can be overridden by a strategy the user provides.

Configuration Update

Make the REST request below to view the current plan. See the REST API Authentication part of the REST API Reference section for information on how this request must be authenticated.

$ curl -v -H "Authorization: token=$(dcos config show core.dcos_acs_token)" http://<dcos_url>/service/hdfs/v1/plans/deploy

The response will look similar to this:

{
	phases: [{
		id: "77708b6f-52db-4361-a56f-4d2bd9d6bf09",
		name: "jn-deploy",
		steps: [{
			id: "fe96b235-bf26-4b36-ad0e-8dd3494b5b63",
			status: "COMPLETE",
			name: "journal-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-0:[node] [fe96b235-bf26-4b36-ad0e-8dd3494b5b63]' has status: 'COMPLETE'."
		}, {
			id: "3a590b92-f2e8-439a-8951-4135b6c29b34",
			status: "COMPLETE",
			name: "journal-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-1:[node] [3a590b92-f2e8-439a-8951-4135b6c29b34]' has status: 'COMPLETE'."
		}, {
			id: "c079bfcc-b620-4e0b-93d5-604223c0538d",
			status: "COMPLETE",
			name: "journal-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-2:[node] [c079bfcc-b620-4e0b-93d5-604223c0538d]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "61a822e4-421d-4a46-b374-b0abaac6d2d4",
		name: "nn-deploy",
		steps: [{
			id: "0361e2f5-6e5d-42c8-9696-420f38ba5398",
			status: "COMPLETE",
			name: "name-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[format] [0361e2f5-6e5d-42c8-9696-420f38ba5398]' has status: 'COMPLETE'."
		}, {
			id: "a3a07cd4-828d-4161-b607-a745f3845abf",
			status: "COMPLETE",
			name: "name-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[node] [a3a07cd4-828d-4161-b607-a745f3845abf]' has status: 'COMPLETE'."
		}, {
			id: "74ce8cf4-63cf-4ba2-ae6d-3ca6a389c66d",
			status: "COMPLETE",
			name: "name-1:[bootstrap]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[bootstrap] [74ce8cf4-63cf-4ba2-ae6d-3ca6a389c66d]' has status: 'COMPLETE'."
		}, {
			id: "d46eb578-eb55-40b7-a683-7e44543ce63e",
			status: "COMPLETE",
			name: "name-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[node] [d46eb578-eb55-40b7-a683-7e44543ce63e]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "f42c610b-e52b-4ce9-ab1c-887a123df234",
		name: "zkfc-deploy",
		steps: [{
			id: "9a5be25e-9997-4be9-bc67-1722056c8e8a",
			status: "COMPLETE",
			name: "zkfc-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[format] [9a5be25e-9997-4be9-bc67-1722056c8e8a]' has status: 'COMPLETE'."
		}, {
			id: "2c8c9518-6d3c-48ed-bca1-58fd749da9c0",
			status: "COMPLETE",
			name: "zkfc-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[node] [2c8c9518-6d3c-48ed-bca1-58fd749da9c0]' has status: 'COMPLETE'."
		}, {
			id: "7e146767-21be-4d95-b83d-667a647b0503",
			status: "COMPLETE",
			name: "zkfc-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-1:[node] [7e146767-21be-4d95-b83d-667a647b0503]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "774c5fec-7195-4ffd-850f-df62cf629fa9",
		name: "dn-deploy",
		steps: [{
			id: "f41b364f-5804-41ec-9d02-cd27f7b484ef",
			status: "COMPLETE",
			name: "data-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-0:[node] [f41b364f-5804-41ec-9d02-cd27f7b484ef]' has status: 'COMPLETE'."
		}, {
			id: "22d457dc-6ad4-4f6f-93f3-4c5071069503",
			status: "STARTING",
			name: "data-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-1:[node] [22d457dc-6ad4-4f6f-93f3-4c5071069503]' has status: 'STARTING'."
		}, {
			id: "a2798e72-83e2-4fad-a673-c5ff42ac9a0c",
			status: "STARTING",
			name: "data-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-2:[node] [a2798e72-83e2-4fad-a673-c5ff42ac9a0c]' has status: 'STARTING'."
		}],
		status: "STARTING"
	}],
	errors: [],
	status: "STARTING"
}

If you want to interrupt a configuration update that is in progress, enter the interrupt command.

$ curl -X POST -H "Authorization: token=$(dcos config show core.dcos_acs_token)" http://<dcos_url>/service/hdfs/v1/plans/deploy/interrupt

If you query the plan again, the response will look like this (notice status: "WAITING"):

{
	phases: [{
		id: "77708b6f-52db-4361-a56f-4d2bd9d6bf09",
		name: "jn-deploy",
		steps: [{
			id: "fe96b235-bf26-4b36-ad0e-8dd3494b5b63",
			status: "COMPLETE",
			name: "journal-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-0:[node] [fe96b235-bf26-4b36-ad0e-8dd3494b5b63]' has status: 'COMPLETE'."
		}, {
			id: "3a590b92-f2e8-439a-8951-4135b6c29b34",
			status: "COMPLETE",
			name: "journal-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-1:[node] [3a590b92-f2e8-439a-8951-4135b6c29b34]' has status: 'COMPLETE'."
		}, {
			id: "c079bfcc-b620-4e0b-93d5-604223c0538d",
			status: "COMPLETE",
			name: "journal-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-2:[node] [c079bfcc-b620-4e0b-93d5-604223c0538d]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "61a822e4-421d-4a46-b374-b0abaac6d2d4",
		name: "nn-deploy",
		steps: [{
			id: "0361e2f5-6e5d-42c8-9696-420f38ba5398",
			status: "COMPLETE",
			name: "name-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[format] [0361e2f5-6e5d-42c8-9696-420f38ba5398]' has status: 'COMPLETE'."
		}, {
			id: "a3a07cd4-828d-4161-b607-a745f3845abf",
			status: "COMPLETE",
			name: "name-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[node] [a3a07cd4-828d-4161-b607-a745f3845abf]' has status: 'COMPLETE'."
		}, {
			id: "74ce8cf4-63cf-4ba2-ae6d-3ca6a389c66d",
			status: "COMPLETE",
			name: "name-1:[bootstrap]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[bootstrap] [74ce8cf4-63cf-4ba2-ae6d-3ca6a389c66d]' has status: 'COMPLETE'."
		}, {
			id: "d46eb578-eb55-40b7-a683-7e44543ce63e",
			status: "COMPLETE",
			name: "name-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[node] [d46eb578-eb55-40b7-a683-7e44543ce63e]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "f42c610b-e52b-4ce9-ab1c-887a123df234",
		name: "zkfc-deploy",
		steps: [{
			id: "9a5be25e-9997-4be9-bc67-1722056c8e8a",
			status: "COMPLETE",
			name: "zkfc-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[format] [9a5be25e-9997-4be9-bc67-1722056c8e8a]' has status: 'COMPLETE'."
		}, {
			id: "2c8c9518-6d3c-48ed-bca1-58fd749da9c0",
			status: "COMPLETE",
			name: "zkfc-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[node] [2c8c9518-6d3c-48ed-bca1-58fd749da9c0]' has status: 'COMPLETE'."
		}, {
			id: "7e146767-21be-4d95-b83d-667a647b0503",
			status: "COMPLETE",
			name: "zkfc-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-1:[node] [7e146767-21be-4d95-b83d-667a647b0503]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "774c5fec-7195-4ffd-850f-df62cf629fa9",
		name: "dn-deploy",
		steps: [{
			id: "f41b364f-5804-41ec-9d02-cd27f7b484ef",
			status: "COMPLETE",
			name: "data-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-0:[node] [f41b364f-5804-41ec-9d02-cd27f7b484ef]' has status: 'COMPLETE'."
		}, {
			id: "22d457dc-6ad4-4f6f-93f3-4c5071069503",
			status: "STARTING",
			name: "data-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-1:[node] [22d457dc-6ad4-4f6f-93f3-4c5071069503]' has status: 'STARTING'."
		}, {
			id: "a2798e72-83e2-4fad-a673-c5ff42ac9a0c",
			status: "PENDING",
			name: "data-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-2:[node] [a2798e72-83e2-4fad-a673-c5ff42ac9a0c]' has status: 'PENDING'."
		}],
		status: "WAITING"
	}],
	errors: [],
	status: "WAITING"
}

Note: The interrupt command cannot stop a step that is already STARTING, but it will prevent subsequent steps from starting.

Enter the continue command to resume the update process.

$ curl -X POST -H "Authorization: token=$(dcos config show core.dcos_acs_token)" http://<dcos_url>/service/hdfs/v1/plans/deploy/continue

After you execute the continue operation, the plan will look like this:

{
	phases: [{
		id: "77708b6f-52db-4361-a56f-4d2bd9d6bf09",
		name: "jn-deploy",
		steps: [{
			id: "fe96b235-bf26-4b36-ad0e-8dd3494b5b63",
			status: "COMPLETE",
			name: "journal-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-0:[node] [fe96b235-bf26-4b36-ad0e-8dd3494b5b63]' has status: 'COMPLETE'."
		}, {
			id: "3a590b92-f2e8-439a-8951-4135b6c29b34",
			status: "COMPLETE",
			name: "journal-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-1:[node] [3a590b92-f2e8-439a-8951-4135b6c29b34]' has status: 'COMPLETE'."
		}, {
			id: "c079bfcc-b620-4e0b-93d5-604223c0538d",
			status: "COMPLETE",
			name: "journal-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-2:[node] [c079bfcc-b620-4e0b-93d5-604223c0538d]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "61a822e4-421d-4a46-b374-b0abaac6d2d4",
		name: "nn-deploy",
		steps: [{
			id: "0361e2f5-6e5d-42c8-9696-420f38ba5398",
			status: "COMPLETE",
			name: "name-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[format] [0361e2f5-6e5d-42c8-9696-420f38ba5398]' has status: 'COMPLETE'."
		}, {
			id: "a3a07cd4-828d-4161-b607-a745f3845abf",
			status: "COMPLETE",
			name: "name-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[node] [a3a07cd4-828d-4161-b607-a745f3845abf]' has status: 'COMPLETE'."
		}, {
			id: "74ce8cf4-63cf-4ba2-ae6d-3ca6a389c66d",
			status: "COMPLETE",
			name: "name-1:[bootstrap]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[bootstrap] [74ce8cf4-63cf-4ba2-ae6d-3ca6a389c66d]' has status: 'COMPLETE'."
		}, {
			id: "d46eb578-eb55-40b7-a683-7e44543ce63e",
			status: "COMPLETE",
			name: "name-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[node] [d46eb578-eb55-40b7-a683-7e44543ce63e]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "f42c610b-e52b-4ce9-ab1c-887a123df234",
		name: "zkfc-deploy",
		steps: [{
			id: "9a5be25e-9997-4be9-bc67-1722056c8e8a",
			status: "COMPLETE",
			name: "zkfc-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[format] [9a5be25e-9997-4be9-bc67-1722056c8e8a]' has status: 'COMPLETE'."
		}, {
			id: "2c8c9518-6d3c-48ed-bca1-58fd749da9c0",
			status: "COMPLETE",
			name: "zkfc-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[node] [2c8c9518-6d3c-48ed-bca1-58fd749da9c0]' has status: 'COMPLETE'."
		}, {
			id: "7e146767-21be-4d95-b83d-667a647b0503",
			status: "COMPLETE",
			name: "zkfc-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-1:[node] [7e146767-21be-4d95-b83d-667a647b0503]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "774c5fec-7195-4ffd-850f-df62cf629fa9",
		name: "dn-deploy",
		steps: [{
			id: "f41b364f-5804-41ec-9d02-cd27f7b484ef",
			status: "COMPLETE",
			name: "data-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-0:[node] [f41b364f-5804-41ec-9d02-cd27f7b484ef]' has status: 'COMPLETE'."
		}, {
			id: "22d457dc-6ad4-4f6f-93f3-4c5071069503",
			status: "STARTING",
			name: "data-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-1:[node] [22d457dc-6ad4-4f6f-93f3-4c5071069503]' has status: 'STARTING'."
		}, {
			id: "a2798e72-83e2-4fad-a673-c5ff42ac9a0c",
			status: "STARTING",
			name: "data-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-2:[node] [a2798e72-83e2-4fad-a673-c5ff42ac9a0c]' has status: 'STARTING'."
		}],
		status: "STARTING"
	}],
	errors: [],
	status: "STARTING"
}

Configuration Options

The following describes the most commonly used features of DC/OS Apache HDFS and how to configure them via the DC/OS CLI and the DC/OS GUI. There are two methods of configuring an HDFS cluster: the configuration may be specified using a JSON file during installation via the DC/OS command line (see the Installation section), or via modification to the Service Scheduler's DC/OS environment at runtime (see the Configuration Update section). Note that some configuration options may only be specified at installation time.

Service Configuration

The service configuration object contains properties that MUST be specified during installation and CANNOT be modified after installation is in progress. This configuration object is similar across all DC/OS Infinity services. Service configuration example:

{
    "service": {
        "name": "hdfs",
        "principal": "hdfs-principal",
    }
}
  • name (string): The name of the HDFS service installation. This must be unique for each DC/OS HDFS service instance deployed on a DC/OS cluster. It will determine the ID of the HDFS nameservice, which must be unique within a DC/OS cluster.
  • principal (string): The authentication principal for the HDFS cluster.

Change the Service Name

  • In the DC/OS CLI, options.json: name = string (default: hdfs)
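
For example, you can set the name in an options file at install time (a sketch; hdfs-prod is an arbitrary name):

$ cat options.json
{
    "service": {
        "name": "hdfs-prod"
    }
}
$ dcos package install hdfs --options=options.json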

Node Configuration

The node configuration objects correspond to the configuration for nodes in the HDFS cluster. Node configuration MUST be specified during installation, and all of the properties except disk and disk_type MAY be modified during the configuration update process.

Example node configuration:

	"journal_node": {
		"cpus": 0.5,
		"mem": 4096,
		"disk": 10240,
		"disk_type": "ROOT",
		"strategy": "parallel"
	},
    "name_node": {
		"cpus": 0.5,
		"mem": 4096,
		"disk": 10240,
		"disk_type": "ROOT"
	},
    "zkfc_node": {
		"cpus": 0.5,
		"mem": 4096
	},
	"data_node": {
	    "count": 3,
		"cpus": 0.5,
		"mem": 4096,
		"disk": 10240,
		"disk_type": "ROOT",
		"strategy": "parallel"
	}
  • cpus (number): The number of CPU shares allocated to the node's process.
  • mem (integer): The amount of memory, in MB, allocated to the node. This value MUST be larger than the specified max heap size. Make sure to allocate enough space for additional memory used by the JVM and other overhead. A good rule of thumb is to allocate twice the heap size in MB; for example, with a 2048 MB heap, allocate 4096 MB of memory.
  • disk (integer): The amount of disk, in MB, allocated to the node. Note: Once this value is configured, it cannot be changed.
  • disk_type (string): The type of disk to use for storing data. Possible values: ROOT (default) and MOUNT. Note: Once this value is configured, it cannot be changed.
    • ROOT: Data is stored on the same volume as the agent work directory, and the node tasks use the configured amount of disk space.
    • MOUNT: Data is stored on a dedicated, operator-formatted volume attached to the agent. Dedicated MOUNT volumes have performance advantages, and a disk error on a MOUNT volume will be correctly reported to HDFS.
  • strategy (string): The strategy used to deploy that node type. Possible values: parallel (default) and serial.
    • parallel: All nodes of that type are deployed at the same time.
    • serial: All nodes of that type are deployed in sequence.
  • count (integer): The number of nodes of that node type in the cluster. There are always exactly two name nodes, so the name_node object has no count property. Users may select either 3 or 5 journal nodes. The default value of 3 is sufficient for most deployments and should only be overridden after careful thought. At least 3 data nodes should be configured, but this value may be increased to meet the storage needs of the deployment.
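
For example, an options file that provisions larger data nodes on dedicated volumes and deploys them serially might look like the following (the values are illustrative, not recommendations):

{
    "data_node": {
        "count": 5,
        "cpus": 1.0,
        "mem": 8192,
        "disk": 102400,
        "disk_type": "MOUNT",
        "strategy": "serial"
    }
}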

HDFS File System Configuration

The HDFS file system network configuration, permissions, and compression are configured via the hdfs JSON object. These properties are set at installation time and cannot be reconfigured afterward. Example HDFS configuration:

{
    "hdfs": {
		"name_node_rpc_port": 9001,
		"name_node_http_port": 9002,
		"journal_node_rpc_port": 8485,
		"journal_node_http_port": 8480,
		"data_node_rpc_port": 9005,
		"data_node_http_port": 9006,
		"data_node_ipc_port": 9007,
		"permissions_enabled": false,
		"name_node_heartbeat_recheck_interval": 60000,
		"compress_image": true,
		"image_compression_codec": "org.apache.hadoop.io.compress.SnappyCodec"
   }
}
  • name_node_rpc_port (integer): The port on which the name nodes will listen for RPC connections.
  • name_node_http_port (integer): The port on which the name nodes will listen for HTTP connections.
  • journal_node_rpc_port (integer): The port on which the journal nodes will listen for RPC connections.
  • journal_node_http_port (integer): The port on which the journal nodes will listen for HTTP connections.
  • data_node_rpc_port (integer): The port on which the data nodes will listen for RPC connections.
  • data_node_http_port (integer): The port on which the data nodes will listen for HTTP connections.
  • data_node_ipc_port (integer): The port on which the data nodes will listen for IPC connections. This property is useful if you deploy a service that colocates with HDFS data nodes; it provides domain socket communication instead of RPC.

Operating System Configuration

In order for HDFS to function correctly, you must perform several important configuration modifications to the OS hosting the deployment.

Configuration Settings

HDFS requires OS-level configuration settings typical of a production storage server.

  • vm.swappiness = 0 (/etc/sysctl.conf): If the OS swaps out the HDFS processes, they can fail to respond to RPC requests, resulting in the process being marked down by the cluster. This can be particularly troublesome for name nodes and journal nodes.
  • nofile = unlimited (/etc/security/limits.conf): If this value is too low, a job that operates on the HDFS cluster may fail due to too many open file handles.
  • nproc = 32768 (/etc/security/limits.conf and /etc/security/limits.d/90-nproc.conf): An HDFS node spawns many threads, which count toward the kernel nproc limit. If nproc is not set appropriately, the node will be killed.
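
A sketch of applying these settings on an agent node (run as root, and adapt to your distribution; some kernels reject unlimited for nofile, in which case use a large value such as 65536):

# /etc/sysctl.conf
$ echo "vm.swappiness = 0" >> /etc/sysctl.conf && sysctl -p

# /etc/security/limits.conf
$ echo "* soft nofile unlimited" >> /etc/security/limits.conf
$ echo "* hard nofile unlimited" >> /etc/security/limits.conf

# /etc/security/limits.d/90-nproc.conf
$ echo "* soft nproc 32768" >> /etc/security/limits.d/90-nproc.conf
$ echo "* hard nproc 32768" >> /etc/security/limits.d/90-nproc.conf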

Connecting Clients

Applications interface with HDFS like they would with any POSIX file system. However, applications that will act as client nodes of the HDFS deployment require hdfs-site.xml and core-site.xml files that provide the configuration information necessary to communicate with the cluster.

Connection Info Using the DC/OS CLI

Execute the following commands from the DC/OS CLI to retrieve the hdfs-site.xml and core-site.xml files that client applications can use to connect to the cluster.

$ dcos hdfs --name=<service-name> endpoints hdfs-site.xml
...
$ dcos hdfs --name=<service-name> endpoints core-site.xml
...

Connection Info Response

The responses will be similar to the following.

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.nameservice.id</name>
        <value>hdfs</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>hdfs</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.hdfs</name>
        <value>name-0-node,name-1-node</value>
    </property>

    <!-- namenode -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://journal-0-node.hdfs.autoip.dcos.thisdcos.directory:8485;journal-1-node.hdfs.autoip.dcos.thisdcos.directory:8485;journal-2-node.hdfs.autoip.dcos.thisdcos.directory:8485/hdfs</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/name-data</value>
    </property>
    <property>
        <name>dfs.namenode.safemode.threshold-pct</name>
        <value>0.9</value>
    </property>
    <property>
        <name>dfs.namenode.heartbeat.recheck-interval</name>
        <value>60000</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>20</value>
    </property>
    <property>
        <name>dfs.namenode.invalidate.work.pct.per.iteration</name>
        <value>0.95</value>
    </property>
    <property>
        <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
        <value>4</value>
    </property>
    <property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
    </property>


    <!-- name-0-node -->
    <property>
        <name>dfs.namenode.rpc-address.hdfs.name-0-node</name>
        <value>name-0-node.hdfs.autoip.dcos.thisdcos.directory:9001</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdfs.name-0-node</name>
        <value>name-0-node.hdfs.autoip.dcos.thisdcos.directory:9002</value>
    </property>
    <property>
        <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name>
        <value>0.0.0.0</value>
    </property>


    <!-- name-1-node -->
    <property>
        <name>dfs.namenode.rpc-address.hdfs.name-1-node</name>
        <value>name-1-node.hdfs.autoip.dcos.thisdcos.directory:9001</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdfs.name-1-node</name>
        <value>name-1-node.hdfs.autoip.dcos.thisdcos.directory:9002</value>
    </property>
    <property>
        <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name>
        <value>0.0.0.0</value>
    </property>

    <!-- journalnode -->
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/journal-data</value>
    </property>

    <!-- datanode -->
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:9003</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:9004</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:9005</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data-data</value>
    </property>
    <property>
        <name>dfs.datanode.balance.bandwidthPerSec</name>
        <value>41943040</value>
    </property>
    <property>
        <name>dfs.datanode.handler.count</name>
        <value>10</value>
    </property>

    <!-- HA -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master.mesos:2181</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/bin/true)</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>


    <property>
        <name>dfs.image.compress</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.image.compression.codec</name>
        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit.streams.cache.size</name>
        <value>1000</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name>
        <value>1000</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.hdfs</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.domain.socket.path</name>
        <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

</configuration>

core-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hdfs</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.httpfs.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.httpfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>ha.zookeeper.parent-znode</name>
        <value>/dcos-service-hdfs/hadoop-ha</value>
    </property>

</configuration>

DNS names used in this configuration file will remain accurate even if nodes in the HDFS cluster are moved to different agent nodes.
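
For example (a sketch, assuming a Hadoop client is installed on a machine that can resolve the cluster's DNS names), you can save both files into a configuration directory and point the client at it:

$ mkdir -p hdfs-conf
$ dcos hdfs --name=<service-name> endpoints hdfs-site.xml > hdfs-conf/hdfs-site.xml
$ dcos hdfs --name=<service-name> endpoints core-site.xml > hdfs-conf/core-site.xml
$ export HADOOP_CONF_DIR=$(pwd)/hdfs-conf
$ hdfs dfs -ls hdfs://hdfs/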

DC/OS Apache HDFS is a managed service that makes it easy to deploy and manage an HA Apache HDFS cluster on Mesosphere DC/OS. Apache HDFS (Hadoop Distributed File System) is an open source distributed file system based on Google's GFS (Google File System) paper. It provides a replicated, distributed file system for use with "big data" and "fast data" applications.

DC/OS HDFS offers the following benefits:

  • Easy installation
  • Multiple HDFS clusters
  • Elastic scaling of data nodes
  • Integrated monitoring

Features

DC/OS HDFS provides the following features:

  • Single-command installation for rapid provisioning
  • Persistent storage volumes for enhanced data durability
  • Runtime configuration and software updates for high availability
  • Health checks and metrics for monitoring
  • Distributed storage scale out
  • HA name service with Quorum Journaling and ZooKeeper failure detection.

Related Services

Install and Customize

HDFS is available in the Universe and can be installed by using either the web interface or the DC/OS CLI.

Prerequisites

  • Depending on your security mode in Enterprise DC/OS, you may need to provision a service account before installing HDFS. Only someone with superuser permission can create the service account.
    • In strict security mode, a service account is required.
    • In permissive security mode, a service account is optional.
    • In disabled security mode, no service account is required.
  • A minimum of five agent nodes with eight GiB of memory and ten GiB of disk available on each agent.
  • Each agent node must have these ports available: 8480, 8485, 9000, 9001, 9002, 9005, 9006, and 9007. You can spot-check port availability as shown below.
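
To spot-check that nothing is already bound to these ports on an agent, you can run something like the following (a sketch using ss; netstat -lnt works similarly):

$ sudo ss -lnt | grep -E ':(8480|8485|9000|9001|9002|9005|9006|9007) '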

Default Installation

Install HDFS from the DC/OS CLI.

$ dcos package install hdfs

This command creates a new HDFS cluster with two name nodes, three journal nodes, and five data nodes. Two clusters cannot share the same name. To install more than one HDFS cluster, customize the name at install time for each additional instance. See the Custom Installation section for more information.

The default installation may not be sufficient for a production deployment, but all cluster operations will work. This configuration is sufficient for development and testing purposes, and it can be scaled up to a production deployment.

Note: Alternatively, you can install HDFS from the DC/OS web interface. If you install HDFS from the web interface, you must install the HDFS DC/OS CLI subcommands separately. From the DC/OS CLI, enter:

dcos package install hdfs --cli

Custom Installation

If you are ready to ship into production, you will likely need to customize the deployment to suit the workload requirements of your application(s). Customize the default deployment by creating a JSON file, then pass it to dcos package install using the --options parameter.

Sample JSON options file named sample-hdfs.json:

{
    "data_node": {
        "count": 10
    }
}

The command below creates a cluster using sample-hdfs.json:

$ dcos package install --options=sample-hdfs.json hdfs

This cluster will have 10 data nodes instead of the default. See the Configuration section for a list of fields that can be customized via an options JSON file when the HDFS cluster is created.

Minimal Installation

Many of the other Infinity services currently support DC/OS Vagrant deployment. However, DC/OS HDFS currently only supports deployment with an HA name service managed by a Quorum Journal. The resource requirements for such a deployment make it prohibitive to install on a local development machine. The default deployment is the minimal safe deployment for a DC/OS HDFS cluster. Community contributions to support deployment of a non-HA cluster, e.g., a single name node and data node with no failure detector, would be welcome.

Multiple HDFS cluster Installation

Installing multiple HDFS clusters is identical to installing an HDFS cluster with a custom configuration, as described above. Use a JSON options file to specify a unique name for each installation:

$ cat hdfs1.json

{
   "service": {
       "name": "hdfs1"
   }
}

$ dcos package install hdfs --options=hdfs1.json

Use the --name argument after install time to specify which HDFS instance to query. All dcos hdfs CLI commands accept the --name argument. If you do not specify a service name, the CLI assumes the default value, hdfs.
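
For example, to retrieve connection information from the hdfs1 instance created above:

$ dcos hdfs --name=hdfs1 endpoints hdfs-site.xml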

Colocation

An individual HDFS deployment will colocate name nodes with journal nodes, but it will not colocate two name nodes or two journal nodes on the same agent node in the cluster. Data nodes may be colocated with both name nodes and journal nodes. If multiple clusters are installed, they may share the same agent nodes in the cluster provided that no ports specified in the service configurations conflict for those node types.

Installation Plan

When the DC/OS HDFS service is initially installed, it generates an installation plan as shown below.

{
	phases: [{
		id: "0c64b701-6b8b-440e-93e1-6c43b05abfc2",
		name: "jn-deploy",
		steps: [{
			id: "ba608f90-32c6-42e0-93c7-a6b51fc2e52b",
			status: "COMPLETE",
			name: "journal-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-0:[node] [ba608f90-32c6-42e0-93c7-a6b51fc2e52b]' has status: 'COMPLETE'."
		}, {
			id: "f29c13d8-a477-4e2b-84ac-046cd8a7d283",
			status: "COMPLETE",
			name: "journal-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-1:[node] [f29c13d8-a477-4e2b-84ac-046cd8a7d283]' has status: 'COMPLETE'."
		}, {
			id: "75da5719-9296-4872-887a-cc1ab157191b",
			status: "COMPLETE",
			name: "journal-2:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'journal-2:[node] [75da5719-9296-4872-887a-cc1ab157191b]' has status: 'COMPLETE'."
		}],
		status: "COMPLETE"
	}, {
		id: "967aadc8-ef25-402e-b81a-9fcab0e57463",
		name: "nn-deploy",
		steps: [{
			id: "33d02aa7-0927-428b-82d7-cec93cfde090",
			status: "COMPLETE",
			name: "name-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[format] [33d02aa7-0927-428b-82d7-cec93cfde090]' has status: 'COMPLETE'."
		}, {
			id: "62f048f4-7b6c-4ea0-8509-82b328d40e61",
			status: "STARTING",
			name: "name-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-0:[node] [62f048f4-7b6c-4ea0-8509-82b328d40e61]' has status: 'STARTING'."
		}, {
			id: "e7b8aa27-39b2-4aba-9d87-3b47c752878f",
			status: "PENDING",
			name: "name-1:[bootstrap]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[bootstrap] [e7b8aa27-39b2-4aba-9d87-3b47c752878f]' has status: 'PENDING'."
		}, {
			id: "cd633e0d-6aac-45a9-b764-296676fc228d",
			status: "PENDING",
			name: "name-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'name-1:[node] [cd633e0d-6aac-45a9-b764-296676fc228d]' has status: 'PENDING'."
		}],
		status: "IN_PROGRESS"
	}, {
		id: "248fa719-dc57-4ef9-9c48-5e6e1218b6c2",
		name: "zkfc-deploy",
		steps: [{
			id: "812ae290-e6a4-4128-b486-7288e855bfe6",
			status: "PENDING",
			name: "zkfc-0:[format]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[format] [812ae290-e6a4-4128-b486-7288e855bfe6]' has status: 'PENDING'."
		}, {
			id: "8a401585-a5af-4540-94f0-dada95093329",
			status: "PENDING",
			name: "zkfc-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-0:[node] [8a401585-a5af-4540-94f0-dada95093329]' has status: 'PENDING'."
		}, {
			id: "7eabc15d-7feb-4546-8699-0fadac1f303b",
			status: "PENDING",
			name: "zkfc-1:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'zkfc-1:[node] [7eabc15d-7feb-4546-8699-0fadac1f303b]' has status: 'PENDING'."
		}],
		status: "PENDING"
	}, {
		id: "ddb20a39-830b-417b-a901-5b9027e459f7",
		name: "dn-deploy",
		steps: [{
			id: "018deb17-8853-4d3b-820c-6b2caa55743f",
			status: "PENDING",
			name: "data-0:[node]",
			message: "com.mesosphere.sdk.scheduler.plan.DeploymentStep: 'data-0:[node] [018deb17-8853-4d3b-820c-6b2caa55743f]' has status: 'PENDING'."
		}],
		status: "PENDING"
	}],
	errors: [],
	status: "IN_PROGRESS"
}

Viewing the Installation Plan

The plan can be viewed from the API via the REST endpoint. A curl example is provided below. See the REST API Authentication part of the REST API Reference section for information on how this request must be authenticated.

curl -v -H "Authorization: token=$(dcos config show core.dcos_acs_token)" http://<dcos_url>/service/hdfs/v1/plans/deploy

Plan Errors

The plan will display any errors that prevent installation in the errors list. The presence of any error indicates that the installation cannot progress. See the Troubleshooting section for information on resolving errors.

Quorum Journal

The first phase of the installation is the Quorum Journal phase. This phase will deploy three journal nodes to provide a Quorum Journal for the HA name service. Each step in the phase represents an individual journal node.

Name Service

The second phase of the installation is the deployment of the HA name service. This phase deploys two name nodes; format and bootstrap operations occur as necessary.

ZKFC

The third phase of the installation is deployment of the ZKFC nodes. This phase deploys two ZKFC nodes to enable ZooKeeper failure detection. Each step represents an individual ZKFC node, and there are always exactly two.

Distributed Storage

The final phase of the installation is deployment of the distributed storage service. This phase deploys the data nodes that are configured to act as storage for the cluster. The number of data nodes can be reconfigured post installation.

Pausing Installation

To pause installation, issue a REST API request as shown below. The installation will pause after completing installation of the current node and wait for user input.

curl -v -H "Authorization: token=$(dcos config show core.dcos_acs_token)" -X POST http://<dcos_url>/service/hdfs/v1/plans/deploy/interrupt

Resuming Installation

If the installation has been paused, the REST API request below will resume installation at the next pending node.

curl -v -H "Authorization: token=$(dcos config show core.dcos_acs_token)" -X POST http://<dcos_url>/service/hdfs/v1/plans/deploy/continue

Overlay networks

HDFS supports deployment on the dcos overlay network, a virtual network on DC/OS that allows each container to have its own IP address and avoids using port resources on the agent. Specify the overlay network by passing the following configuration during installation:

{
    "service": {
        "virtual_network": true
    }
}
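
For example, save the snippet above as overlay.json and pass it at install time:

$ dcos package install hdfs --options=overlay.json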

As mentioned in the developer guide, once the service is deployed on the overlay network, it cannot be updated to use the host network.

Limitations

Overlay networks

When HDFS is deployed on the dcos overlay network, the configuration cannot be updated to use the host network.

Managing

Add a Data Node

Increase the DATA_COUNT value from the DC/OS dashboard, as described in the Changing Configuration at Runtime section. This creates an update plan as described in that section. An additional node will be added as the last step of that plan.

Node Info

Comprehensive information is available about every node. To list all nodes:

dcos hdfs --name=<service-name> pods list

Result:

[
  "data-0",
  "data-1",
  "data-2",
  "journal-0",
  "journal-1",
  "journal-2",
  "name-0",
  "name-1",
  "zkfc-0",
  "zkfc-1"
]

To view information about a node, run the following command from the CLI.

$ dcos hdfs --name=<service-name> pods info <node-id>

For example:

$ dcos hdfs pods info journal-0

Result:

[
  {
    "info": {
      "name": "journal-0-node",
      "taskId": {
        "value": "journal-0-node__b31a70f4-73c5-4065-990c-76c0c704b8e4"
      },
      "slaveId": {
        "value": "0060634a-aa2b-4fcc-afa6-5569716b533a-S5"
      },
      "resources": [
        {
          "name": "cpus",
          "type": "SCALAR",
          "scalar": {
            "value": 0.3
          },
          "ranges": null,
          "set": null,
          "role": "hdfs-role",
          "reservation": {
            "principal": "hdfs-principal",
            "labels": {
              "labels": [
                {
                  "key": "resource_id",
                  "value": "4208f1ea-586f-4157-81fd-dfa0877e7472"
                }
              ]
            }
          },
          "disk": null,
          "revocable": null,
          "shared": null
        },
        {
          "name": "mem",
          "type": "SCALAR",
          "scalar": {
            "value": 512.0
          },
          "ranges": null,
          "set": null,
          "role": "hdfs-role",
          "reservation": {
            "principal": "hdfs-principal",
            "labels": {
              "labels": [
                {
                  "key": "resource_id",
                  "value": "a0be3c2c-3c7c-47ad-baa9-be81fb5d5f2e"
                }
              ]
            }
          },
          "disk": null,
          "revocable": null,
          "shared": null
        },
        {
          "name": "ports",
          "type": "RANGES",
          "scalar": null,
          "ranges": {
            "range": [
              {
                "begin": 8480,
                "end": 8480
              },
              {
                "begin": 8485,
                "end": 8485
              }
            ]
          },
          "set": null,
          "role": "hdfs-role",
          "reservation": {
            "principal": "hdfs-principal",
            "labels": {
              "labels": [
                {
                  "key": "resource_id",
                  "value": "d50b3deb-97c7-4960-89e5-ac4e508e4564"
                }
              ]
            }
          },
          "disk": null,
          "revocable": null,
          "shared": null
        },
        {
          "name": "disk",
          "type": "SCALAR",
          "scalar": {
            "value": 5000.0
          },
          "ranges": null,
          "set": null,
          "role": "hdfs-role",
          "reservation": {
            "principal": "hdfs-principal",
            "labels": {
              "labels": [
                {
                  "key": "resource_id",
                  "value": "3e624468-11fb-4fcf-9e67-ddb883b1718e"
                }
              ]
            }
          },
          "disk": {
            "persistence": {
              "id": "6bf7fcf1-ccdf-41a3-87ba-459162da1f03",
              "principal": "hdfs-principal"
            },
            "volume": {
              "mode": "RW",
              "containerPath": "journal-data",
              "hostPath": null,
              "image": null,
              "source": null
            },
            "source": null
          },
          "revocable": null,
          "shared": null
        }
      ],
      "executor": {
        "type": null,
        "executorId": {
          "value": "journal__e42893b5-9d96-4dfb-8e85-8360d483a122"
        },
        "frameworkId": null,
        "command": {
          "uris": [
            {
              "value": "https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/executor.zip",
              "executable": null,
              "extract": null,
              "cache": null,
              "outputFile": null
            },
            {
              "value": "https://downloads.mesosphere.com/libmesos-bundle/libmesos-bundle-1.9-argus-1.1.x-2.tar.gz",
              "executable": null,
              "extract": null,
              "cache": null,
              "outputFile": null
            },
            {
              "value": "https://downloads.mesosphere.com/java/jre-8u112-linux-x64-jce-unlimited.tar.gz",
              "executable": null,
              "extract": null,
              "cache": null,
              "outputFile": null
            },
            {
              "value": "https://downloads.mesosphere.com/hdfs/assets/hadoop-2.6.0-cdh5.9.1-dcos.tar.gz",
              "executable": null,
              "extract": null,
              "cache": null,
              "outputFile": null
            },
            {
              "value": "https://downloads.mesosphere.com/hdfs/assets/1.0.0-2.6.0/bootstrap.zip",
              "executable": null,
              "extract": null,
              "cache": null,
              "outputFile": null
            },
            {
              "value": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/artifacts/template/25f791d8-4d42-458f-84fb-9d82842ffb3e/journal/node/core-site",
              "executable": null,
              "extract": false,
              "cache": null,
              "outputFile": "config-templates/core-site"
            },
            {
              "value": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/artifacts/template/25f791d8-4d42-458f-84fb-9d82842ffb3e/journal/node/hdfs-site",
              "executable": null,
              "extract": false,
              "cache": null,
              "outputFile": "config-templates/hdfs-site"
            },
            {
              "value": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/artifacts/template/25f791d8-4d42-458f-84fb-9d82842ffb3e/journal/node/hadoop-metrics2",
              "executable": null,
              "extract": false,
              "cache": null,
              "outputFile": "config-templates/hadoop-metrics2"
            }
          ],
          "environment": null,
          "shell": null,
          "value": "export LD_LIBRARY_PATH=$MESOS_SANDBOX/libmesos-bundle/lib:$LD_LIBRARY_PATH && export MESOS_NATIVE_JAVA_LIBRARY=$(ls $MESOS_SANDBOX/libmesos-bundle/lib/libmesos-*.so) && export JAVA_HOME=$(ls -d $MESOS_SANDBOX/jre*/) && ./executor/bin/executor",
          "arguments": [],
          "user": null
        },
        "container": null,
        "resources": [],
        "name": "journal",
        "source": null,
        "data": null,
        "discovery": null,
        "shutdownGracePeriod": null,
        "labels": null
      },
      "command": {
        "uris": [],
        "environment": {
          "variables": [
            {
              "name": "PERMISSIONS_ENABLED",
              "value": "false"
            },
            {
              "name": "DATA_NODE_BALANCE_BANDWIDTH_PER_SEC",
              "value": "41943040"
            },
            {
              "name": "NAME_NODE_HANDLER_COUNT",
              "value": "20"
            },
            {
              "name": "CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE",
              "value": "1000"
            },
            {
              "name": "HADOOP_ROOT_LOGGER",
              "value": "INFO,console"
            },
            {
              "name": "HA_FENCING_METHODS",
              "value": "shell(/bin/true)"
            },
            {
              "name": "SERVICE_ZK_ROOT",
              "value": "dcos-service-hdfs"
            },
            {
              "name": "HADOOP_PROXYUSER_HUE_GROUPS",
              "value": "*"
            },
            {
              "name": "NAME_NODE_HEARTBEAT_RECHECK_INTERVAL",
              "value": "60000"
            },
            {
              "name": "HADOOP_PROXYUSER_HUE_HOSTS",
              "value": "*"
            },
            {
              "name": "CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_EXPIRY_MS",
              "value": "1000"
            },
            {
              "name": "JOURNAL_NODE_RPC_PORT",
              "value": "8485"
            },
            {
              "name": "CLIENT_FAILOVER_PROXY_PROVIDER_HDFS",
              "value": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
            },
            {
              "name": "DATA_NODE_HANDLER_COUNT",
              "value": "10"
            },
            {
              "name": "HA_AUTOMATIC_FAILURE",
              "value": "true"
            },
            {
              "name": "JOURNALNODE",
              "value": "true"
            },
            {
              "name": "NAME_NODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION",
              "value": "4"
            },
            {
              "name": "HADOOP_PROXYUSER_HTTPFS_HOSTS",
              "value": "*"
            },
            {
              "name": "POD_INSTANCE_INDEX",
              "value": "0"
            },
            {
              "name": "DATA_NODE_IPC_PORT",
              "value": "9005"
            },
            {
              "name": "JOURNAL_NODE_HTTP_PORT",
              "value": "8480"
            },
            {
              "name": "NAME_NODE_DATA_NODE_REGISTRATION_IP_HOSTNAME_CHECK",
              "value": "false"
            },
            {
              "name": "TASK_USER",
              "value": "root"
            },
            {
              "name": "journal-0-node",
              "value": "true"
            },
            {
              "name": "HADOOP_PROXYUSER_ROOT_GROUPS",
              "value": "*"
            },
            {
              "name": "TASK_NAME",
              "value": "journal-0-node"
            },
            {
              "name": "HADOOP_PROXYUSER_ROOT_HOSTS",
              "value": "*"
            },
            {
              "name": "IMAGE_COMPRESS",
              "value": "true"
            },
            {
              "name": "CLIENT_READ_SHORTCIRCUIT",
              "value": "true"
            },
            {
              "name": "FRAMEWORK_NAME",
              "value": "hdfs"
            },
            {
              "name": "IMAGE_COMPRESSION_CODEC",
              "value": "org.apache.hadoop.io.compress.SnappyCodec"
            },
            {
              "name": "NAME_NODE_SAFEMODE_THRESHOLD_PCT",
              "value": "0.9"
            },
            {
              "name": "NAME_NODE_INVALIDATE_WORK_PCT_PER_ITERATION",
              "value": "0.95"
            },
            {
              "name": "HADOOP_PROXYUSER_HTTPFS_GROUPS",
              "value": "*"
            },
            {
              "name": "CLIENT_READ_SHORTCIRCUIT_PATH",
              "value": "/var/lib/hadoop-hdfs/dn_socket"
            },
            {
              "name": "DATA_NODE_HTTP_PORT",
              "value": "9004"
            },
            {
              "name": "DATA_NODE_RPC_PORT",
              "value": "9003"
            },
            {
              "name": "NAME_NODE_HTTP_PORT",
              "value": "9002"
            },
            {
              "name": "NAME_NODE_RPC_PORT",
              "value": "9001"
            },
            {
              "name": "CONFIG_TEMPLATE_CORE_SITE",
              "value": "config-templates/core-site,hadoop-2.6.0-cdh5.9.1/etc/hadoop/core-site.xml"
            },
            {
              "name": "CONFIG_TEMPLATE_HDFS_SITE",
              "value": "config-templates/hdfs-site,hadoop-2.6.0-cdh5.9.1/etc/hadoop/hdfs-site.xml"
            },
            {
              "name": "CONFIG_TEMPLATE_HADOOP_METRICS2",
              "value": "config-templates/hadoop-metrics2,hadoop-2.6.0-cdh5.9.1/etc/hadoop/hadoop-metrics2.properties"
            },
            {
              "name": "PORT_JOURNAL_RPC",
              "value": "8485"
            },
            {
              "name": "PORT_JOURNAL_HTTP",
              "value": "8480"
            }
          ]
        },
        "shell": null,
        "value": "./bootstrap && ./hadoop-2.6.0-cdh5.9.1/bin/hdfs journalnode",
        "arguments": [],
        "user": null
      },
      "container": null,
      "healthCheck": null,
      "killPolicy": null,
      "data": null,
      "labels": {
        "labels": [
          {
            "key": "goal_state",
            "value": "RUNNING"
          },
          {
            "key": "offer_attributes",
            "value": ""
          },
          {
            "key": "task_type",
            "value": "journal"
          },
          {
            "key": "index",
            "value": "0"
          },
          {
            "key": "offer_hostname",
            "value": "10.0.1.23"
          },
          {
            "key": "target_configuration",
            "value": "4bdb3f97-96b0-4e78-8d47-f39edc33f6e3"
          }
        ]
      },
      "discovery": null
    },
    "status": {
      "taskId": {
        "value": "journal-0-node__b31a70f4-73c5-4065-990c-76c0c704b8e4"
      },
      "state": "TASK_RUNNING",
      "message": "Reconciliation: Latest task state",
      "source": "SOURCE_MASTER",
      "reason": "REASON_RECONCILIATION",
      "data": null,
      "slaveId": {
        "value": "0060634a-aa2b-4fcc-afa6-5569716b533a-S5"
      },
      "executorId": null,
      "timestamp": 1.486694618923135E9,
      "uuid": null,
      "healthy": null,
      "labels": null,
      "containerStatus": {
        "containerId": {
          "value": "a4c8433f-2648-4ba7-a8b8-5fe5df20e8af",
          "parent": null
        },
        "networkInfos": [
          {
            "ipAddresses": [
              {
                "protocol": null,
                "ipAddress": "10.0.1.23"
              }
            ],
            "name": null,
            "groups": [],
            "labels": null,
            "portMappings": []
          }
        ],
        "cgroupInfo": null,
        "executorPid": 5594
      },
      "unreachableTime": null
    }
  }
]

Node Status

Similarly, the status for any node may also be queried.

$ dcos hdfs --name=<service-name> pods status <node-id>

For example:

$ dcos hdfs pods status journal-0
[
  {
    "name": "journal-0-node",
    "id": "journal-0-node__b31a70f4-73c5-4065-990c-76c0c704b8e4",
    "state": "TASK_RUNNING"
  }
]
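
If you do not know a node's ID, you can list the service's pods first and then query the one you want. A minimal sketch, assuming the standard pods list subcommand and a service installed under the default name hdfs:

$ dcos hdfs --name=hdfs pods list
$ dcos hdfs --name=hdfs pods status journal-0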

Quick Start

  1. Perform a default installation of HDFS by following the instructions in the Install and Customize section of this topic. Note: Your cluster must have a minimum of five agent nodes, each with 8 GiB of memory and 10 GiB of disk available.

  2. SSH into a DC/OS node.

    $ dcos node ssh --leader --master-proxy
  3. Run the Hadoop client.

    $ docker run -it mesosphere/hdfs-client:1.0.0-2.6.0 bash

    By default, the client is configured to connect to an HDFS service named hdfs, so no further client configuration is required.

    $ ./bin/hdfs dfs -ls /

    If the HDFS cluster was installed under a name other than the default hdfs, you must configure the client before use.

    $ HDFS_SERVICE_NAME=<hdfs-alternate-name> /configure-hdfs.sh
    $ ./bin/hdfs dfs -ls /
  4. To configure other clients, return to the DC/OS CLI. Retrieve the hdfs-site.xml and core-site.xml files with the dcos hdfs endpoints command, passing the name of the file to retrieve as its argument:

    $ dcos hdfs endpoints hdfs-site.xml
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
        <property>
            <name>dfs.nameservice.id</name>
            <value>hdfs</value>
        </property>
        <property>
            <name>dfs.nameservices</name>
            <value>hdfs</value>
        </property>
        <property>
            <name>dfs.ha.namenodes.hdfs</name>
            <value>name-0-node,name-1-node</value>
        </property>
    
        <!-- namenode -->
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://journal-0-node.hdfs.autoip.dcos.thisdcos.directory:8485;journal-1-node.hdfs.autoip.dcos.thisdcos.directory:8485;journal-2-node.hdfs.autoip.dcos.thisdcos.directory:8485/hdfs</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/name-data</value>
        </property>
        <property>
            <name>dfs.namenode.safemode.threshold-pct</name>
            <value>0.9</value>
        </property>
        <property>
            <name>dfs.namenode.heartbeat.recheck-interval</name>
            <value>60000</value>
        </property>
        <property>
            <name>dfs.namenode.handler.count</name>
            <value>20</value>
        </property>
        <property>
            <name>dfs.namenode.invalidate.work.pct.per.iteration</name>
            <value>0.95</value>
        </property>
        <property>
            <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
            <value>4</value>
        </property>
        <property>
            <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
            <value>false</value>
        </property>
    
    
        <!-- name-0-node -->
        <property>
            <name>dfs.namenode.rpc-address.hdfs.name-0-node</name>
            <value>name-0-node.hdfs.autoip.dcos.thisdcos.directory:9001</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-bind-host.hdfs.name-0-node</name>
            <value>0.0.0.0</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.hdfs.name-0-node</name>
            <value>name-0-node.hdfs.autoip.dcos.thisdcos.directory:9002</value>
        </property>
        <property>
            <name>dfs.namenode.http-bind-host.hdfs.name-0-node</name>
            <value>0.0.0.0</value>
        </property>
    
    
        <!-- name-1-node -->
        <property>
            <name>dfs.namenode.rpc-address.hdfs.name-1-node</name>
            <value>name-1-node.hdfs.autoip.dcos.thisdcos.directory:9001</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-bind-host.hdfs.name-1-node</name>
            <value>0.0.0.0</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.hdfs.name-1-node</name>
            <value>name-1-node.hdfs.autoip.dcos.thisdcos.directory:9002</value>
        </property>
        <property>
            <name>dfs.namenode.http-bind-host.hdfs.name-1-node</name>
            <value>0.0.0.0</value>
        </property>
    
        <!-- journalnode -->
        <property>
            <name>dfs.journalnode.rpc-address</name>
            <value>0.0.0.0:8485</value>
        </property>
        <property>
            <name>dfs.journalnode.http-address</name>
            <value>0.0.0.0:8480</value>
        </property>
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/journal-data</value>
        </property>
    
        <!-- datanode -->
        <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:9003</value>
        </property>
        <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:9004</value>
        </property>
        <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:9005</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/data-data</value>
        </property>
        <property>
            <name>dfs.datanode.balance.bandwidthPerSec</name>
            <value>41943040</value>
        </property>
        <property>
            <name>dfs.datanode.handler.count</name>
            <value>10</value>
        </property>
    
        <!-- HA -->
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>master.mesos:2181</value>
        </property>
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>shell(/bin/true)</value>
        </property>
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
    
    
        <property>
            <name>dfs.image.compress</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.image.compression.codec</name>
            <value>org.apache.hadoop.io.compress.SnappyCodec</value>
        </property>
        <property>
            <name>dfs.client.read.shortcircuit</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.client.read.shortcircuit.streams.cache.size</name>
            <value>1000</value>
        </property>
        <property>
            <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name>
            <value>1000</value>
        </property>
        <property>
            <name>dfs.client.failover.proxy.provider.hdfs</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
            <name>dfs.domain.socket.path</name>
            <value>/var/lib/hadoop-hdfs/dn_socket</value>
        </property>
        <property>
            <name>dfs.permissions.enabled</name>
            <value>false</value>
        </property>
    
    </configuration>
    $ dcos hdfs endpoints core-site.xml
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://hdfs</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hue.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hue.groups</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.root.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.root.groups</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.httpfs.groups</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.httpfs.hosts</name>
            <value>*</value>
        </property>
        <property>
            <name>ha.zookeeper.parent-znode</name>
            <value>/dcos-service-hdfs/hadoop-ha</value>
        </property>
    
    </configuration>

    These commands return XML files that can be used by clients of the HDFS cluster. A sketch of wiring them into a local client follows below.
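
For example, a Hadoop client running outside the client Docker image can be pointed at the cluster by saving both files into its configuration directory. A minimal sketch, assuming a local Hadoop installation rooted at /opt/hadoop (the path is illustrative):

$ dcos hdfs endpoints hdfs-site.xml > /opt/hadoop/etc/hadoop/hdfs-site.xml
$ dcos hdfs endpoints core-site.xml > /opt/hadoop/etc/hadoop/core-site.xml
$ /opt/hadoop/bin/hdfs dfs -ls /

The client must be able to resolve the *.autoip.dcos.thisdcos.directory hostnames in these files, so it should run from inside the DC/OS cluster or behind a DNS setup that forwards those names.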

Troubleshooting

Replacing a Permanently Failed Node

The DC/OS HDFS Service is resilient to temporary node failures. However, if a DC/OS agent hosting an HDFS node is permanently lost, manual intervention is required to replace the failed node. Use the following command to replace the node that resided on the failed agent.

$ dcos hdfs --name=<service-name> pods replace <node_id>
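
For example, to replace the first journal node of a service installed under the default name (the node ID journal-0 is illustrative):

$ dcos hdfs --name=hdfs pods replace journal-0

Note that replace relaunches the node on a new agent with empty persistent volumes, so the replacement must resynchronize its data from the rest of the cluster.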

Restarting a Node

If you must forcibly restart a node, use the following command to restart it on the same agent where it currently resides. This will not result in an outage or data loss.

$ dcos hdfs --name=<service-name> pods restart <node_id>
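
For example (again, the node ID is illustrative):

$ dcos hdfs --name=hdfs pods restart data-1

Unlike replace, restart relaunches the node in place and keeps its persistent volumes, which is why no outage or data loss results.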

Uninstall

Uninstalling the service is straightforward. Replace hdfs with the name of the HDFS instance to be uninstalled.

$ dcos package uninstall --app-id=hdfs

Note: Alternatively, you can uninstall HDFS from the DC/OS GUI.

Then, use the framework cleaner script to remove your HDFS instance from ZooKeeper and destroy all data associated with it. The script requires several arguments. The default values are:

  • framework_role is hdfs-role.
  • framework_principal is hdfs-principal.
  • zk_path is dcos-service-<service-name>.

These values may vary if you customized them during installation.
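
A typical invocation, assuming the default values above and the standard mesosphere/janitor image, run from the leading master:

$ dcos node ssh --master-proxy --leader
$ docker run mesosphere/janitor /janitor.py -r hdfs-role -p hdfs-principal -z dcos-service-hdfs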
