@goern
Created May 4, 2017 12:44
Hadoop HDFS on OpenShift: namenode and datanode deployed as StatefulSets behind headless services.
datanode.yaml:

```yaml
# A headless service to create DNS records.
apiVersion: v1
kind: Service
metadata:
  name: hdfs-datanode
  labels:
    app: hdfs-datanode
spec:
  ports:
  - port: 50010
    name: fs
  clusterIP: None
  selector:
    app: hdfs-datanode
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: hdfs-datanode
spec:
  serviceName: "hdfs-datanode"
  replicas: 1
  template:
    metadata:
      labels:
        app: hdfs-datanode
    spec:
      containers:
      - name: datanode
        image: uhopper/hadoop-datanode:2.7.2
        env:
        - name: CLUSTER_NAME
          value: hdfs-k8s
        - name: CORE_CONF_fs_defaultFS
          value: hdfs://hdfs-namenode-0.hdfs-namenode.hadoop.svc.cluster.local:8020
        ports:
        - containerPort: 50010
          name: fs
        volumeMounts:
        - name: hadoop-data
          mountPath: /hadoop/dfs/data
      restartPolicy: Always
      serviceAccount: hadoop
  volumeClaimTemplates:
  - metadata:
      name: hadoop-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
```
```
oc cluster up                                   # start a local OpenShift cluster
oc new-project hadoop                           # must be named "hadoop": the namespace is baked into fs.defaultFS
oc create sa hadoop                             # service account referenced by both StatefulSets
oc adm policy add-scc-to-user anyuid -z hadoop  # the Hadoop images need to run with a fixed UID
oc create -f namenode.yaml                      # bring up the namenode first
oc expose service hdfs-namenode --port=50070    # route to the namenode web UI
oc create -f datanode.yaml
```
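The datanode finds the namenode through the stable hostname that a StatefulSet pod gets from its headless service (`clusterIP: None`). As a minimal sketch (the variable names are illustrative, the pattern is the standard Kubernetes one), the address used in `CORE_CONF_fs_defaultFS` is derived like this:

```shell
# Pattern: <pod-name>.<headless-service>.<namespace>.svc.cluster.local
POD="hdfs-namenode-0"   # <statefulset-name>-<ordinal>; stable across restarts
SVC="hdfs-namenode"     # the headless service defined in namenode.yaml
NS="hadoop"             # the oc project created above
echo "hdfs://${POD}.${SVC}.${NS}.svc.cluster.local:8020"
```

This is why the project must be called `hadoop` and why renaming the service or StatefulSet requires updating `CORE_CONF_fs_defaultFS` in both manifests.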
namenode.yaml:

```yaml
# A headless service to create DNS records.
apiVersion: v1
kind: Service
metadata:
  name: hdfs-namenode
  labels:
    app: hdfs-namenode
spec:
  ports:
  - port: 8020
    name: fs
  - port: 50070
    name: namenode-web
  clusterIP: None
  selector:
    app: hdfs-namenode
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: hdfs-namenode
spec:
  serviceName: "hdfs-namenode"
  replicas: 1
  template:
    metadata:
      labels:
        app: hdfs-namenode
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: hdfs-namenode
        image: uhopper/hadoop-namenode:2.7.2
        env:
        - name: CLUSTER_NAME
          value: hdfs-k8s
        - name: CORE_CONF_fs_defaultFS
          value: hdfs://hdfs-namenode-0.hdfs-namenode.hadoop.svc.cluster.local:8020
        ports:
        - containerPort: 8020
          name: fs
        - containerPort: 50070
          name: namenode-web
        volumeMounts:
        - name: hadoop-data
          mountPath: /hadoop/dfs/data
      restartPolicy: Always
      serviceAccount: hadoop
  volumeClaimTemplates:
  - metadata:
      name: hadoop-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
```
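Datanodes scale horizontally: because the StatefulSet uses `volumeClaimTemplates`, each additional replica gets its own `hadoop-data` PVC and registers itself with the namenode. A sketch of the change to `datanode.yaml` (the replica count and an optional resource request are illustrative values, not requirements of these images):

```yaml
spec:
  serviceName: "hdfs-datanode"
  replicas: 3           # one PVC per pod: hadoop-data-hdfs-datanode-0, -1, -2
  template:
    spec:
      containers:
      - name: datanode
        resources:      # illustrative; size to your workload
          requests:
            memory: "1Gi"
```

The namenode, by contrast, should stay at `replicas: 1` with this setup, since the datanodes address it by the fixed `hdfs-namenode-0` hostname.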