Created Jul 20, 2018
$ docker swarm init --advertise-addr
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
$ docker info
Swarm: active
NodeID: jpnzybmz6ckw3g9w3xad622jb
Is Manager: true
ClusterID: b1iyif97xgi88iv3uq479whhr
Managers: 1
Nodes: 5
$ docker network create --driver overlay spark
$ docker service create --network spark --name spark_master -p 8888:8888 -p 6066:6066 -p 7077:7077 -p 8080:8080 adoji/spark-master:1.0.0
$ docker service create --network spark --replicas 5 --name spark_worker -p 8081:8080 gettyimages/spark:2.3.1-hadoop-3.0 bin/spark-class org.apache.spark.deploy.worker.Worker spark://
FROM gettyimages/spark:2.3.1-hadoop-3.0
MAINTAINER Dong-jin Ahn <>
RUN echo "download zeppelin and install" && \
cd && \
curl --output zeppelin-0.8.0-bin-all.tgz && \
tar -xzvf zeppelin-0.8.0-bin-all.tgz && \
rm zeppelin-0.8.0-bin-all.tgz && \
cd zeppelin-0.8.0-bin-all/conf && \
curl -L > zeppelin-site.xml
CMD ["sh", "-c", "/root/zeppelin-0.8.0-bin-all/bin/zeppelin-daemon.sh start && bin/spark-class org.apache.spark.deploy.master.Master"]
val a = sc.parallelize(0 until Integer.MAX_VALUE)
name: spark
root: ~/
windows:
  - all:
      panes:
        - ssh spark1
        - ssh spark2
        - ssh spark3
        - ssh spark4
        - ssh spark5
  - master:
      panes:
        - ssh spark5
  - workers:
      panes:
        - ssh spark1
        - ssh spark2
        - ssh spark3
        - ssh spark4