- All nodes have 2 CPU cores and 64GB of disk storage. The hardware or VM type needs to support CentOS 7.3.
- The install bootstrap node needs 4GB of memory. Other node types might get by with 2GB, though the spec calls for more (32GB on the Master and Boot nodes, 16GB on the others), and 2GB is definitely cutting it close on the Master node.
- Each node is assumed to have a single NIC, all on a common subnet, though other configurations may work.
- Each node must have a hostname in DNS, with forward and reverse lookup working; DHCP is OK. A quick check sketch follows this list.
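A minimal per-node check of these prerequisites, assuming the example IPs used later in this guide; adjust names and thresholds to your environment:
nproc                                             # expect 2 or more cores
free -h | awk '/^Mem:/{print "memory:", $2}'      # 4GB on the bootstrap node, more elsewhere
df -h /                                           # confirm the node has its 64GB of disk
hostname -f                                       # the hostname this node believes it has
getent hosts "$(hostname -f)"                     # forward DNS lookup must resolve
getent hosts "$(hostname -I | awk '{print $1}')"  # reverse lookup of the primary IP must resolve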
This process is suitable for training and testing, but not for heavy workloads or enterprise-grade production deployments. It is specifically intended for "on premises" deployment to bare metal or a hypervisor. Easier deployment processes are available for running DC/OS in many of the popular public clouds. For production deployments, contacting Mesosphere Inc. for a subscription version, including security features and support, is recommended.
ScaleIO is a software-defined storage solution that provides block-based storage (what you want for high-performance stateful containerized apps such as databases) from commodity x86 servers. It can be deployed with DC/OS in a converged infrastructure, where ScaleIO is installed on the same nodes as the DC/OS agents which run containers. However, the process described below assumes a non-converged ScaleIO deployment is already in place. ScaleIO binaries are available for free download here. You will use only the client (SDC) package (EMC-ScaleIO-sdc-2.0-5014.0.el7.x86_64.rpm) in the process described.
- use the default CentOS disk format (xfs)
- enable IPv4
- set the timezone, with NTP (default)
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh root@192.168.1.21 "mkdir ~/.ssh && cat >> ~/.ssh/authorized_keys"
cat ~/.ssh/id_rsa.pub | ssh root@192.168.1.22 "mkdir ~/.ssh && cat >> ~/.ssh/authorized_keys"
cat ~/.ssh/id_rsa.pub | ssh root@192.168.1.23 "mkdir ~/.ssh && cat >> ~/.ssh/authorized_keys"
cat ~/.ssh/id_rsa.pub | ssh root@192.168.1.24 "mkdir ~/.ssh && cat >> ~/.ssh/authorized_keys"
As an option, a tool that supports multiple concurrent console sessions, such as tmux, could be useful for efficiently performing these steps that are common to multiple nodes.
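Alternatively, a simple loop can push the key to every node in one pass; a minimal sketch, assuming the four example IPs above:
for ip in 192.168.1.21 192.168.1.22 192.168.1.23 192.168.1.24; do
  # mkdir -p avoids an error if ~/.ssh already exists on the target
  cat ~/.ssh/id_rsa.pub | ssh root@${ip} "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
done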
Log in as root
visudo - uncomment the line
# %wheel ALL=(ALL) NOPASSWD: ALL
and comment out the other existing active %wheel line
adduser centos
passwd centos
usermod -aG wheel centos
usermod -aG docker centos
Copy the public key to the targets (all masters and agents) - substitute your actual IPs:
cat ~/.ssh/id_rsa.pub | ssh centos@192.168.1.21 "mkdir ~/.ssh && cat >> ~/.ssh/authorized_keys"
vi /etc/default/grub
Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX definition.
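A minimal sketch of the resulting line and the rebuild step needed for the change to take effect on the next reboot; the other kernel arguments shown are placeholders, keep whatever your file already contains (the grub.cfg path assumes a BIOS system; UEFI systems use a different path):
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet ipv6.disable=1"
sudo grub2-mkconfig -o /boot/grub2/grub.cfg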
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'
overlay
EOF
sudo sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config &&
sudo groupadd nogroup
reboot
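After the reboot, a quick sanity check of these settings might look like the sketch below (exact output will vary):
lsmod | grep overlay      # overlay module loaded via /etc/modules-load.d/overlay.conf
getenforce                # should report Permissive
ip a | grep inet6         # should print nothing once ipv6.disable=1 is active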
yum install -y nano ntp tar xz unzip curl ipset open-vm-tools nfs-utils yum-versionlock
chkconfig ntpd on
service ntpd restart
systemctl enable ntpd
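A quick, optional check that time synchronization is actually working (a '*' next to a peer in the ntpq output marks the selected time source):
ntpq -p        # list NTP peers and sync status
timedatectl    # should eventually show "NTP synchronized: yes"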
yum -y update
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
sudo mkdir -p /etc/systemd/system/docker.service.d && sudo tee /etc/systemd/system/docker.service.d/override.conf <<- EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --storage-driver=overlay
EOF
sudo yum install -y docker-engine-1.13.1 docker-engine-selinux-1.13.1
yum versionlock docker-engine docker-engine-selinux
sudo systemctl start docker
sudo systemctl enable docker
sudo docker ps
docker run hello-world
docker pull nginx:alpine
docker run -d --restart=unless-stopped -p 8081:80 -v /opt/dcos-setup/genconf/serve:/usr/share/nginx/html:ro --name=dcos-bootstrap-nginx nginx:alpine
curl -O https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh
sudo bash dcos_generate_config.sh --web
mkdir -p genconf
cat << 'EOF' > /root/genconf/ip-detect
#!/usr/bin/env bash
set -o nounset -o errexit -o pipefail
export PATH=/sbin:/usr/sbin:/bin:/usr/bin:$PATH
# Pick the local interface IP that routes toward MASTER_IP (8.8.8.8 is just a reachable default)
MASTER_IP=${MASTER_IP:-8.8.8.8}
INTERFACE_IP=$(ip r g ${MASTER_IP} | \
awk -v master_ip=${MASTER_IP} '
BEGIN { ec = 1 }
{
if($1 == master_ip) {
print $7
ec = 0
} else if($1 == "local") {
print $6
ec = 0
}
if (ec == 0) exit;
}
END { exit ec }
')
echo $INTERFACE_IP
EOF
(Option #1) Launch the DC/OS web installer in your browser at: http://<bootstrap-node-public-ip>:9000
- Run
bash dcos_generate_config.sh --web -v
- In browser, open
http://<bootstrap-node-public-ip>:9000
The web installer will compose a /root/genconf/config.yaml file, which drives the install process:
---
agent_list:
- 192.168.1.22
- 192.168.1.23
bootstrap_url: file:///opt/dcos_install_tmp
cluster_name: DC/OS
exhibitor_storage_backend: static
master_discovery: static
master_list:
- 192.168.1.21
oauth_enabled: false
process_timeout: 10000
public_agent_list:
- 192.168.1.24
resolvers:
- 192.168.1.1
ssh_port: 22
ssh_user: root
telemetry_enabled: false
Or (Option #2) manually compose a YAML file, like the example above, and invoke the CLI installer (recommended because issues are easier to troubleshoot):
Compose /root/genconf/config.yaml. See example above
The DC/OS installer will install a supported version of REX-Ray and "push" this configuration file to all cluster nodes. Substitute the actual IP of your ScaleIO gateway, your ScaleIO systemID and systemName, and your ScaleIO username and password.
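One way to hand this configuration to the installer (assuming the rexray_config installer parameter present in recent DC/OS releases; check the configuration reference for your version) is to embed the block below in /root/genconf/config.yaml, indented under rexray_config. A minimal sketch:
rexray_config:
  rexray:
    loglevel: info
    modules:
      default-admin:
        host: tcp://127.0.0.1:61003
      default-docker:
        disabled: false
        storageDrivers:
          - scaleio
  scaleio:
    endpoint: https://192.168.1.14/api
    insecure: true
    userName: admin
    password: Scaleio123!
    systemID: 5ecccbed13f5b
    systemName: tenantName
    protectionDomainName: default
    storagePoolName: default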
rexray:
loglevel: info
modules:
default-admin:
host: tcp://127.0.0.1:61003
default-docker:
disabled: false
storageDrivers:
- scaleio
scaleio:
endpoint: https://192.168.1.14/api
insecure: true
userName: admin
password: Scaleio123!
systemID: 5ecccbed13f5b
systemName: tenantName
protectionDomainName: default
storagePoolName: default
bash dcos_generate_config.sh --genconf
bash dcos_generate_config.sh --install-prereqs
bash dcos_generate_config.sh -v --preflight
bash dcos_generate_config.sh --deploy
bash dcos_generate_config.sh --postflight
Uninstall:
/opt/mesosphere/bin/pkgpanda uninstall && rm -fr /opt/mesosphere
yum install -y numactl libaio
yum localinstall -y EMC-ScaleIO-sdc-2.0-5014.0.el7.x86_64.rpm
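A quick, optional check that the SDC installed cleanly before pointing it at the MDMs (the scini module and service names are assumptions based on standard ScaleIO SDC packaging):
lsmod | grep scini        # ScaleIO data client kernel module should be loaded
systemctl status scini    # SDC service should be active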
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 192.168.1.11,192.168.1.12 --file /bin/emc/scaleio/drv_cfg.txt
scli --add_sdc --sdc_ip 192.168.1.x
Verify that the REX-Ray configuration has been installed:
cat /etc/rexray/config.yml:
rexray:
loglevel: info
modules:
default-admin:
host: tcp://127.0.0.1:61003
default-docker:
disabled: false
storageDrivers:
- scaleio
scaleio:
endpoint: https://192.168.1.14/api
insecure: true
userName: admin
password: Scaleio123!
systemID: 5ecccbed13f5b
systemName: tenantName
protectionDomainName: default
storagePoolName: default
Test operation of REX-Ray with ScaleIO:
/opt/emc/scaleio/sdc/bin/drv_cfg --rescan
/opt/mesosphere/bin/rexray version
/opt/mesosphere/bin/rexray env
/opt/mesosphere/bin/rexray volume ls
Substitute the IP of your DC/OS Master node and open this link in a browser:
http://192.168.1.21/#/dashboard
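As a final, optional end-to-end check of the ScaleIO volume path, the sketch below creates and removes a small volume through Docker on an agent node; the volume name, the 8GB size, and the availability of the rexray Docker volume driver are assumptions for illustration:
docker volume create --driver=rexray --name=rexray-test --opt=size=8
docker volume ls                      # the new volume should appear with driver "rexray"
/opt/mesosphere/bin/rexray volume ls  # the volume should also be visible to the REX-Ray CLI
docker volume rm rexray-test          # clean up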