@wshaddix
Last active December 31, 2017 17:50
#!/usr/bin/env bash
set -euo pipefail

RESOURCE_GROUP="dev-docker-swarm-us-east"
VNET_NAME="vnet-docker-swarm"
SUBNET_NAME="subnet-docker-swarm"
NSG_NAME="nsg-docker-swarm"
LOAD_BALANCER_NAME="load-balancer-swarm-cluster"
OS_IMAGE="Canonical:UbuntuServer:17.04:17.04.201711210"
VM_SIZE="Standard_B2s"
ADMIN_USERNAME="wshaddix"
AVAILABILITY_SET_NAME="availability-set-swarm-nodes"
# create a resource group
az group create -l eastus -n $RESOURCE_GROUP
# create a network security group
az network nsg create -g $RESOURCE_GROUP -n $NSG_NAME
# create a virtual network (10.0.0.0/16 — the subnet below must fall inside this range)
az network vnet create -g $RESOURCE_GROUP -n $VNET_NAME --address-prefix 10.0.0.0/16
# create a subnet
az network vnet subnet create -g $RESOURCE_GROUP -n $SUBNET_NAME --vnet-name $VNET_NAME --address-prefix 10.0.0.0/24 --network-security-group $NSG_NAME
# create a public ip address for the load balancer (front-end)
az network public-ip create -g $RESOURCE_GROUP -n $LOAD_BALANCER_NAME-ip
# create a load balancer
az network lb create -g $RESOURCE_GROUP -n $LOAD_BALANCER_NAME --public-ip-address $LOAD_BALANCER_NAME-ip --frontend-ip-name $LOAD_BALANCER_NAME-front-end --backend-pool-name $LOAD_BALANCER_NAME-back-end
# create a load balancer probe on port 80
az network lb probe create -g $RESOURCE_GROUP -n load-balancer-health-probe-80 --lb-name $LOAD_BALANCER_NAME --protocol tcp --port 80
# create a load balancer traffic rule for port 80
az network lb rule create -g $RESOURCE_GROUP -n load-balancer-traffic-rule-80 --lb-name $LOAD_BALANCER_NAME --protocol tcp --frontend-port 80 --backend-port 80 --frontend-ip-name $LOAD_BALANCER_NAME-front-end --backend-pool-name $LOAD_BALANCER_NAME-back-end --probe-name load-balancer-health-probe-80
# create three NAT rules for port 22 (so we can ssh to each of the three nodes via the load balancer's public ip address)
for i in $(seq 1 3); do
az network lb inbound-nat-rule create -g $RESOURCE_GROUP -n nat-rule-for-node-$i-ssh --lb-name $LOAD_BALANCER_NAME --protocol tcp --frontend-port 422$i --backend-port 22 --frontend-ip-name $LOAD_BALANCER_NAME-front-end
done
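# The loop above publishes one frontend port per node (4221, 4222, 4223), each
# forwarding to port 22 on the matching backend NIC. A throwaway sketch of the
# resulting mapping — illustration only, no Azure calls:

```shell
# Illustration only: the SSH NAT mapping created by the loop above.
for i in $(seq 1 3); do
  echo "lb-frontend:422$i -> node-$i:22"
done
```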
# allow port 22 (ssh) traffic into the network
az network nsg rule create -g $RESOURCE_GROUP -n allow-ssh --nsg-name $NSG_NAME --protocol Tcp --destination-port-ranges 22 --access Allow --description "Allow inbound ssh traffic" --priority 100
# allow port 80 (http) traffic into the network
az network nsg rule create -g $RESOURCE_GROUP -n allow-http --nsg-name $NSG_NAME --protocol Tcp --destination-port-ranges 80 --access Allow --description "Allow inbound http traffic" --priority 200
# create three virtual network cards and associate with the network security group and load balancer. bind each NIC to one of the ssh nat rules we created
for i in $(seq 1 3); do
az network nic create -g $RESOURCE_GROUP -n node-$i-private-nic --vnet-name $VNET_NAME --subnet $SUBNET_NAME --lb-name $LOAD_BALANCER_NAME --lb-address-pools $LOAD_BALANCER_NAME-back-end --lb-inbound-nat-rules nat-rule-for-node-$i-ssh
done
# create an availability set with 3 fault domains and 3 update domains
az vm availability-set create -g $RESOURCE_GROUP -n $AVAILABILITY_SET_NAME --platform-fault-domain-count 3 --platform-update-domain-count 3
# generate ssh keys
ssh-keygen -t rsa -f ~/.ssh/docker_rsa -N ""
# create three virtual machines
for i in $(seq 1 3); do
az vm create -g $RESOURCE_GROUP -n node-$i --ssh-key-value ~/.ssh/docker_rsa.pub --nics node-$i-private-nic --image $OS_IMAGE --size $VM_SIZE --authentication-type ssh --admin-username $ADMIN_USERNAME --availability-set $AVAILABILITY_SET_NAME --os-disk-name node-$i-os-disk
done
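# With the three VMs up, the swarm itself still has to be bootstrapped. One
# possible continuation (a sketch, not part of the original script, and not
# verified against a live subscription): initialize the swarm on node-1 over
# its NAT'd SSH port, then join nodes 2 and 3 as workers. The port arithmetic
# mirrors the 422$i NAT rules; the manager's private IP is looked up from its
# NIC rather than assumed.

```shell
# Sketch: bootstrap the swarm through the load balancer's SSH NAT rules.
RESOURCE_GROUP="dev-docker-swarm-us-east"
LOAD_BALANCER_NAME="load-balancer-swarm-cluster"
ADMIN_USERNAME="wshaddix"
ssh_port() { echo $((4220 + $1)); }   # node N is reachable on frontend port 422N

if command -v az >/dev/null 2>&1; then
  # public IP of the load balancer front-end
  LB_IP=$(az network public-ip show -g "$RESOURCE_GROUP" \
    -n "$LOAD_BALANCER_NAME-ip" --query ipAddress -o tsv)
  # private IP of node-1, which will be the swarm manager
  MANAGER_IP=$(az network nic show -g "$RESOURCE_GROUP" -n node-1-private-nic \
    --query "ipConfigurations[0].privateIpAddress" -o tsv)
  # initialize the swarm on node-1 and fetch the worker join token
  ssh -i ~/.ssh/docker_rsa -p "$(ssh_port 1)" "$ADMIN_USERNAME@$LB_IP" \
    "docker swarm init --advertise-addr $MANAGER_IP"
  TOKEN=$(ssh -i ~/.ssh/docker_rsa -p "$(ssh_port 1)" "$ADMIN_USERNAME@$LB_IP" \
    "docker swarm join-token -q worker")
  # join the remaining nodes as workers over their own NAT'd SSH ports
  for i in 2 3; do
    ssh -i ~/.ssh/docker_rsa -p "$(ssh_port $i)" "$ADMIN_USERNAME@$LB_IP" \
      "docker swarm join --token $TOKEN $MANAGER_IP:2377"
  done
fi
echo "$(ssh_port 1)"
```

# Swarm management traffic on 2377 stays inside the subnet, so the NSG rules
# above (22 and 80 only) do not need to change for this step.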