This repository consists of three parts:
- NodeJS UDP Server
- Terraform Script
- Ansible playbook to deploy docker
The Terraform provisioning is completely isolated from the application and the Ansible playbook.
# Running Terraform

```sh
cd project_dir/deploy
# Set the variables in variables.tf before planning
terraform plan
terraform apply
```
Applying the plan creates the following resources:
- Elastic Container Registry
- Instance Profile
- Policy
- Security Group
- AWS Instance
- CloudWatch Alarm
It also outputs the EC2 instance DNS name, the ECR URL, and the Security Group ID.
# Building and Pushing the Docker Image

```sh
cd project_dir/app
# <DOCKER_REPO_URL> is the ECR URL from the Terraform output,
# e.g. docker build -t 450245534400.dkr.ecr.us-east-1.amazonaws.com/tenfold:latest .
docker build -t <DOCKER_REPO_URL>:latest .

# Check the built image
docker images

# Log in to ECR; aws ecr get-login prints a docker login command,
# copy and paste it to authenticate
aws ecr get-login
docker login -u AWS -p <password_string> <DOCKER_REPO_URL>

# Push the image to ECR
docker push <DOCKER_REPO_URL>:latest
```
# Running the Ansible Playbook

```sh
# Create a Python 3 virtual environment
virtualenv -p python3_path venv
# Activate the environment
source venv/bin/activate
# Install Ansible and the other requirements
pip install -r requirements.txt
```

Detailed instructions to install Ansible on various architectures are mentioned here
Configure your [hosts file]:

```
[remote]
# Replace with the EC2 instance DNS name from the Terraform output
192.168.1.1
```
The playbook has been tested on an Amazon EC2 instance.
Before running this playbook for the first time, authorize SSH key-based or password-based login on the remote host.
```sh
# Assuming your public key is already in authorized_keys on the remote host
ansible-playbook provision.yml
```
```sh
# To specify the remote user and private key explicitly, pass them on the command line
ansible-playbook provision.yml -u ec2-user --private-key=~/projects/instance_connect.pem
```
SSH to the machine using the private key and follow the application logs with:

```sh
docker logs --follow <CONTAINER_ID>
```
EC2 instances are virtual machines like any other, so you can of course run a server on one that listens on UDP. Configuring the network for this is slightly more complicated, but possible. The main complication is that with UDP you cannot use the load balancer service that Amazon offers, as it (currently) only supports TCP-based protocols.
So, if you have a single server you wish to put on the internet, the procedure is much the same as for a TCP server: set up the server, point an Elastic IP at it, and have your clients connect to it (either by knowing the Elastic IP you have been allocated, or by resolving that IP via DNS).

If you have multiple servers answering on the same address, life is a bit more complicated. With TCP, you could set up an Amazon load balancer and assign your Elastic IP to it. The stock Amazon load balancer cannot do that for UDP, but you can still set up a software load balancer yourself, or create a Launch Configuration and Auto Scaling group for the instances to make them fault tolerant and scalable. A Route 53 DNS entry can be created for one server initially; later, you can write a Lambda function, triggered by the Auto Scaling group, that updates the records in the DNS zone. With this implementation, your service can be both scalable and fault tolerant.
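The Lambda step above could be sketched as follows. This is a hypothetical example, not code from this repository: the record name, TTL, and helper function are made up, and in a real handler the returned batch would be sent to Route 53 via the AWS SDK's `changeResourceRecordSets` call.

```javascript
// Hypothetical sketch: build the Route 53 ChangeBatch a Lambda function
// could submit when an Auto Scaling event changes the set of instances.
// Record name and TTL are made-up examples, not from this repository.
function buildChangeBatch(recordName, instanceIps) {
  return {
    Comment: 'Sync A record with current Auto Scaling group instances',
    Changes: [
      {
        Action: 'UPSERT', // create the record if missing, replace it otherwise
        ResourceRecordSet: {
          Name: recordName,
          Type: 'A',
          TTL: 60, // low TTL so clients notice scaling changes quickly
          ResourceRecords: instanceIps.map((ip) => ({ Value: ip })),
        },
      },
    ],
  };
}

// In a real handler, this batch would be passed to the AWS SDK, e.g.:
// route53.changeResourceRecordSets({ HostedZoneId: '...', ChangeBatch: buildChangeBatch(...) })
```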