Blog post to follow: https://sysdig.com/blog/deploy-openshift-aws/
Prerequisites:
- Your AWS account must have accepted the CentOS terms via the AWS Marketplace.
- See the AMI IDs in the CloudFormation file.
- Upload the stack file to your own S3 bucket.
- Replace your SSH key name, stack name, etc. below:
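The S3 upload step can be done with the AWS CLI. The bucket name here matches the template URL in the command below, but yours will differ:

```shell
# Upload the CloudFormation template so create-stack can reference it by URL
aws s3 cp CloudFormationTemplateOpenShift.yaml \
  s3://openshift-origin-cloudformation/CloudFormationTemplateOpenShift.yaml
```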
aws cloudformation create-stack \
--region us-west-1 \
--stack-name robszumski-openshift-39 \
--template-url "https://s3-us-west-1.amazonaws.com/openshift-origin-cloudformation/CloudFormationTemplateOpenShift.yaml" \
--parameters \
ParameterKey=AvailabilityZone,ParameterValue=us-west-1a \
ParameterKey=KeyName,ParameterValue=robszumski \
--capabilities=CAPABILITY_IAM
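Stack creation takes a few minutes. If you want to block until it finishes and then inspect the result, the AWS CLI has a built-in waiter (stack name matches the example above):

```shell
# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete \
  --region us-west-1 \
  --stack-name robszumski-openshift-39

# Then list the stack's outputs, e.g. to find instance addresses
aws cloudformation describe-stacks \
  --region us-west-1 \
  --stack-name robszumski-openshift-39 \
  --query 'Stacks[0].Outputs'
```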
$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout origin/release-3.9
- Generate an htpasswd file for logging in to the Console; it is referenced in the inventory file. There are several web generators for these.
- Grab the IPs for your infra node, master (which also runs etcd), and workers, and put them in the inventory file (see the example in this gist).
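If you'd rather not use a web generator, `openssl` can produce a compatible APR1-MD5 hash locally. The user name and password here are examples:

```shell
# Create an htpasswd file with a single user "admin"; openssl's -apr1 option
# emits the same hash format the htpasswd utility produces by default
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > ./htpasswd
cat ./htpasswd
```

Point `openshift_master_htpasswd_file` in your inventory at this file.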
Run the prepare playbook against your modified inventory file (see both files in this gist):
$ ansible-playbook ~/Documents/openshift-origin-ansible/prepare.yaml -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa
$ ansible-playbook -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa \
~/Documents/openshift-ansible/playbooks/prerequisites.yml
This will take a while.
We are going to disable a few installer health checks because these nodes are smaller than the defaults require.
$ ansible-playbook -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa \
~/Documents/openshift-ansible/playbooks/deploy_cluster.yml \
-e openshift_disable_check=package_version,disk_availability,memory_availability
This will take a while.
The Console should be up and running at the master node's DNS address on port 8443.
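Once deploy_cluster.yml finishes, you can sanity-check the cluster from the master node. The DNS name below is a placeholder for your master's address:

```shell
# On the master: confirm all nodes registered and are Ready
oc get nodes

# Check the API/Console endpoint (self-signed cert, hence -k)
curl -k https://master.example.com:8443/healthz
```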