@owaism — Last active November 13, 2015

New CF Deployment GIST
#Deploying Cloud Foundry to AWS - VPC

I wrote this up because the instructions in the Cloud Foundry docs led to loads of issues that required a lot of research. The instructions provided here should simplify any subsequent deploys.

Until I find time to automate the deployment, here are the manual steps.

##Recommendations

  1. Wherever the instructions below ask for AWS keys, use a user who belongs to the administrators group. You will see why as you work through the steps.
  2. Start with a clean AWS account with nothing installed on it.
  3. If you run into an issue, try not to restart or redo steps. First check whether you have made a mistake. Then search the web to see if someone else has faced the same issue. Only if everything else fails should you start redoing steps.

##Warnings

  1. The whole process is going to take 4-5 hours the first time (unless you take the shortcut given below), so be patient.
  2. The inception VM or machine that you use to deploy CF to AWS should have sufficient hard disk space; I had 20 GB. The /tmp folder also needs sufficient space; 10 GB is what I would recommend.
  3. I wanted my VMs to run CentOS and started deploying CF with those configurations, but open-source CF is not tested on CentOS and I was getting loads of errors at the last step, so I switched back to Ubuntu. I have updated the instructions to reflect the use of the Ubuntu stemcell.

###1. Prepare the AWS Domain

Follow the instructions in [the domain preparation section of the Cloud Foundry docs](http://docs.cloudfoundry.org/deploying/ec2/bootstrap-aws-vpc.html#domain-prep) to set up your AWS domain. Do NOT follow the next section of that page yet.

###2. Spin up an AWS Inception server

Spin up a new EC2 instance with the following configuration. It is written below as YAML because that is easy to read; launch the instance through the AWS console and fill in these values as you go screen by screen. Read the comments in the YAML.

```yaml
instanceConfig:
  blockDevices:
    - deleteOnTermination: true
      mapping: /dev/sdb
      sizeInGB: 30
      volumeType: gp2
    - deleteOnTermination: true
      mapping: /dev/sdc
      sizeInGB: 15
      volumeType: gp2
  imageId: ami-9eaa1cf6 # Pick an Ubuntu image.
  instanceType: m3.medium # Can be a smaller VM also
  maxCount: 1
  minCount: 1
  networkInterfaceConfigs:
    - associatePublicIP: true
      deleteOnTermination: true
      deviceIndex: 0
      groupIds:
        - sg-c66866a3 # Just associate it with a security group that has SSH (port 22) access
  terminationProtection: true
```

On the last screen, just before you launch the VM, select to create a new SSH key pair. Give it whatever name you want and spin the instance up.
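For repeat deployments it may be less error-prone to script the console steps. A minimal sketch with the AWS CLI, assuming the CLI is installed and configured; the key-pair name `my-inception-key` is a placeholder, while the AMI and security-group IDs are the ones from the YAML above. The launch command is echoed for review rather than executed:

```shell
# block-devices.json mirrors the blockDevices section of the YAML above
cat > block-devices.json <<'EOF'
[
  {"DeviceName": "/dev/sdb",
   "Ebs": {"VolumeSize": 30, "VolumeType": "gp2", "DeleteOnTermination": true}},
  {"DeviceName": "/dev/sdc",
   "Ebs": {"VolumeSize": 15, "VolumeType": "gp2", "DeleteOnTermination": true}}
]
EOF

# Echoed for review; remove the leading "echo" to actually launch.
# "my-inception-key" is a placeholder for the key pair you create above.
echo aws ec2 run-instances \
  --image-id ami-9eaa1cf6 \
  --instance-type m3.medium \
  --count 1 \
  --key-name my-inception-key \
  --security-group-ids sg-c66866a3 \
  --associate-public-ip-address \
  --block-device-mappings file://block-devices.json
```

Note that `--associate-public-ip-address` may additionally require a `--subnet-id`, depending on your VPC setup.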

###3. Connect to the Inception Server

```bash
ssh -i <your_key.pem> ubuntu@<public_ip_of_inception_server>
```

###4. Setup inception server

Execute the following script on the inception server:

```bash
#!/bin/bash

set -e            # fail fast
set -o pipefail   # don't ignore exit codes when piping output
set -x            # enable debugging

# Inspect the EBS volume for Cloud Foundry
sudo file -s /dev/xvdb

# Inspect the EBS volume for the tmp directory
sudo file -s /dev/xvdc

# Make ext4 file systems on the volumes
sudo mkfs -t ext4 /dev/xvdb
sudo mkfs -t ext4 /dev/xvdc

# Create the mount point
sudo mkdir /cf

# Mount the new volume on the mount point
sudo mount /dev/xvdb /cf

# Mount the tmp directory on the additional volume
sudo mount /dev/xvdc /tmp

sudo chown -R ubuntu /cf
sudo chmod 777 /tmp

# Make these drives mount again on every reboot
sudo cp /etc/fstab /etc/fstab.orig
echo "/dev/xvdb       /cf    ext4    defaults,nofail        0       2" | sudo tee -a /etc/fstab
echo "/dev/xvdc       /tmp   ext4    defaults,nofail        0       2" | sudo tee -a /etc/fstab

# Use Google's DNS and restart the interface to pick it up
echo "dns-nameservers 8.8.8.8" | sudo tee -a /etc/network/interfaces.d/eth0.cfg
sudo ifdown eth0 && sudo ifup eth0

echo "Installing RVM ..... "
# Install rvm (Ruby version manager)
curl -sSL https://get.rvm.io | bash
source ~/.rvm/scripts/rvm
echo "RVM Installed."

echo "Ruby 1.9.3 Installing....."
# Install Ruby 1.9.3
rvm install 1.9.3
echo "Ruby 1.9.3 installed."

# Use one of these to get a login shell. Or better, just log out and
# log back in; if over ssh, just exit and ssh back in.
#/bin/bash --login
#/bin/sh --login

# Use Ruby 1.9.3
rvm use 1.9.3

# Update the apt package database
sudo apt-get update --fix-missing

# Install all the required libraries for Ubuntu
sudo apt-get install libxml2 libxml2-dev libxslt1.1 libxslt1-dev libpq-dev libmysql++-dev git -y

echo "Installing Bundler"
gem install bundler

mkdir -p /cf/deployments/covs-cf
cd /cf/deployments/covs-cf

touch Gemfile
chmod 777 Gemfile
printf '%s\n%s\n%s\n' 'source "https://rubygems.org"' 'ruby "1.9.3"' 'gem "bosh_cli_plugin_aws"' >> Gemfile

echo "Installing Bosh CLI Plugin"
bundle install

echo "Installing Spiff"
curl -L -O https://www.dropbox.com/s/yliam5tushetrwm/spiff?dl=1
sudo mv 'spiff?dl=1' /usr/bin/spiff
sudo chmod +x /usr/bin/spiff
echo "Spiff Installed"

echo "Downloading and updating CF releases"
# Download the CF source code
cd /cf
git clone https://github.com/cloudfoundry/cf-release.git
echo "CF Source Code Downloaded... Going to Update"
cd cf-release
./update
echo "CF Source Code Updated"

bundle install
```
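Once the script finishes (log out and back in first so rvm takes effect in your shell), a quick sanity check can confirm everything landed. This is a hypothetical helper, not part of the original instructions; it simply reports whether each tool the script installs is visible on the PATH:

```shell
# Report OK/MISSING for each tool the setup script is expected to install.
for tool in ruby gem bundle git spiff; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```

If anything is MISSING, re-check the corresponding step of the script before moving on.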

###5. Obtain AWS Access Keys for the User

If it is for an AWS user who does not have AWS IAM user privileges, use the instructions in the [Obtain AWS Credentials section of this page](http://docs.cloudfoundry.org/deploying/ec2/configure_aws_micro_bosh.html#Obtain AWS credentials).

If it is an IAM user, go to AWS IAM, tick the user whose credentials you need, and select Manage Access Keys from the User Actions dropdown.

Either way, store the access key and secret for later use.

###6. Setup Bosh Environment Variables

Create a file `bosh_environment` and put the following snippet in it.

```bash
export BOSH_VPC_DOMAIN=<Domain set up in AWS, like 'example.com'>
export BOSH_VPC_SUBDOMAIN=<Subdomain set up in AWS, like 'cf' if you set up 'cf.example.com'>
export BOSH_AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
export BOSH_AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
export BOSH_AWS_REGION=us-east-1
export BOSH_VPC_SECONDARY_AZ=us-east-1c # see note below
export BOSH_VPC_PRIMARY_AZ=us-east-1d   # see note below
```

NOTE: Make sure the primary and secondary availability zones specified above are healthy. Check the Availability Zone Status section of the EC2 dashboard, and change them accordingly if they are not.

Source it to load the environment variables:

```bash
source bosh_environment
```
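An unset or misspelled variable is an easy way to get a confusing failure later, so it can be worth verifying that every variable took effect after sourcing. A small sketch (the variable list matches the snippet above; `printenv` fails for a variable that was never exported):

```shell
# List any required BOSH variable that is not exported in this shell.
missing=""
for var in BOSH_VPC_DOMAIN BOSH_VPC_SUBDOMAIN BOSH_AWS_ACCESS_KEY_ID \
           BOSH_AWS_SECRET_ACCESS_KEY BOSH_AWS_REGION \
           BOSH_VPC_PRIMARY_AZ BOSH_VPC_SECONDARY_AZ; do
  if ! printenv "$var" >/dev/null; then
    missing="$missing $var"
  fi
done
if [ -n "$missing" ]; then
  echo "Missing:$missing"
else
  echo "All BOSH variables set."
fi
```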

###7. Setup AWS for Cloud Foundry

Setup can be done by executing the following command:

```bash
bundle exec bosh aws create
```

If you get an error like `bosh: command not found`, run the following and then re-run the command:

```bash
rvm use 1.9.3
```

**Problems faced during the Jan 2015 deploy**

  1. Unable to delete dhcpOptions

     Error obtained:

     ```
     /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/aws-sdk-v1-1.60.2/lib/aws/core/client.rb:375:in `return_or_raise': The dhcpOptions 'dopt-ada4bdcf' has dependencies and cannot be deleted. (AWS::EC2::Errors::DependencyViolation)
       from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/aws-sdk-v1-1.60.2/lib/aws/core/client.rb:476:in `client_request'
     ```

     Resolution: comment out line `476` in `client.rb`. Check the stack trace to get the exact file and line number.

  2. AWS recently stopped supporting MySQL 5.5, but the bosh AWS deployer is still stuck on 5.5, so you will have to change code to point to a supported 5.6 version. I changed code in `/home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_aws-1.2824.0/lib/bosh_cli_plugin_aws/rds.rb` to point to 5.6.21, and in the same file changed the Database Option Set to point to 5.6 (previously 5.5).

`bosh aws create` will take a very long time; it took me about 30 minutes. You just have to wait and watch. You cannot leave your desk, because if it fails you have to correct the error, clean up and then restart.

If this command fails for any reason, re-running it will duplicate the things that were already created. That may not be harmful, but it will mess up your AWS account. So first clean AWS back to a pristine state with:

```bash
bundle exec bosh aws destroy
```

Type `yes` wherever applicable.

**Warning for destroy:**
This will clean almost everything (except the domain that you created and a few other things), even things that were not created by the previous command. **THIS IS NOT A ROLLBACK COMMAND. It will just destroy everything.**

Type `no` when you are prompted about deleting VPCs and subnets, and delete the VPC and subnets manually instead, because using the destroy utility to delete VPCs will also delete the default VPC.

Note: If the `bosh aws create` command does not appear to do anything and exits without a message, add the `--trace` parameter to see all the HTTP requests being made. The cause is usually some AWS resource that has not been deleted; in my case it did not delete the AWS S3 buckets and I had to delete them manually.

###8. Deploy Micro Bosh
Execute the following to deploy:

```bash
bundle exec bosh aws bootstrap micro
```

**Problems faced during Jan 2015 Deploy - Due to different Regions**

Got the following error:

```
Started deploy micro bosh > Creating VM from ami-7017b018
/home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_aws_cpi-1.2824.0/lib/cloud/aws/stemcell.rb:10:in `find': could not find AMI 'ami-7017b018' (Bosh::Clouds::CloudError)
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_aws_cpi-1.2824.0/lib/cloud/aws/stemcell_finder.rb:10:in `find_by_region_and_id'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_aws_cpi-1.2824.0/lib/cloud/aws/cloud.rb:88:in `block in create_vm'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_common-1.2824.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_aws_cpi-1.2824.0/lib/cloud/aws/cloud.rb:86:in `create_vm'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:243:in `create_vm'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:123:in `block in create'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:85:in `step'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:122:in `create'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:98:in `block in create_deployment'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:92:in `with_lifecycle'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/deployer/instance_manager.rb:98:in `create_deployment'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_micro-1.2824.0/lib/bosh/cli/commands/micro.rb:179:in `perform'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_aws-1.2824.0/lib/bosh_cli_plugin_aws/micro_bosh_bootstrap.rb:19:in `block in deploy'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_aws-1.2824.0/lib/bosh_cli_plugin_aws/micro_bosh_bootstrap.rb:15:in `chdir'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_aws-1.2824.0/lib/bosh_cli_plugin_aws/micro_bosh_bootstrap.rb:15:in `deploy'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_aws-1.2824.0/lib/bosh_cli_plugin_aws/micro_bosh_bootstrap.rb:9:in `start'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli_plugin_aws-1.2824.0/lib/bosh/cli/commands/aws.rb:35:in `bootstrap_micro'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli-1.2824.0/lib/cli/command_handler.rb:57:in `run'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli-1.2824.0/lib/cli/runner.rb:56:in `run'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli-1.2824.0/lib/cli/runner.rb:16:in `run'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/gems/bosh_cli-1.2824.0/bin/bosh:7:in `<top (required)>'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/bin/bosh:23:in `load'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/bin/bosh:23:in `<main>'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/bin/ruby_executable_hooks:15:in `eval'
  from /home/ubuntu/.rvm/gems/ruby-1.9.3-p551/bin/ruby_executable_hooks:15:in `<main>'
```


That is because `ami-7017b018` is not present in the us-west-2 region; it is only present in us-east-1. I searched for the correct AMI using http://thecloudmarket.com/ and changed the code in `bosh_aws_cpi-1.2824.0/lib/cloud/aws/stemcell.rb`, in the `find` method, to use the right AMI id. This is hard-coding and I need to find a long-term solution.

The bootstrap will ask you for a `username` and `password`. These are the credentials that you will use to access this CF installation later.

You can now check the status of your Micro BOSH installation using:

```bash
bundle exec bosh status
```


###9. Uploading the Stemcell
Stemcells are your base VM images, most probably consisting of just the operating system.

To view all publicly available stemcells use:

```bash
bundle exec bosh public stemcells
```

Download the one that you want to your machine. I need `bosh-stemcell-2719.1-aws-xen-ubuntu-trusty-go_agent.tgz`, so I am going to download that.

```bash
bundle exec bosh download public stemcell bosh-stemcell-XXXX-aws-xen-ubuntu-trusty-go_agent.tgz
```

XXXX is the latest release number that you get from viewing the public stemcells.

Once downloaded, use the following to upload it to the BOSH director:

```bash
bosh upload stemcell bosh-stemcell-XXXX-aws-xen-ubuntu-trusty-go_agent.tgz
```

###10. Creating Deployment Stub

Create a file `cf-aws-stub.yml` in the same directory. Its starting contents are below; modify it based on the comments provided in the file.

Tip: use a YAML-aware editor such as Sublime Text to create this file.


```yaml
name: cftest # This is the deployment name that you want to give this deployment
director_uuid: 39d20a92-6293-4ad9-8609-d1d93b2e44be # get the director UUID by executing 'bosh status' on the console

releases:
- name: cf # This should be the same as the release name that you provide in the later steps while creating the Cloud Foundry release. Keep it as cf till you get the hang of things.
  version: latest

networks:
- name: cf1
  type: manual
  subnets:
  - range: 10.10.16.0/20
    name: default_unused
    reserved:
    - 10.10.16.2 - 10.10.16.9
    static:
    - 10.10.16.10 - 10.10.16.253
    gateway: 10.10.16.1
    dns:
    - 10.10.0.2
    cloud_properties:
      security_groups:
      - cf
      subnet: (( properties.template_only.aws.subnet_ids.cf1 ))
- name: cf2
  type: manual
  subnets:
  - range: 10.10.80.0/20
    name: default_unused
    reserved:
    - 10.10.80.2 - 10.10.80.9
    static:
    - 10.10.80.10 - 10.10.80.253
    gateway: 10.10.80.1
    dns:
    - 10.10.0.2
    cloud_properties:
      security_groups:
      - cf
      subnet: (( properties.template_only.aws.subnet_ids.cf2 ))

properties:
  template_only:
    aws:
      access_key_id: XXXXX # AWS access key
      secret_access_key: XXXXX # AWS secret
      availability_zone: us-east-1b # Change this if you'd like to
      availability_zone2: us-east-1c # Change this if you'd like to
      subnet_ids:
        cf1: subnet-fb7796d0 # you received this in aws_vpc_receipt.xml when you did bosh aws create
        cf2: subnet-6ffd5b18 # you received this in aws_vpc_receipt.xml when you did bosh aws create

  domain: ..com # same as the one that you gave in the bosh environment properties

  nats:
    user: cf_user # change if you want, but keep a common username throughout; it will be easier to experiment
    password: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.

  cc:
    db_encryption_key: qwe123qws # Change if you like
    bulk_api_password: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
    staging_upload_password: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
    staging_upload_user: cf_user # change if you want, but keep a common username throughout

  uaa:
    scim:
      users:
      - admin|the_admin_pw|scim.write,scim.read,openid,cloud_controller.admin # change if you like
      - services|the_services_pw|scim.write,scim.read,openid,cloud_controller.admin # change if you like
    admin:
      client_secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
    jwt: # you should generate your own asymmetric key pair. See below on how to generate one.
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEAu9zUPjfUxogAg1a+HxGKx43Ybna1ZtFjIX8ZBAwO87zW35j9
        Q6CEE0OleSRe67F7eolix0Z6O0H1+gSiJdMSomfaoGR2coQwpsgBJ3SeSqmOlltH
        lGkUnsLLrIjNHu44yoNpq779g6WX7VjP4ZwDJ/0BXVR4tNG+34sFV2WxIcc5FW6w
        N8Njf0AnSEcWT6BJmoKmzzMXEM3T/XWlIfMQFRP9hVC7k4wc4EuAbr7rEHRLUH/o
        wD0Ac34Ms2UKg0FKtfIlmml3gKW1mPJlL9FKN41C5X/kPuGt5NMtd59+L7UWpFWT
        AwZ8qgFGLhkE/CEMzvTQd3zs8poilyR0tso33wIDAQABAoIBAEzs/CwOCJ7TCgK9
        /lQShtV4C+wPx/A2RXVt6fxyQ50i8onUx8Btdie9R4D3l7bDkmB26W/YOC0TsXmT
        dCIw8Cx4glmzSZ1S6+kfdPmHE1pXW//NmN34uWzZLzWTPwsvWSnz7c1aA81ofXG/
        MECd9dzmCS0Cbfr8+D/pWWKUTZganCnPHBQ9gMrfCIAaz/la5ozREV/8F5I3s3Iw
        7D4MGWxq6SpEra4Eg8VDE1Q9Wo5YuFLkPmZt3vO6UmF5E8EmaQ1XiF6XCU3z8Ncq
        YwjW7nBuiZjMSqkhQWn9dJ9ubgv/fGyze3ljAjNIkGhMdfdw1tb1kuXLlEEgk5Rj
        9WSgK0ECgYEA6G7zW9BcnkXnQLi+lW/JLkS6+1QWYo//+AaByyBqpEgvVMRDQQXs
        DNeN42dMK78R/18mxH5BlHttsX80lbDPc2zcCYwm5gXJXRmu5t4aviB2sNTsm/tK
        GTC33H8a8Vy1c78qn/HGXft+Q+yfnYpRECxvgBcUTJA4YgoN4HdT/HcCgYEAzuj/
        YB/g+ss5zt966yRYFKkBa3E3HUzg4lGmXkU4rTKeUcgOW1pIPqVZ00u+drTn+cr+
        iv2tYEcJxeUOLdPV++GfF2FVwzpmhrbxbvLy4aVUrASwtP/6O39/flTx25jBW+HF
        yrAxFvO0FSK1+hXm5dtLo9VgZy3ecdxYnPE5QdkCgYBtcOBxWLhjZbKvTM2f+1SU
        zpPkBwHLQtZZaGbwx8CuvbZbiVXJZgpxOYV7j4XUC1FkFt9gIbqrOTq7GpQd73Se
        eqFYdX9TS2I2zgMGfYnF/+8i7/7Aqx+GoOPRlJ+RCf/+EgL18JdgZSxcuyukuB3X
        KbUOcM+EBVwm/WjvSgBnnQKBgQC/9Gj3RJv0D5YR1kKy44TTpfcrNl1rUWdQj29J
        Be8Ov2chd/fZyGg9tikfXaXVev+7Phfn2nB+YWkvrtD4sw5SH374sdReyk9Tq2VR
        CRNLQ5bJ/4/wW4pKqH4fNa8riwvXsh1NbAgdovnuocUxvh/4HvqNg+dr0aIM/981
        upTkAQKBgFwHbhKkOzDQTqlAc+sovataKD1Bm2rq/3RfxnNNA825RnZC4y0J1F/m
        5p1PTcf/inr+JvDKVmKEQh7N6gIsb4xn7tGjy0Ku3BDBYDiMEjwsMC4GkFJNl9qj
        wEuTTI0J6XmUYWPNSlsPHQ1eug9JaTE4e1ECASQ5vCpTIWyX7zCm
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu9zUPjfUxogAg1a+HxGK
        x43Ybna1ZtFjIX8ZBAwO87zW35j9Q6CEE0OleSEe67F7eolix0Z6O0H1+gSiJdMS
        omfaoGR2coQwpsgBJ3SeSqmOlltHlGkUnsLLrIjNHu44yoNpq779g6WX7VjP4ZwD
        J/0BXVR4tNG+34sFV2WxIcc5FW6wN8Njf0AnSEcWT6BJmoKmzzMXEM3T/XWlIfMQ
        FRP9hVC7k4wc4EuAbr$rEHRLUH/owD0Ac34Ms2UKg0FKtgIlmml3gKW1mPJlL9FK
        N41C5X/kPuGt5NMtd59+L7UWpFWTAwZ8qgFGLhkE/CEMzvTQd3zs8poilyR0tso3
        3wIDAQAB
        -----END PUBLIC KEY-----
    clients:
      login:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      developer_console:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      app-direct:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      support-services:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      servicesmgmt:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      space-mail:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      notifications:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
      dopler:
        secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
    batch:
      username: cf_user # change if you want, but keep a common username throughout
      password: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.
    cc:
      client_secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.

  uaadb: # Replace all the properties below from the aws_rds_receipt file that you obtained when you did bosh aws create
    db_scheme: mysql
    roles:
    - tag: admin
      name: u8b652e45161f7g
      password: p2fdce2269fd8e2e1ee91f48c7f012dc8
    databases:
    - tag: uaa
      name: uaadb
    address: uaadb.c3z1rkbvtqhv.us-east-1.rds.amazonaws.com
    port: 3306
  ccdb: # Replace all the properties below from the aws_rds_receipt file that you obtained when you did bosh aws create
    db_scheme: mysql
    roles:
    - tag: admin
      name: u9fd936919dc1a6
      password: p3bb3f0f595b2eec972eccccc7057ce4f
    databases:
    - tag: cc
      name: ccdb
    address: ccdb.c3z1rkbvtqhv.us-east-1.rds.amazonaws.com
    port: 3306

  router:
    status:
      user: cf_user # change if you want, but keep a common username throughout
      password: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.

  dea_next:
    disk_mb: 400001
    memory_mb: 6656

  loggregator_endpoint:
    shared_secret: XXXXX # tip: use a common password throughout. DO NOT use @ in the password.

  ssl:
    skip_cert_verify: true # skip for now
```


**Generating JWT Signing Keys:**
Use the following commands to generate the key pair:

```bash
openssl genrsa -out privkey.pem 2048
openssl rsa -pubout -in privkey.pem -out pubkey.pem
```
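If you want the keys printed ready to paste under `signing_key: |` and `verification_key: |`, a small wrapper around the same commands can indent them for you (the two-space indent here is an assumption; match it to the indentation in your stub):

```shell
# Generate the pair, then print each PEM indented for a YAML block scalar.
openssl genrsa -out privkey.pem 2048
openssl rsa -pubout -in privkey.pem -out pubkey.pem
echo "signing_key: |"
sed 's/^/  /' privkey.pem
echo "verification_key: |"
sed 's/^/  /' pubkey.pem
```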


###11. Generate the CF deployment manifest
Generate the CF deployment manifest from the deployment stub that you created in the previous step:

```bash
./generate_deployment_manifest aws templates/cf-minimal-dev.yml ~/deployments/cftest/cf-aws-stub.yml > ~/deployments/cftest/cf.yml
```

#####Additional Activity in this step
Once generated, there are a few corrections that need to be made:

1. Open `~/deployments/cftest/cf.yml`.

2. Replace all `https` with `http`. [For the first deploy, go with the HTTP protocol. Once you have deployed successfully you can change to HTTPS and experiment; I will write a post or gist on the problems of HTTPS and how to correct them.]

3. Change your fog connection to:

```yaml
fog_connection:
  provider: Local
  local_root: /var/vcap/data
```

You need to change all the fog connections: packages, buildpacks, droplets and cc-resources.

##### Additional Information:
If you hit problems like "out of disk space" while pushing apps, then you need to do this (SKIP it for now): the `local_root` above should point to a folder on the api_z1 job that has enough disk space. To find one, `bosh ssh` into the api_z1 job and check `df -h` to see which mounted drives have enough space. For me it was `/var/vcap/data`.

###12. Compile, upload and Install CF to AWS

Execute the following.

Point BOSH at the generated CF deployment manifest:

```bash
bundle exec bosh deployment ~/deployments/cftest/cf.yml
```

Create the CF release. This is going to take some time, around 30 minutes:

```bash
bundle exec bosh create release --name cf
```

Now upload your release to the cloud:

```bash
bundle exec bosh upload release
```

Deploy the release to AWS:

```bash
bundle exec bosh deploy
```
