Last active Mar 5, 2019

Using BUCC on Google Cloud Platform (GCP)

BUCC is a command line tool from Stark & Wayne that lets you easily set up BOSH, UAA, Credhub and Concourse on multiple IaaS providers. In other words: you get a ready-to-use CI/CD infrastructure based on Concourse up and running within a few minutes. This blog post covers my setup on Google Cloud Platform (GCP).

Preparations on GCP

Before we can start using BUCC, we have to prepare a few things on GCP. I assume that you have a project within GCP where you have admin permissions.

Creating a GCP Service Account

In order to use BUCC you need a GCP Service Account with the following permissions:

  • Project: owner
  • Compute Engine: admin

Disclaimer: I am not sure whether you really need the owner and admin roles. I tested it with these settings and it worked for me. If you figure out how to run it with fewer permissions, I will gladly update the post.

Make sure to create a private key for your service account - you will need it later for your BUCC setup.
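For completeness, here is a sketch of how the service account could be created with gcloud. The account name bucc-sa is a placeholder of my choosing, and I have not verified the minimal set of roles (see the disclaimer above):

```shell
# Create a service account for BUCC (bucc-sa is a placeholder name)
gcloud iam service-accounts create bucc-sa \
    --display-name="BUCC service account"

# Grant it the owner role on the project (see disclaimer above regarding
# whether a narrower role would suffice)
gcloud projects add-iam-policy-binding <YOUR GCP PROJECT ID> \
    --member="serviceAccount:bucc-sa@<YOUR GCP PROJECT ID>.iam.gserviceaccount.com" \
    --role="roles/owner"
```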

Create a VPC and Subnet on GCP

To get a separate network for our BUCC setup, we create a dedicated VPC. You can either do this in the GCP console in your browser or use the commands below (which assume that you have authenticated with GCP via gcloud auth login first and that your authenticated user has the necessary permissions).

gcloud compute networks create bucc-vpc \
    --subnet-mode=custom \
    --description="VPC for BUCC"

Afterwards you have to create a subnet within this VPC. Choose a range that matches the internal CIDR you will later use for BOSH - see the troubleshooting section below for what happens when they diverge:

gcloud compute networks subnets create bucc-subnet \
    --network=bucc-vpc \
    --region=europe-west3 \
    --range=<YOUR CIDR RANGE> \
    --description="Subnet for BUCC"

Add some firewall rules to allow traffic within the subnet, as well as pings and SSH to our jumphost:

gcloud compute firewall-rules create bucc-allow-internal \
    --allow=all \
    --network=bucc-vpc \
    --priority=65534 \
    --source-ranges=<YOUR CIDR RANGE>

gcloud compute firewall-rules create bucc-allow-ssh \
    --allow=tcp:22 \
    --network=bucc-vpc \
    --priority=65534 \
    --source-ranges=0.0.0.0/0

gcloud compute firewall-rules create bucc-allow-icmp \
    --allow=icmp \
    --network=bucc-vpc \
    --priority=65534 \
    --source-ranges=0.0.0.0/0
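As an optional sanity check (not required for BUCC), you can list what was just created:

```shell
# Show the VPC, its subnets and its firewall rules
gcloud compute networks describe bucc-vpc
gcloud compute networks subnets list --filter="network:bucc-vpc"
gcloud compute firewall-rules list --filter="network:bucc-vpc"
```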

Provision a GCP VM as jumphost

For managing our BUCC setup, we provision a VM in GCP that acts as a jumphost. It will be the only VM that allows SSH access from the internet and hence acts as a bastion host to connect to the other machines provisioned by BOSH later on.

The jumphost will have two IPs: one external IP (allowing access from the internet) and one internal IP for communicating with the other VMs in our VPC. Here's the gcloud command to provision the machine:

gcloud compute instances create bucc-jumphost \
    --machine-type=f1-micro \
    --network=bucc-vpc \
    --subnet=bucc-subnet \
    --zone=europe-west3-c

You can also add your SSH public key to the VM. To do so, create a file with the following structure (GCP's ssh-keys metadata format: the username, a colon, then the public key) and save it as ssh-keys.txt:

<username>:ssh-rsa <YOUR PUBLIC KEY>

Now run the following command to add this SSH key to your GCP VM:

gcloud compute instances add-metadata bucc-jumphost --metadata-from-file ssh-keys=ssh-keys.txt --zone=europe-west3-c

Get your public IP for the GCP VM via:

gcloud compute instances describe bucc-jumphost --zone=europe-west3-c | grep natIP
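An equivalent variant uses gcloud's --format flag instead of grep, so only the IP itself is printed:

```shell
# Print only the external IP of the jumphost
gcloud compute instances describe bucc-jumphost --zone=europe-west3-c \
    --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
```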

You should now be able to ssh into your VM via

ssh <username>@<EXTERNAL JUMPHOST IP>

The following packages need to be installed on the jumphost:

sudo apt-get install -y git direnv make ruby build-essential ruby-dev
sudo gem install cf-uaac

Set up direnv by adding the following line to your user's .bashrc:

eval "$(direnv hook bash)"

Install BUCC (BOSH, UAA, Credhub, Concourse)

ssh into your jumphost and clone the bucc repository:

git clone https://github.com/starkandwayne/bucc.git
cd bucc

Assuming that you have direnv installed as described above, you should see an error message like

direnv: error .envrc is blocked. Run `direnv allow` to approve its content.

Run direnv allow to approve the .envrc file in this directory so that it gets sourced.

Run bucc up

After all this preparation you should now be ready to finally bucc up your BOSH, UAA, Credhub and Concourse instances.

bucc up --cpi=gcp --debug

First of all, this will download the BOSH CLI for you and put it into a proper location (namely bucc/bin/bosh). Afterwards it will create a vars.yml file inside your bucc folder that you have to adapt to your needs. Before you can go on, you have to create an access key for your service account:

# run this command on your local workstation with gcloud set up
gcloud iam service-accounts keys create ~/key.json \
  --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com

This will create the file key.json in your home directory. Copy the content of this file and replace the line # paste service account's key JSON here in the generated vars.yml on your jumphost.
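Since vars.yml nests the key under a YAML block scalar, every line of the pasted JSON has to be indented. A small helper for that step (shown here with a stand-in key.json; on the jumphost you would run the sed line on the real file):

```shell
# Stand-in for the real key.json, just to demonstrate the indentation step
printf '{\n  "type": "service_account"\n}\n' > key.json

# Indent every line by two spaces so the JSON can be pasted under the
# gcp_credentials_json block scalar in vars.yml
sed 's/^/  /' key.json > key-indented.json
cat key-indented.json
```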

vars.yml sample with all values generated above - make sure to replace the project_id and the service_account with your values:

director_name: bosh-bucc
gcp_credentials_json: |
  # paste service account's key JSON here

network: bucc-vpc
project_id: <YOUR GCP PROJECT ID>
subnetwork: bucc-subnet
tags: [internal]
zone: europe-west3-c

# flag: --service-account
service_account: [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com

When everything is set, save the file and run

bucc up --cpi=gcp --debug

and bucc provisions a VM with BOSH, UAA, Credhub and Concourse in it.

Using BUCC

Here are some examples of how to use the BUCC setup.

  • Source the BUCC environment (not necessary if you use direnv):

    source <(bucc env)
  • set up a BOSH alias and check the connection:

    bosh alias-env bucc
  • set up UAA:

    bucc uaac
    uaac client get admin
  • get Concourse URL and credentials via:

    bucc info
  • open Concourse UI in your browser

    # set up port forwarding from the Concourse VM to your workstation
    # (assuming Concourse listens on port 443 on the BOSH director VM)
    ssh -L 8443:<CONCOURSE INTERNAL IP>:443 michael.lihs@<EXTERNAL JUMPHOST IP>

    You can now open the Concourse UI in your browser via https://localhost:8443/ (make sure to use https!)
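If you prefer the fly CLI over the browser, the following sketch logs in through the same tunnel. It assumes the port forwarding above is active, that you fetched the credentials via bucc info, and that you downloaded fly from the Concourse UI:

```shell
# Log in to Concourse through the SSH tunnel; -k skips TLS verification,
# since the forwarded endpoint's certificate is not issued for localhost
fly --target bucc login --concourse-url https://localhost:8443 -k
```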

Troubleshooting BUCC on GCP

Here are some error messages that I came across while testing the setup - maybe they are helpful for you as well.

bucc up on local workstation

The first deployment with the vars.yml above, run from my local workstation, resulted in the following error:

creating stemcell (bosh-google-kvm-ubuntu-xenial-go_agent 170.9):
CPI 'create_stemcell' method responded with error:
  CmdError{"type":"Bosh::Clouds::CloudError","message":"Creating stemcell: Creating Google Image from URL: Failed to create Google Image: Post Get dial tcp connect: host is down","ok_to_retry":false}

Exit code 1
  • moving from local workstation to a VM in GCP fixed the problem

bucc up on GCP VM - zone problem

Creating instance 'bosh/0':
  Creating VM:
    Creating vm with stemcell cid 'stemcell-ee1c5f71-d5d4-41f9-4425-236afdfbb9b4':
      CPI 'create_vm' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"Creating vm: Failed to find Google Machine Type 'n1-standard-1' in zone 'europe-west3': googleapi: Error 400: Invalid value for field 'zone': 'europe-west3'. Unknown zone., invalid","ok_to_retry":false}

Exit code 1
  • changing europe-west3 to europe-west3-c in the vars.yml fixed the problem

bucc up on GCP VM - network problem

   Creating instance 'bosh/0':
     Creating VM:
       Creating vm with stemcell cid 'stemcell-ee1c5f71-d5d4-41f9-4425-236afdfbb9b4':
         CPI 'create_vm' method responded with error: CmdError{"type":"Bosh::Clouds::VMCreationFailed","message":"VM failed to create: googleapi: Error 400: Invalid value for field 'resource.networkInterfaces[0].networkIP': ''. Requested internal IP is outside the subnetwork CIDR range., invalid","ok_to_retry":true}

Exit code 1
  • fixed network and subnet configuration in vars.yml

bucc up errors with time out

  Creating instance 'bosh/0':
    Waiting until instance is ready:
      Post https://mbus:<redacted>@ dial tcp i/o timeout
 Exit code 1
  • adding allow-all-internal firewall rule for bucc network fixed the issue