Deis in Google Compute Engine

@andyshinn · Last active June 7, 2020 19:17
Let's build a Deis cluster in Google's Compute Engine!

Google

Get a few Google things squared away so we can provision VM instances.

Google Cloud SDK

Install

Install the Google Cloud SDK from https://developers.google.com/compute/docs/gcutil/#install. You will then need to login with your Google Account:

$ gcloud auth login
Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=22535940678.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdevstorage.full_control+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fndev.cloudman+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fsqlservice.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fprediction+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fprojecthosting&access_type=offline



You are now logged in as [youremail@gmail.com].
Your current project is [named-mason-824].  You can change this setting by running:
  $ gcloud config set project <project>

Create Project

Create a new project in the Google Developer Console (https://console.developers.google.com/project). You should get a project ID like orbital-gantry-285 back. We'll set it as the default for the SDK tools:

$ gcloud config set project orbital-gantry-285

Enable Billing

Please note that you will begin to accrue charges once you create resources such as disks and instances.

Navigate to the project console and then the Billing & Settings section in the browser. Click the Enable billing button and fill out the form. This is needed to create resources in Google's Compute Engine.

Initialize Compute Engine

Google Compute Engine won't be available via the command-line tools until it is initialized in the web console. Navigate to COMPUTE -> COMPUTE ENGINE -> VM Instances in the project console. Compute Engine will take a moment to initialize and will then be ready to create resources via gcutil.

Cloud Init

Create your cloud-init file. It will look something like the following (be sure to generate your own discovery URL and substitute it for the token):

#cloud-config

coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Formats the ephemeral drive
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/disk/by-id/scsi-0Google_PersistentDisk_coredocker
        ExecStart=/usr/sbin/mkfs.btrfs -f /dev/disk/by-id/scsi-0Google_PersistentDisk_coredocker
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/disk/by-id/scsi-0Google_PersistentDisk_coredocker
        Where=/var/lib/docker
        Type=btrfs
    - name: write-deis-motd.service
      command: start
      content: |
        [Unit]
        Description=write the Deis motd
        ConditionFileNotEmpty=/run/deis/motd

        [Service]
        Type=oneshot
        ExecStartPre=/usr/bin/rm /etc/motd
        ExecStart=/usr/bin/ln -s /run/deis/motd /etc/motd
    - name: link-deis-bashrc.service
      command: start
      content: |
        [Unit]
        Description=remove .bashrc file for CoreOS user
        ConditionFileNotEmpty=/run/deis/.bashrc
        
        [Service]
        Type=oneshot
        ExecStartPre=/usr/bin/rm /home/core/.bashrc
        ExecStart=/usr/bin/ln -s /run/deis/.bashrc /home/core/.bashrc
write_files:
  - path: /etc/deis-release
    content: |
      DEIS_RELEASE=v0.10.0
  - path: /run/deis/motd
    content: " \e[31m* *    \e[34m*   \e[32m*****    \e[39mddddd   eeeeeee iiiiiii   ssss\n\e[31m*   *  \e[34m* *  \e[32m*   *     \e[39md   d   e    e    i     s    s\n \e[31m* *  \e[34m***** \e[32m*****     \e[39md    d  e         i    s\n\e[32m*****  \e[31m* *    \e[34m*       \e[39md     d e         i     s\n\e[32m*   * \e[31m*   *  \e[34m* *      \e[39md     d eee       i      sss\n\e[32m*****  \e[31m* *  \e[34m*****     \e[39md     d e         i         s\n  \e[34m*   \e[32m*****  \e[31m* *      \e[39md    d  e         i          s\n \e[34m* *  \e[32m*   * \e[31m*   *     \e[39md   d   e    e    i    s    s\n\e[34m***** \e[32m*****  \e[31m* *     \e[39mddddd   eeeeeee iiiiiii  ssss\n\n\e[39mWelcome to Deis\t\t\tPowered by Core\e[38;5;45mO\e[38;5;206mS\e[39m\n"
  - path: /run/deis/.bashrc
    owner: core
    content: |
      source /usr/share/skel/.bashrc
      function nse() {
        sudo nsenter --pid --uts --mount --ipc --net --target $(docker inspect --format="{{ .State.Pid }}" $1)
      }
  - path: /run/deis/bin/get_image
    permissions: 0755
    content: |
      #!/bin/bash
      # usage: get_image <component_path>
      IMAGE=`etcdctl get $1/image 2>/dev/null`

      # if no image was set in etcd, we use the default plus the release string
      if [ $? -ne 0 ]; then
        RELEASE=`etcdctl get /deis/release 2>/dev/null`

        # if no release was set in etcd, use the default provisioned with the server
        if [ $? -ne 0 ]; then
          source /etc/deis-release
          RELEASE=$DEIS_RELEASE
        fi

        IMAGE=$1:$RELEASE
      fi

      # remove leading slash
      echo ${IMAGE#/}

Save this file as gce-cloud-config.yaml. We will use it in the VM instance creation.
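Before launching anything, it is worth a quick local sanity check of the file; a minimal sketch using only grep and head (the gce-cloud-config.yaml name matches the launch command in the next section):

```shell
# Cheap local sanity checks on the cloud-config (no cloud access needed)
# fail if the etcd discovery token placeholder was never replaced
! grep -q 'discovery.etcd.io/<token>' gce-cloud-config.yaml
# cloud-config files must begin with the #cloud-config header
head -n1 gce-cloud-config.yaml | grep -qx '#cloud-config'
```

A left-in placeholder token means etcd never forms a cluster, which is much harder to debug after the instances boot.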

Launch Instances

Create an SSH key that we will use for Deis host communication:

$ ssh-keygen -q -t rsa -f ~/.ssh/deis -N '' -C deis

Create some persistent disks to use for /var/lib/docker. The default root partition of CoreOS is only around 4 GB, which is not enough for storing Docker images and containers. The following creates 3 disks sized at 32 GB each:

$ gcutil adddisk --zone us-central1-a --size_gb 32 cored1 cored2 cored3

Table of resources:

+--------+---------------+--------+---------+
| name   | zone          | status | size-gb |
+--------+---------------+--------+---------+
| cored1 | us-central1-a | READY  |      32 |
+--------+---------------+--------+---------+
| cored2 | us-central1-a | READY  |      32 |
+--------+---------------+--------+---------+
| cored3 | us-central1-a | READY  |      32 |
+--------+---------------+--------+---------+

Launch 3 instances using the coreos-stable-367-1-0-v20140724 image. You can choose another CoreOS image from the output of gcloud compute images list:

$ for num in 1 2 3; do gcutil addinstance --image coreos-stable-367-1-0-v20140724 --persistent_boot_disk --zone us-central1-a --machine_type n1-standard-2 --tags deis --metadata_from_file user-data:gce-cloud-config.yaml --disk cored${num},deviceName=coredocker --authorized_ssh_keys=core:~/.ssh/deis.pub,core:~/.ssh/google_compute_engine.pub core${num}; done

Table of resources:

+-------+---------------+--------------+---------------+---------+
| name  | network-ip    | external-ip  | zone          | status  |
+-------+---------------+--------------+---------------+---------+
| core1 | 10.240.33.107 | 23.236.59.66 | us-central1-a | RUNNING |
+-------+---------------+--------------+---------------+---------+
| core2 | 10.240.94.33  | 108.59.80.17 | us-central1-a | RUNNING |
+-------+---------------+--------------+---------------+---------+
| core3 | 10.240.28.163 | 108.59.85.85 | us-central1-a | RUNNING |
+-------+---------------+--------------+---------------+---------+

Load Balancing

We will need to load balance the Deis routers so we can get to Deis services (controller and builder) and our applications.

$ gcutil addhttphealthcheck basic-check --request_path /health-check
$ gcutil addtargetpool deis --health_checks basic-check --region us-central1 --instances core1,core2,core3
$ gcutil addforwardingrule deisapp --region us-central1 --target_pool deis

Table of resources:

+---------+-------------+--------------+
| name    | region      | ip           |
+---------+-------------+--------------+
| deisapp | us-central1 | 23.251.153.6 |
+---------+-------------+--------------+

Note the forwarding rule external IP address. We will use it as the Deis login endpoint in a future step. Now allow the ports on the CoreOS nodes:

$ gcutil addfirewall deis-router --target_tags deis --allowed "tcp:80,tcp:2222"
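Once the Deis routers are running (a later step), the firewall and forwarding rule can be spot-checked from your workstation; a sketch using the example forwarding-rule IP noted above:

```shell
# Probe the routers' health-check path through the load balancer;
# expect an HTTP response only after the routers are deployed
curl -i http://23.251.153.6/health-check
```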

DNS

We can create DNS records in Google Cloud DNS using the gcloud utility. In our example we will be using the domain name rovid.io. Create the zone:

$ gcloud dns managed-zone create --dns_name rovid.io. --description "Example Deis cluster domain name" rovidio
Creating {'dnsName': 'rovid.io.', 'name': 'rovidio', 'description':
'Example Deis cluster domain name'} in eco-theater-654

Do you want to continue (Y/n)?  Y

{
    "creationTime": "2014-07-28T00:01:45.835Z",
    "description": "Example Deis cluster domain name",
    "dnsName": "rovid.io.",
    "id": "1374035518570040348",
    "kind": "dns#managedZone",
    "name": "rovidio",
    "nameServers": [
        "ns-cloud-d1.googledomains.com.",
        "ns-cloud-d2.googledomains.com.",
        "ns-cloud-d3.googledomains.com.",
        "ns-cloud-d4.googledomains.com."
    ]
}

Note the nameServers array in the JSON output. We will need to point our domain at these name servers at the upstream registrar.
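If you save that JSON output to a file (zone.json is an assumed name here), the name servers can be pulled out with Python's standard json module; a sketch:

```shell
# Extract the nameServers list from the saved zone-creation JSON
# (stdlib json only; zone.json holds the output shown above)
python3 -c 'import json, sys; print("\n".join(json.load(sys.stdin)["nameServers"]))' < zone.json
```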

Now edit the zone to add the Deis endpoint and wildcard DNS:

$ gcloud dns records --zone rovidio edit
{
    "additions": [
        {
            "kind": "dns#resourceRecordSet",
            "name": "rovid.io.",
            "rrdatas": [
                "ns-cloud-d1.googledomains.com. dns-admin.google.com. 2 21600 3600 1209600 300"
            ],
            "ttl": 21600,
            "type": "SOA"
        }
    ],
    "deletions": [
        {
            "kind": "dns#resourceRecordSet",
            "name": "rovid.io.",
            "rrdatas": [
                "ns-cloud-d1.googledomains.com. dns-admin.google.com. 1 21600 3600 1209600 300"
            ],
            "ttl": 21600,
            "type": "SOA"
        }
    ]
}

You will want to add two records as JSON objects. Here is an example edit with the two A record additions:

{
    "additions": [
        {
            "kind": "dns#resourceRecordSet",
            "name": "rovid.io.",
            "rrdatas": [
                "ns-cloud-d1.googledomains.com. dns-admin.google.com. 2 21600 3600 1209600 300"
            ],
            "ttl": 21600,
            "type": "SOA"
        },
        {
            "kind": "dns#resourceRecordSet",
            "name": "deis.rovid.io.",
            "rrdatas": [
                "23.251.153.6"
            ],
            "ttl": 21600,
            "type": "A"
        },
        {
            "kind": "dns#resourceRecordSet",
            "name": "*.dev.rovid.io.",
            "rrdatas": [
                "23.251.153.6"
            ],
            "ttl": 21600,
            "type": "A"
        }
    ],
    "deletions": [
        {
            "kind": "dns#resourceRecordSet",
            "name": "rovid.io.",
            "rrdatas": [
                "ns-cloud-d1.googledomains.com. dns-admin.google.com. 1 21600 3600 1209600 300"
            ],
            "ttl": 21600,
            "type": "SOA"
        }
    ]
}
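Once the edit is saved, the new records can be checked directly against one of the Google name servers, without waiting for delegation; a sketch using the example names from this walkthrough:

```shell
# Query Google's name server directly for the new records
dig +short @ns-cloud-d1.googledomains.com deis.rovid.io
dig +short @ns-cloud-d1.googledomains.com some-app.dev.rovid.io   # wildcard match
```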

Deis

Time to install Deis!

Install

Clone and check out a version of Deis. In this example we will deploy version 0.10.0:

$ git clone https://github.com/deis/deis.git deis
remote: Counting objects: 3180, done.
remote: Compressing objects: 100% (1550/1550), done.
remote: Total 3180 (delta 1680), reused 2779 (delta 1419)
Receiving objects: 100% (3180/3180), 1.16 MiB | 565.00 KiB/s, done.
Resolving deltas: 100% (1680/1680), done.
From https://github.com/deis/deis

$ cd deis

$ git checkout v0.10.0
Note: checking out 'v0.10.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 64708ab... chore(docs): update CLI versions and download links

Then install the CLI:

$ sudo pip install --upgrade ./client

Setup

The FLEETCTL_TUNNEL environment variable provides an SSH gateway for fleetctl to use when communicating with the CoreOS cluster. Use the public IP address of one of the CoreOS nodes we deployed earlier:

export FLEETCTL_TUNNEL=23.236.59.66
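fleetctl authenticates over SSH using your local ssh-agent, so the deis key generated earlier should be loaded before running any fleetctl commands; a sketch, assuming the key path from the ssh-keygen step:

```shell
# Load the deis key into the agent so fleetctl can tunnel through the node
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/deis
```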

Verify the CoreOS cluster is operational and that we can see all our nodes:

$ fleetctl list-machines
MACHINE		IP		METADATA
b0d96509...	10.240.19.74	-
c6e39062...	10.240.174.34	-
e084a014...	10.240.26.179	-

Now we can bootstrap the Deis containers. DEIS_NUM_INSTANCES should match the number of Compute Engine instances launched. DEIS_NUM_ROUTERS should be at least 2, but can also match the node count:

DEIS_NUM_INSTANCES=3 DEIS_NUM_ROUTERS=3 make run
Job deis-router.1.service loaded on 47c540a2.../10.240.62.89
Job deis-router.2.service loaded on 794c2897.../10.240.194.149
Job deis-router.3.service loaded on 177b5a76.../10.240.98.27
Job deis-builder-data.service loaded on 177b5a76.../10.240.98.27
Job deis-database-data.service loaded on 47c540a2.../10.240.62.89
Job deis-logger-data.service loaded on 177b5a76.../10.240.98.27
Job deis-registry-data.service loaded on 47c540a2.../10.240.62.89
fleetctl --strict-host-key-checking=false load registry/systemd/deis-registry.service logger/systemd/deis-logger.service cache/systemd/deis-cache.service database/systemd/deis-database.service
Job deis-logger.service loaded on 177b5a76.../10.240.98.27
Job deis-registry.service loaded on 47c540a2.../10.240.62.89
Job deis-cache.service loaded on 47c540a2.../10.240.62.89
Job deis-database.service loaded on 47c540a2.../10.240.62.89
fleetctl --strict-host-key-checking=false load controller/systemd/*.service
Job deis-controller.service loaded on 177b5a76.../10.240.98.27
fleetctl --strict-host-key-checking=false load builder/systemd/*.service
Job deis-builder.service loaded on 177b5a76.../10.240.98.27
Deis components may take a long time to start the first time they are initialized.
Waiting for 1 of 3 deis-routers to start...
fleetctl --strict-host-key-checking=false start -no-block deis-router.1.service; fleetctl --strict-host-key-checking=false start -no-block deis-router.2.service; fleetctl --strict-host-key-checking=false start -no-block deis-router.3.service;
Triggered job deis-router.1.service start
Triggered job deis-router.2.service start
Triggered job deis-router.3.service start
Waiting for deis-registry to start...
fleetctl --strict-host-key-checking=false start -no-block registry/systemd/deis-registry.service logger/systemd/deis-logger.service cache/systemd/deis-cache.service database/systemd/deis-database.service
Triggered job deis-registry.service start
Triggered job deis-logger.service start
Triggered job deis-cache.service start
Triggered job deis-database.service start
Waiting for deis-controller to start...
fleetctl --strict-host-key-checking=false start -no-block controller/systemd/*
Triggered job deis-controller.service start
Waiting for deis-builder to start...
fleetctl --strict-host-key-checking=false start -no-block builder/systemd/*.service
Triggered job deis-builder.service start
Your Deis cluster is ready to go! Continue following the README to login and use Deis.

This operation will take a while as all the Deis systemd units are loaded into the CoreOS cluster and the Docker images are pulled down. Grab some iced tea!

Verify that all the units are active after the make run operation completes:

$ fleetctl list-units
UNIT				STATE		LOAD	ACTIVE	SUB	DESC			MACHINE
deis-builder-data.service	loaded		loaded	active	exited	deis-builder-data	177b5a76.../10.240.98.27
deis-builder.service		launched	loaded	active	running	deis-builder		177b5a76.../10.240.98.27
deis-cache.service		launched	loaded	active	running	deis-cache		47c540a2.../10.240.62.89
deis-controller.service		launched	loaded	active	running	deis-controller		177b5a76.../10.240.98.27
deis-database-data.service	loaded		loaded	active	exited	deis-database-data	47c540a2.../10.240.62.89
deis-database.service		launched	loaded	active	running	deis-database		47c540a2.../10.240.62.89
deis-logger-data.service	loaded		loaded	active	exited	deis-logger-data	177b5a76.../10.240.98.27
deis-logger.service		launched	loaded	active	running	deis-logger		177b5a76.../10.240.98.27
deis-registry-data.service	loaded		loaded	active	exited	deis-registry-data	47c540a2.../10.240.62.89
deis-registry.service		launched	loaded	active	running	deis-registry		47c540a2.../10.240.62.89
deis-router.1.service		launched	loaded	active	running	deis-router		47c540a2.../10.240.62.89
deis-router.2.service		launched	loaded	active	running	deis-router		794c2897.../10.240.194.149
deis-router.3.service		launched	loaded	active	running	deis-router		177b5a76.../10.240.98.27
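If any unit reports failed or stays inactive instead of running, fleetctl can fetch that unit's journal from whichever host it landed on; a sketch, with deis-builder as an example unit:

```shell
# Show the systemd journal for one Deis unit via the fleet tunnel
fleetctl journal deis-builder.service
```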

Everything looks good! Register the admin user. The first user added to the system becomes the admin:

$ deis register http://deis.rovid.io
username: andyshinn
password:
password (confirm):
email: andys@andyshinn.as
Registered andyshinn
Logged in as andyshinn

You are now registered and logged in. Create a new cluster named dev to run applications under. You could name this cluster something other than dev; we only use dev to illustrate that a cluster can be restricted to certain CoreOS nodes. The hosts are the internal IP addresses of the CoreOS nodes:

$ deis clusters:create dev rovid.io --hosts 10.240.33.107,10.240.94.33,10.240.28.163 --auth ~/.ssh/deis
Creating cluster... done, created dev

Add your SSH key so you can publish applications:

$ deis keys:add
Found the following SSH public keys:
1) id_rsa.pub andy
Which would you like to use with Deis? 1
Uploading andy to Deis...done

Applications

Creating an application requires that the application already be tracked in git. Clone an example application and deploy it:

$ git clone https://github.com/deis/example-ruby-sinatra.git
Cloning into 'example-ruby-sinatra'...
remote: Counting objects: 98, done.
remote: Compressing objects: 100% (50/50), done.
remote: Total 98 (delta 42), reused 97 (delta 42)
Unpacking objects: 100% (98/98), done.
Checking connectivity... done.
$ cd example-ruby-sinatra
$ deis create
Creating application... done, created breezy-frosting
Git remote deis added

Time to push:

$ git push deis master
Counting objects: 98, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (92/92), done.
Writing objects: 100% (98/98), 20.95 KiB | 0 bytes/s, done.
Total 98 (delta 42), reused 0 (delta 0)
-----> Ruby app detected
-----> Compiling Ruby/Rack
-----> Using Ruby version: ruby-1.9.3
-----> Installing dependencies using 1.6.3
       Running: bundle install --without development:test --path vendor/bundle --binstubs vendor/bundle/bin -j4 --deployment
       Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
       installing your bundle as root will break this application for all non-root
       users on this machine.
       Fetching gem metadata from http://rubygems.org/..........
       Fetching additional metadata from http://rubygems.org/..
       Using bundler 1.6.3
       Installing rack 1.5.2
       Installing tilt 1.3.6
       Installing rack-protection 1.5.0
       Installing sinatra 1.4.2
       Your bundle is complete!
       Gems in the groups development and test were not installed.
       It was installed into ./vendor/bundle
       Bundle completed (5.72s)
       Cleaning up the bundler cache.
-----> Discovering process types
       Procfile declares types -> web
       Default process types for Ruby -> rake, console, web
-----> Compiled slug size is 12M
remote: -----> Building Docker image
remote: Sending build context to Docker daemon 11.77 MB
remote: Sending build context to Docker daemon
remote: Step 0 : FROM deis/slugrunner
remote:  ---> f607bc8783a5
remote: Step 1 : RUN mkdir -p /app
remote:  ---> Running in dd1cb10534c0
remote:  ---> 3151b07f7623
remote: Removing intermediate container dd1cb10534c0
remote: Step 2 : ADD slug.tgz /app
remote:  ---> b86143c577ae
remote: Removing intermediate container 63dca22b29d6
remote: Step 3 : ENTRYPOINT ["/runner/init"]
remote:  ---> Running in 43c572eacc69
remote:  ---> 6eeace9fea7e
remote: Removing intermediate container 43c572eacc69
remote: Successfully built 6eeace9fea7e
remote: -----> Pushing image to private registry
remote:
remote:        Launching... done, v2
remote:
remote: -----> breezy-frosting deployed to Deis
remote:        http://breezy-frosting.dev.rovid.io
remote:
remote:        To learn more, use `deis help` or visit http://deis.io
remote:
To ssh://git@deis.rovid.io:2222/breezy-frosting.git
 * [new branch]      master -> master

Your application is now built and running inside the Deis cluster! It should be reachable at http://breezy-frosting.dev.rovid.io. Check the application information:

$ deis apps:info
=== breezy-frosting Application
{
  "updated": "2014-07-28T00:35:45.528Z",
  "uuid": "fd926c94-5b65-48e8-8afe-7ac547c12bd6",
  "created": "2014-07-28T00:33:35.346Z",
  "cluster": "dev",
  "owner": "andyshinn",
  "id": "breezy-frosting",
  "structure": "{\"web\": 1}"
}

=== breezy-frosting Processes

--- web:
web.1 up (v2)

=== breezy-frosting Domains
No domains

Can we connect to the application?

$ curl -s http://breezy-frosting.dev.rovid.io
Powered by Deis!

It works! Enjoy your Deis cluster in Google Compute Engine!

@carmstrong

@andyshinn Any chance you'd want to PR this to get it into /contrib? Sounds like this works beautifully on the new CoreOS

@andyshinn (author)

@carmstrong Yeah! I'm happy to run through this, clean it up, and open a pull request late next week. If you are feeling ambitious, go ahead and run through yourself and feel free to commit.

senagbe commented Jul 25, 2014

Hi guys. Any eta on this? Looks great. What needs to be tidied up?

senagbe commented Jul 25, 2014

I am going to try running through it tomorrow (or Monday at the latest). I might be able to make any minor changes. Should I PR if I have the time?

@andyshinn (author)

I did a lot of refactoring and cleanup today. Give it a try now. I followed the instructions and got the example Ruby application up with little issue. Let me know if you run into any problems.

senagbe commented Jul 28, 2014

Cheers, I'll have a look at it tomorrow GMT+7. I'll let you know how it goes. Cheers

@andyshinn (author)

For anyone looking for notifications specific to this Gist, I'd recommend subscribing to deis/deis#1036. I just realized Gists don't have any proper subscribing or notifications 😦

senagbe commented Jul 29, 2014

@andyshinn One of my devs just pointed out I can fork this and send you a PR, so I'll do that.
