Step By Step Guide to Configure a CoreOS Cluster From Scratch


This guide describes how to bootstrap a new production CoreOS cluster as a high-availability service in about 15 minutes, using etcd2, Fleet, Flannel, confd, an Nginx balancer, and Docker.



CoreOS is a powerful Linux distribution built to make large, scalable deployments on varied infrastructure simple to manage. CoreOS is designed for security, consistency, and reliability. Instead of installing packages via yum or apt, CoreOS uses Linux containers to manage your services at a higher level of abstraction. A single service's code and all dependencies are packaged within a container that can be run on one or many CoreOS machines.

The main building blocks of CoreOS are etcd, Docker, and systemd.

See: 7 reasons why you should be using CoreOS with Docker.

Tools Used

  • etcd: key-value store for service registration and discovery
  • fleet: scheduling and failover of Docker containers across the CoreOS cluster
  • flannel: gives each Docker container a unique IP, so you can reach a service on its internal port (i.e. port 80 rather than a randomly mapped one like 32679)
  • confd: watches etcd for nodes arriving/leaving and updates (with a reload) the nginx configuration from a specified template (see the sketch below)
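For example, a minimal confd setup consists of a template resource and an nginx template. The file names and etcd key layout below are hypothetical, chosen to match the someapp units later in this guide:

# /etc/confd/conf.d/someapp.toml
[template]
src = "someapp.conf.tmpl"
dest = "/etc/nginx/conf.d/someapp.conf"
keys = ["/services/someapp/upstream"]
reload_cmd = "/usr/sbin/nginx -s reload"

# /etc/confd/templates/someapp.conf.tmpl
upstream someapp {
{{range getvs "/services/someapp/upstream/*"}}
    server {{.}};
{{end}}
}

confd watches the keys listed in the resource and, whenever an instance is announced or expires, rewrites the upstream block and reloads nginx.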

Basic Configuration

Connect your servers as a cluster

  1. Find your Cloud Config file location. The examples below use /var/lib/coreos-install/user_data.
  2. Open your config for editing:
sudo vi /var/lib/coreos-install/user_data
  3. Generate a new token for your cluster at https://discovery.etcd.io/new?size=X, where X is the number of servers.
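For example, for a three-server cluster:

curl -w "\n" 'https://discovery.etcd.io/new?size=3'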
  4. Merge the following lines into your Cloud Config (the section layout
     follows the standard CoreOS cloud-config example):

#cloud-config
coreos:
  etcd2:
    # Generate a new token for each unique cluster from
    # https://discovery.etcd.io/new?size=X
    discovery: https://discovery.etcd.io/<token>
    # Multi-region and multi-cloud deployments need to use $public_ipv4
    advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
    initial-advertise-peer-urls: http://$private_ipv4:2380
    # Listen on both the official ports and the legacy ports
    # Legacy ports can be omitted if your application doesn't depend on them
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380
  fleet:
    public-ip: $private_ipv4
    metadata: region=europe,public_ip=$public_ipv4
  flannel:
    interface: $private_ipv4
  units:
    - name: etcd2.service
      command: start
      # See issue:
      drop-ins:
        - name: "timeout.conf"
          content: |
            [Service]
            TimeoutStartSec=0
    - name: fleet.service
      command: start
    # Network configuration should be here, e.g. (unit names are illustrative):
    # - name: 10-eno1.network
    #   runtime: true
    #   content: "[Match]\nName=eno1\n\n[Network]\nDHCP=yes\n\n[DHCP]\nUseMTU=9000\n"
    # - name: 10-eno2.network
    #   runtime: true
    #   content: "[Match]\nName=eno2\n\n[Network]\nDHCP=yes\n\n[DHCP]\nUseMTU=9000\n"
    - name: flanneld.service
      command: start
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            # Pick your own flannel subnet; 10.1.0.0/16 is the usual example
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
    - name: docker.service
      command: start
      drop-ins:
        - name: 60-docker-wait-for-flannel-config.conf
          content: |
            [Unit]
            Requires=flanneld.service
            After=flanneld.service
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        Service=docker.service
        BindIPv6Only=both

        [Install]
        WantedBy=sockets.target

  5. Some providers ship a specific configuration preset and require an
     additional step - add lines like these to your Cloud Config to get the
     Private Network working (the unit name is illustrative):
  # ...
  - name: 10-eno2.network
    runtime: true
    content: "[Match]\nName=eno2\n\n[Network]\nDHCP=yes\n\n[DHCP]\nUseMTU=9000\n"
  6. Validate your changes:
sudo coreos-cloudinit -validate --from-file /var/lib/coreos-install/user_data
  7. Reboot the system:
sudo reboot
  8. Check the status of etcd2:
sudo systemctl status -l etcd2

The output should contain the following line:

 Active: active (running)

Sometimes it takes a while. Don't panic, just wait a few minutes.

  9. Repeat these steps for each server in your cluster.

  10. Check your cluster health and fleet status:

# should be healthy
sudo etcdctl cluster-health
# should display all servers
sudo fleetctl list-machines
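On a healthy three-node cluster the output looks roughly like this (member IDs and IPs are illustrative):

$ etcdctl cluster-health
member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://10.0.1.101:2379
member 924e2e83e93f2560 is healthy: got healthy result from http://10.0.1.102:2379
member a8266ecf031671f3 is healthy: got healthy result from http://10.0.1.103:2379
cluster is healthy

$ fleetctl list-machines
MACHINE         IP              METADATA
06090fbc...     10.0.1.101      region=europe
57150b6c...     10.0.1.102      region=europe
9a50eee2...     10.0.1.103      region=europe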

Create Fleet Units

Fleet Scheduling

See: Launching Containers with fleet

Application Unit

  1. Change to your home directory:
cd ~
  2. Create a new Application Template Unit. For example, run vi test-app@.service and add the following lines (the [Unit] and [Service] headers are required for a valid unit file):

[Unit]
Description=Test App %i

[Service]
ExecStartPre=-/usr/bin/docker kill test-app%i
ExecStartPre=-/usr/bin/docker rm test-app%i
ExecStartPre=/usr/bin/docker pull willrstern/node-sample
ExecStart=/usr/bin/docker run -e APPNAME=test-app%i --name test-app%i -P willrstern/node-sample
ExecStop=/usr/bin/docker stop test-app%i
  3. Submit the Application Template Unit to Fleet:
fleetctl submit test-app@.service
  4. Start new instances from the Application Template Unit:
fleetctl start test-app@1
fleetctl start test-app@2
fleetctl start test-app@3
  5. Check that all instances have started and are active. It can take a few minutes. Example command and output:
$ fleetctl list-units
test-app@1.service	e1512f34.../	active	running
test-app@2.service	a78a3229.../	active	running
test-app@3.service	081c8a1e.../	active	running
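To reach a running instance, SSH to its machine and look up the host port Docker mapped for the container. The node-sample app listens on port 3000 inside the container; the mapped port below is illustrative:

fleetctl ssh test-app@1
docker port test-app1 3000   # prints something like 0.0.0.0:32768
curl http://localhost:32768/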

Configure Firewall Rules

Download the firewall script from Deis v1 and run it from your local machine:

curl -O
# run the following line for each server
ssh core@<host1> 'bash -s' <
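If you cannot run the script, a minimal hand-rolled sketch follows. The port list is an assumption (22 for SSH, 80 for the balancer, 2379/2380 plus legacy 4001/7001 for etcd); ideally the etcd ports should be restricted to your private network:

#!/bin/bash
# Illustrative firewall sketch - allow loopback, established traffic,
# SSH, HTTP and etcd, then drop everything else
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
for port in 22 80 2379 2380 4001 7001; do
  sudo iptables -A INPUT -p tcp --dport "$port" -j ACCEPT
done
# set the DROP policy only after the ACCEPT rules are in place
sudo iptables -P INPUT DROP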

Load Balancers and Service Discovery

  1. Download someapp@.service, someapp-discovery@.service and someapp-lb@.service (the full unit files are reproduced at the end of this guide).
  2. Modify those Unit Templates according to your application config.
  3. Submit the modified files to your Fleet:
fleetctl submit someapp@.service
fleetctl submit someapp-discovery@.service
fleetctl submit someapp-lb@.service
  4. Start Unit instances from the templates:
fleetctl start someapp@{1..6}
fleetctl start someapp-discovery@{1..6}
fleetctl start someapp-lb@{1..2}
  5. Verify everything is working:
fleetctl list-units
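With the balancer units active, every LB host should now answer on port 80 (host names are placeholders):

curl -I http://<host1>/
curl -I http://<host2>/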


  • Something went wrong and a service doesn't work

    Use these commands to debug:

    # also for fleet, etcd, flanneld
    sudo systemctl start etcd2
    sudo systemctl status etcd2
    sudo journalctl -xe
    sudo journalctl -xep3
    sudo journalctl -ru etcd2
  • fleetctl list-units displays a failed state for some units

    For local units:

    sudo fleetctl journal someapp@1

    For remote units:

    fleetctl journal someapp@1
  • fleetctl responds with: Error running remote command: SSH_AUTH_SOCK environment variable is not set. Verify ssh-agent is running.

    1. Check that you have connected with ssh -A.
    2. Check that you are not using sudo on remote machines; a process under sudo can't access your SSH_AUTH_SOCK.
  • Error response from daemon: Conflict. The name "someapp1" is already in use by container c4acbb70c654. You have to delete (or rename) that container to be able to reuse that name.

    fleetctl stop someapp@1
    docker rm someapp1
    fleetctl start someapp@1
  • The fleetctl ssh command doesn't work

    1. Ensure your public key has been added to user_data on each server.
    2. Connect to your server with SSH agent:
    eval `ssh-agent -s`
    ssh-add ~/.ssh/id_rsa
    ssh -A <your-host>

Update CoreOS

sudo update_engine_client -update
sudo reboot
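After the reboot you can confirm the update state and the running release (both commands are standard on CoreOS):

update_engine_client -status
cat /etc/os-release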

See the CoreOS update documentation for more details.

Install Deis v1

Attention! Deis v1 seems not to work correctly on some bare metal setups, because Ceph, which v1 uses for storage, is unstable and unpredictable there. But if you would like to experiment, let's go:

  1. Create a backup copy of your original config:
sudo cp /var/lib/coreos-install/user_data /var/lib/coreos-install/user_data.without-deis1
  2. Merge your Cloud Config with the Deis Cloud Config example.

  3. You can configure the Deis Platform from your workstation by following this instruction. The next steps are adapted for a server environment.

  4. Download deisctl:

curl -sSL | sudo sh -s 1.12.3
  5. Set your configuration:
deisctl config platform set domain=<your-domain>
  6. Run the platform installation:
deisctl install platform
  7. Boot up Deis:
deisctl start platform

If you run into problems, check the Docker containers:

docker ps -a

You can also use deisctl's journal and status commands to debug.

  8. Once you see "Deis started.", your Deis platform is running on the cluster. Verify that all Deis units are loaded by running:
deisctl list

All Deis units should be active. Otherwise you can destroy everything and start over; don't forget to remove unused Docker volumes.

Appendix 1 - Info and Tutorials

Appendix 2 - Tools and Services

someapp-discovery@.service:
[Unit]
Description=Announce Someapp%i
# Requirements
Requires=someapp@%i.service
# Dependency ordering
After=someapp@%i.service
[Service]
ExecStart=/bin/sh -c "while true; do etcdctl set /services/someapp/upstream/someapp%i \"$(sleep 5 && docker inspect -f '{{.NetworkSettings.IPAddress}}' someapp%i):3000\" --ttl 60;sleep 45;done"
ExecStop=/usr/bin/etcdctl rm /services/someapp/upstream/someapp%i
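The loop refreshes the key every 45 seconds with a 60-second TTL, so a dead instance drops out of the upstream list within a minute. Once the discovery units are running, the announced instances should be visible in etcd (key paths taken from the unit above):

etcdctl ls /services/someapp/upstream
etcdctl get /services/someapp/upstream/someapp1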
someapp-lb@.service:
[Unit]
Description=Someapp Load Balancer %i
# Requirements
Requires=docker.service
# Dependency ordering
After=docker.service
[Service]
# Let the process take awhile to start up (for first run Docker containers)
TimeoutStartSec=0
# Change killmode from "control-group" to "none" to let Docker remove
# work correctly.
KillMode=none
# Get CoreOS environmental variables
EnvironmentFile=/etc/environment
# Directives with "=-" are allowed to fail without consequence
ExecStartPre=-/usr/bin/docker kill someapp-lb%i
ExecStartPre=-/usr/bin/docker rm someapp-lb%i
ExecStartPre=/usr/bin/docker pull denisizmaylov/nginx-lb
ExecStart=/usr/bin/sh -c "/usr/bin/docker run --name someapp-lb%i --rm -p 80:80 -e SERVICE_NAME=someapp -e ETCD=\"$(ifconfig docker0 | awk '/\\<inet\\>/ { print $2 }'):2379\" denisizmaylov/nginx-lb"
ExecStop=/usr/bin/docker stop someapp-lb%i
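fleet may otherwise schedule both balancer instances on the same machine. An optional [X-Fleet] section in someapp-lb@.service (standard fleet scheduling metadata, shown here as a suggested addition) spreads them across hosts:

[X-Fleet]
Conflicts=someapp-lb@*.service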
someapp@.service:
[Unit]
Description=Someapp %i
Requires=docker.service
After=docker.service
[Service]
# Let the process take awhile to start up (for first run Docker containers)
TimeoutStartSec=0
# Directives with "=-" are allowed to fail without consequence
ExecStartPre=-/usr/bin/docker kill someapp%i
ExecStartPre=-/usr/bin/docker rm someapp%i
ExecStartPre=/usr/bin/docker pull denisizmaylov/node-sample
ExecStart=/usr/bin/docker run -e APPNAME=someapp%i --name someapp%i -P denisizmaylov/node-sample
ExecStop=/usr/bin/docker stop someapp%i
