Consul

### How we use Consul

We're big fans of Consul at aster.is. How do we use it?

#### Service discovery

We feel that service discovery is a fundamental building block upon which we can build more flexible systems. With Consul, service discovery is part of every node.

Consul makes it easy to register the services that run on a server. For example, by adding a file like zookeeper.json to the Consul configuration directory, we can register static services across the cluster. This means that any node can simply connect to zookeeper.service.consul instead of relying on a hardcoded list of IPs. We use this pattern quite a bit to reduce the amount of automation code we need to write.
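A minimal sketch of what such a service definition might look like (the tag and port here are assumptions about a typical ZooKeeper deployment):

```json
{
  "service": {
    "name": "zookeeper",
    "tags": ["coordination"],
    "port": 2181
  }
}
```

Once the agent picks this file up, every node in the cluster can resolve the service by name.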

In addition to the HTTP API, Consul exposes service discovery via DNS, so clients don't need additional libraries installed to find components.
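This means any process that can do a DNS lookup can find the service. Assuming the agent's default DNS port of 8600, a query looks like this:

```sh
# Ask the local Consul agent's DNS interface for healthy zookeeper instances.
dig @127.0.0.1 -p 8600 zookeeper.service.consul +short
```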

Another thing we can do is modify system components dynamically based on service discovery. We've been using consul-template to dynamically generate iptables rules. With only a few lines of template code, we can open ports to hosts as they join the cluster.
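As a sketch, a template along these lines could emit one iptables rule per healthy instance of a service (the service name "web" and port 8080 are hypothetical):

```
# rules.ctmpl -- rendered by consul-template into an iptables rules fragment
{{ range service "web" }}-A INPUT -s {{ .Address }} -p tcp --dport 8080 -j ACCEPT
{{ end }}
```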

Consul also makes it easy for nodes to find each other by connecting to hostname.node.consul. In practice, this means we spend less time modifying hosts files.
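Node lookups go through the same DNS interface (the node name here is made up):

```sh
# Resolve another cluster member by name instead of editing /etc/hosts.
dig @127.0.0.1 -p 8600 worker-03.node.consul +short
```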

#### Key/value store

Having a Raft-based key/value store available across the cluster opens up many possibilities. We have multiple tools that store their configuration in Consul.
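The K/V store is reachable through the local agent's HTTP API; the key and value below are illustrative:

```sh
# Write a configuration value into the K/V store...
curl -X PUT -d 'info' http://localhost:8500/v1/kv/apps/app-1/log_level

# ...and read it back (values come back base64-encoded in a JSON envelope).
curl http://localhost:8500/v1/kv/apps/app-1/log_level
```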

We are also using Consul as the highly available storage backend for the Vault secret store.
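Wiring Vault to Consul takes only a few lines of configuration; a minimal sketch (the address and path shown are the usual defaults, adjust as needed):

```hcl
# vault.hcl -- use Consul as Vault's highly available storage backend
backend "consul" {
  address = "127.0.0.1:8500"
  path    = "vault"
}
```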

We're developing a tool called gestalt that is designed to let different groups use Consul as a configuration backend for their apps. Gestalt wraps Docker's libkv (Consul, etcd, ZooKeeper) with JSON Schema enforcement so that we can build management control panels.

For example, team A could upload a schema for their app into the K/V path teams/dev/configs/app-1, and their containers could then connect to fetch configuration. Since configurations are described by a JSON Schema, frontend components can simply query the K/V store to build up UIs.
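A sketch of what such a schema might contain (the properties are hypothetical; gestalt's actual conventions may differ):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "app-1 configuration",
  "type": "object",
  "properties": {
    "log_level": { "type": "string", "enum": ["debug", "info", "warn", "error"] },
    "port": { "type": "integer", "minimum": 1, "maximum": 65535 }
  },
  "required": ["port"]
}
```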

#### Health & configuration checks

Instead of using an outside tool like Nagios, we can embed health checks into Consul itself.
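A check definition dropped into the agent's configuration directory might look like this (the endpoint and interval are assumptions):

```json
{
  "check": {
    "id": "app-1-http",
    "name": "app-1 HTTP health endpoint",
    "http": "http://localhost:8080/health",
    "interval": "10s"
  }
}
```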

This allows our monitoring platform to be integrated with service discovery, and gives us a single pane of glass for the status of our clusters.

We've written a tool called distributive that we use to perform health checks and system configuration tests. In fact, we use distributive to automatically verify that our systems are configured as intended.

Distributive is very simple to install and run: it's just a static binary plus a JSON list of checks. It can be run from the command line or called by tools like Nagios.
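As a rough sketch, a check list might look something like this (the field and check names are illustrative, not distributive's exact schema):

```json
{
  "Name": "basic-node-checks",
  "Checklist": [
    { "Check": "file", "Parameters": ["/etc/consul/zookeeper.json"] },
    { "Check": "port", "Parameters": ["8500"] }
  ]
}
```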

As can be seen below, our dashboard includes live status updates from Consul:

*(screenshot: mantl dashboard)*
