hypothetical stagecoach cluster management commands



Save hub locations and passwords by name. Inspired by git remote.

$ stagecoach remote add mysite --hub=localhost:6000 --password=beepboop


Set up a git http server and the upnode control interface listening on the given port.

$ stagecoach hub 6000


Create a drone worker that connects to the mysite hub.

$ stagecoach drone mysite


Create a bouncy http proxy to route requests through.

$ stagecoach router mysite 80


Contact the hub to compute the git resource and do a git push:

$ stagecoach push mysite


Deploy a particular version:

$ stagecoach deploy auth@0.1.5

Deploy the latest version:

$ stagecoach deploy web

Deploy a version based on a git hash:

$ stagecoach deploy auth@35819ef7d025999cae31566b449212b2cadaf3fd

Deploy more than 1 instance:

$ stagecoach deploy web@0.2.2 --instances=2

Deploy to specific hosts:

$ stagecoach deploy web@0.2.2 --host= --host=


List the running processes by name@version with the commit hash and IP addresses that the services are deployed to.

$ stagecoach list mysite
web@0.2.0     40548daf17f7d25ec0f1925b97d24bc05043e4ed
web@0.2.2     35819ef7d025999cae31566b449212b2cadaf3fd
auth@0.1.5    8046c32c018dbf8ae4b9a67fc3e9047132d4f12d
auth@0.2.0    35819ef7d025999cae31566b449212b2cadaf3fd
logger@0.0.2  8046c32c018dbf8ae4b9a67fc3e9047132d4f12d


List the running services grouped by machine.

$ stagecoach servers mysite
    web@0.2.0     40548daf17f7d25ec0f1925b97d24bc05043e4ed
    web@0.2.2     35819ef7d025999cae31566b449212b2cadaf3fd
    auth@0.2.0    35819ef7d025999cae31566b449212b2cadaf3fd
    web@0.2.0     40548daf17f7d25ec0f1925b97d24bc05043e4ed
    auth@0.1.5    8046c32c018dbf8ae4b9a67fc3e9047132d4f12d
    logger@0.0.2  8046c32c018dbf8ae4b9a67fc3e9047132d4f12d


Show the dependency graph of the cluster.

$ stagecoach deps mysite
web@0.2.0
├─┬ auth@0.1.5
│ └── logger@0.0.2
└── logger@0.0.2
web@0.2.2
├─┬ auth@0.2.0
│ └── logger@0.0.2
└── logger@0.0.2
auth@0.1.5
└── logger@0.0.2
auth@0.2.0
└── logger@0.0.2


Looks cool! Is it safe to assume that your drones, router, and hub could all be on different physical hosts? Also, how does the router choose a backend for an incoming request? Would it be possible for me as a user to define routing based on a version header? E.g., could I implement the behavior of


Yep, the services are designed to operate on different hosts.

I'm just going to use bouncy for the routing so it'll be entirely programmatic. Later I'll make an a/b platform for performing experiments at the proxy level, but anything that can connect to the control service over dnode will be operable with this.
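
For the version-header question, a rough sketch of what programmatic routing through bouncy could look like (the header name, ports, and backend table are made up for illustration; nothing here is stagecoach's actual implementation):

    var bouncy = require('bouncy');

    // pretend mapping of deployed web versions to backend ports;
    // in practice this lookup would come from the hub rather than a literal
    var backends = { '0.2.0': 8001, '0.2.2': 8002 };

    bouncy(function (req, res, bounce) {
        var version = req.headers['x-web-version'];
        var port = backends[version] || backends['0.2.2'];
        bounce(port);
    }).listen(80);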


This looks really nice! How would service dependencies be declared? I assume this may be similar to module dependencies in a package.json and then you'll use seaport to resolve those dependencies?

Any thought to how you might manage other config like key/secret pairs, logging levels, other service parameters, etc.? What if you could register this type of config with stagecoach like you register remotes? The values could be a JSON object. Then just as an application depends on modules of a certain version, and services can depend on other services of a certain version, the service could also depend on configs of a particular version. That version could be semver-like or just latest.

$ stagecoach config add google_oauth

where it would look for a google_oauth.json file, or you could just pass the JSON itself as a parameter if it's simple:

$ stagecoach config add foo '{ "admin_email":"" }'

Then those config dependencies could be included somehow in the dependency graph

├─ config:google_oauth@1.0.0
└──┬ logger@0.0.2
   └── config:foo@latest

Curious if you've given this much thought.


A more direct way to add config to the API would be to let ports.service() in seaport take an optional object that other services can read through the parameters array in ports.get(). Then applications can expose relevant shared configuration without stagecoach having to know how that works.
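
For example, a sketch of how that proposed API might read (this is the suggestion above, not seaport's current API; the oauthCallback option is invented):

    var seaport = require('seaport');
    var ports = seaport.connect('localhost', 9090);

    // service side: register and attach an optional parameters object
    var port = ports.service('auth@0.1.5', { oauthCallback: '/auth/callback' });

    // consumer side: read those parameters back out of the returned records
    ports.get('auth@0.1.x', function (ps) {
        console.log(ps[0].host, ps[0].port, ps[0].parameters);
    });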


Nice, I really like that idea, especially when a particular service owns a particular config value. What do you think about environment-level config, shared across services? I suppose you could just create a config service (nothing specific to stagecoach or seaport) that would register with seaport; other services would look it up via seaport and then query that service for config.
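
A minimal sketch of that config-service idea, assuming seaport's register/get behave as described above; the config@1.0.0 role, the plain-HTTP JSON protocol, and the values are all placeholders:

    var http = require('http');
    var seaport = require('seaport');
    var ports = seaport.connect('localhost', 9090);

    // environment-level config shared across services
    var config = { admin_email: 'ops@example.com', log_level: 'info' };

    // register like any other versioned service and serve the config as JSON
    http.createServer(function (req, res) {
        res.setHeader('content-type', 'application/json');
        res.end(JSON.stringify(config));
    }).listen(ports.register('config@1.0.0'));

    // any other service finds it the same way it finds its dependencies
    ports.get('config@1.0.x', function (ps) {
        http.get({ host: ps[0].host, port: ps[0].port }, function (res) {
            var body = '';
            res.on('data', function (c) { body += c });
            res.on('end', function () { console.log(JSON.parse(body)) });
        });
    });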
