@imbstack · Last active April 4, 2019

Worker Manager Plan

The following is my thinking on how we can strike a good balance between sharing configuration and keeping configuration understandable.

One provider per workertype

Unfortunately, I think the current plan of having the provisioner compare bids across clouds is too aggressive for the current time. The time and money units that different clouds make available aren't easily comparable (and in gcp's case, it is hard to get the information in an understandable way at all). We can still allow bidding within a provider, but each workertype will need to specify which provider is responsible for creating its workers.

This has the side benefit of simplifying the provisioning loop and the provider interfaces considerably. Each provider will still need to expose a standardised way to list/terminate/etc. instances, but that should be doable.
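A per-provider interface along these lines might be sketched as follows. The method names here are illustrative only, not the actual worker-manager API:

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Sketch of the standardised per-provider surface.

    Method names are hypothetical, not the real worker-manager interface.
    """

    @abstractmethod
    def provision(self, worker_type: str, config: dict) -> None:
        """Create instances for a workertype, bidding only within this cloud."""

    @abstractmethod
    def list_workers(self, worker_type: str) -> list:
        """Return the instances this provider is running for a workertype."""

    @abstractmethod
    def terminate_worker(self, worker_id: str) -> None:
        """Shut down a single instance."""


class AwsProvider(Provider):
    """Toy in-memory implementation, just to show the shape of the interface."""

    def __init__(self):
        self.instances = {}

    def provision(self, worker_type, config):
        worker_id = f"{worker_type}-{len(self.instances)}"
        self.instances[worker_id] = worker_type

    def list_workers(self, worker_type):
        return [i for i, wt in self.instances.items() if wt == worker_type]

    def terminate_worker(self, worker_id):
        self.instances.pop(worker_id, None)
```

Because every provider exposes the same list/terminate surface, the provisioning loop only needs to know which single provider owns each workertype.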

Provider Config Schemas

The current rules-based system for generating worker configuration is proving difficult to implement efficiently and might be difficult to reason about in practice. Let's choose a simpler system.

Each provider will have a set of schemas that define what a valid worker configuration is. It will also have a set of schemas that define what a valid shared config looks like. An example of this would be awsRegionsMapping in aws-reference.yml below.

All of this will be hardcoded into the aws provider itself. At docs-generation time, it will publish this reference in the same way that we do for api/event/log references. This way it will be available in the UI to validate inputs, as well as showing up in the documentation. I believe that with this system we can validate that all configurations are correct at the time they are created/edited, rather than waiting for a loop of the provisioner to tell us what went wrong.
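Validating at create/edit time could look roughly like the following. This is a minimal hand-rolled stand-in for JSON-schema validation, just to illustrate the flow; a real deployment would run a proper JSON-schema validator against the published provider reference:

```python
def validate(config, schema):
    """Return a list of error strings; an empty list means the config is acceptable.

    Supports only the 'type' and 'properties' keywords, as a sketch.
    """
    errors = []
    type_map = {"object": dict, "array": list, "number": (int, float), "string": str}
    expected = type_map.get(schema.get("type"))
    if expected and not isinstance(config, expected):
        errors.append(f"expected {schema['type']}, got {type(config).__name__}")
        return errors
    # Recurse into any declared properties that are present.
    for name, subschema in schema.get("properties", {}).items():
        if name in config:
            errors.extend(f"{name}: {e}" for e in validate(config[name], subschema))
    return errors


# Hypothetical fragment of the aws-worker-config schema from below.
worker_config_schema = {
    "type": "object",
    "properties": {
        "workerType": {"type": "string"},
        "minCapacity": {"type": "number"},
    },
}
```

With this, a bad edit like `{"workerType": 5}` is rejected immediately at save time instead of surfacing later as a provisioner-loop failure.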

The nice part about these shared configs is that it would be easy for whatever is generating new AMIs to update the configuration and know that all workertypes will pick up the new config.

# This would be hardcoded by the aws provider
---
workerConfiguration:
  schema: aws-worker-config.yaml
sharedConfigSchemas:
  - name: awsRegionsDockerWorker # This is a list of regions and associated ami
    schema: aws-regions-docker-worker.yml
  - name: awsInstanceTypesSmall
    schema: aws-instancetypes-small.yml
  - name: awsInstanceTypesLarge
    schema: aws-instancetypes-large.yml
# This would be hardcoded by the aws provider
---
$schema: "/schemas/common/metadata-metaschema.json#"
$id: "/schemas/common/aws-regions.json#"
title: AWS Regions configuration
description: ...
type: array
items:
  type: object
  properties: ...
# This will be hardcoded into the aws provider
---
$schema: "/schemas/common/metadata-metaschema.json#"
$id: "/schemas/common/aws-worker-config.json#"
title: AWS Worker Configuration
description: ...
type: object
properties:
  workerType:
    type: string
  minCapacity:
    type: number
  ...
# This would be a record that lives in azure
# This would be a shared config that is stated to conform to /schemas/common/aws-regions.json#
# Anything that relies on it can assume it has that structure, even across updates.
# The schema for a shared config cannot be changed once it is created; we also do not delete shared configs.
# Scopes to manage this worker-config would be needed in order to change it.
---
[
  {
    "region": "us-east-1",
    "secrets": {},
    "scopes": [],
    "userData": {},
    "launchSpec": {
      "ImageId": "ami-0d2e14f2a00a42c94"
    }
  },
  {
    "region": "us-west-1",
    "secrets": {},
    "scopes": [],
    "userData": {},
    "launchSpec": {
      "ImageId": "ami-010f8632e858089cd"
    }
  }
]
# This would also be in azure. It could be managed by whoever has scopes to manage the github-worker workertype
---
workerType: github-worker
minCapacity: 1
maxCapacity: 100
provider: aws-provider
regions: {$eval: context.sample-shared-aws-region-config-docker}
instanceTypes: {$eval: context.sample-shared-aws-instance-types}
scopes:
- something.particular.to.this.workertype
description: foo
owner: bar
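The `$eval` references above would be resolved against the stored shared configs when the provider reads a workertype definition. A simplified resolver, assuming only `{$eval: context.<name>}` lookups (the real system would presumably use JSON-e, which supports far more), could look like:

```python
def resolve(value, shared_configs):
    """Recursively replace {$eval: context.<name>} nodes with the named shared config.

    This is a toy stand-in for JSON-e evaluation, handling only this one form.
    """
    if isinstance(value, dict):
        if set(value) == {"$eval"} and value["$eval"].startswith("context."):
            name = value["$eval"][len("context."):]
            return shared_configs[name]
        return {k: resolve(v, shared_configs) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve(v, shared_configs) for v in value]
    return value


# Hypothetical stored shared config, conforming to aws-regions.json.
shared = {
    "sample-shared-aws-region-config-docker": [
        {"region": "us-east-1", "launchSpec": {"ImageId": "ami-0d2e14f2a00a42c94"}},
    ],
}

workertype = {
    "workerType": "github-worker",
    "regions": {"$eval": "context.sample-shared-aws-region-config-docker"},
}
```

Because the shared config's schema is frozen at creation, whatever updates the AMI mapping can rewrite the stored record and every workertype referencing it picks up the new value on its next resolution.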