@markmandel
Last active October 3, 2018 05:35
agones-368

Problem

  • It is desirable to autoscale the Kubernetes cluster to account for increases in player count, and therefore the need for more game servers, in a way that works across cloud providers
  • There currently exists a generic cluster autoscaler project -- but it is mostly targeted at stateless workloads

Design

The overall design revolves around using the open source cluster autoscaler to remove empty Nodes as they appear, while specifically disallowing the autoscaler from evicting GameServer Pods in an attempt to move them to a new Node.

This gives us the following benefits:

  • Implementations for multiple cloud providers are already written and community tested
  • Load testing (on GKE) has been done for us
  • Existing SLOs already exist for the autoscaler
  • Removes any possibility for the autoscaler to cause race conditions when allocating GameServers.

Since the autoscaler can be made to work with Agones GameServers in this way -- this essentially means that scaling and autoscaling can be managed at the Fleet level. If you want a bigger cluster, increase the size of your Fleet, and your cluster will adjust. If you want a smaller cluster, shrink your Fleet, and the cluster will adjust.

Implementation

Write a scheduler that bin packs all our pods into as tight a cluster as possible

A custom scheduler will be built that prioritises scheduling GameServer Pods onto the Nodes that already have the most GameServer Pods. This will ease scaling down, as it means game servers aren't spread out across many Nodes, leaving wasted resource space.
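
To make the intent concrete, here is a minimal sketch of the scoring decision such a scheduler could make, assuming the counts of GameServer Pods per Node have already been gathered. The pickNode function and its input are illustrative names, not an existing API:

```go
package main

import (
	"fmt"
	"sort"
)

// nodeScore pairs a Node name with the number of GameServer Pods it already hosts.
type nodeScore struct {
	name  string
	count int
}

// pickNode returns the candidate Node that already hosts the most GameServer
// Pods, so that new GameServer Pods are packed together rather than spread out.
func pickNode(gameServerPodsPerNode map[string]int) string {
	scores := make([]nodeScore, 0, len(gameServerPodsPerNode))
	for name, count := range gameServerPodsPerNode {
		scores = append(scores, nodeScore{name: name, count: count})
	}
	// Fullest Node first.
	sort.Slice(scores, func(i, j int) bool { return scores[i].count > scores[j].count })
	if len(scores) == 0 {
		return ""
	}
	return scores[0].name
}

func main() {
	nodes := map[string]int{"node-a": 3, "node-b": 0, "node-c": 7}
	fmt.Println(pickNode(nodes)) // node-c
}
```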

(Unless there is a way to do this with the default scheduler, but I've not found one so far -- best I could find was PreferredDuringSchedulingIgnoredDuringExecution on HostName)
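
For reference, a hedged sketch of what that default-scheduler alternative could look like, using a preferred pod affinity on the kubernetes.io/hostname topology key. The label selector below is a placeholder, not the actual labels Agones applies to GameServer Pods:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gameServerAffinity builds a "soft" pod affinity that asks the default
// scheduler to prefer Nodes (topology key kubernetes.io/hostname) that
// already run Pods matching the given labels.
// The "role: gameserver" label is an assumption for illustration; the real
// GameServer Pod labels would need to be used instead.
func gameServerAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAffinity: &corev1.PodAffinity{
			PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{
				{
					Weight: 100,
					PodAffinityTerm: corev1.PodAffinityTerm{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"role": "gameserver"},
						},
						TopologyKey: "kubernetes.io/hostname",
					},
				},
			},
		},
	}
}

func main() {
	// Attach the affinity to a GameServer Pod spec before creating it.
	pod := corev1.Pod{}
	pod.Spec.Affinity = gameServerAffinity()
}
```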

Prioritise Allocating GameServers from Nodes that already have Allocated GameServers

To also make it easier to scale down, we essentially want to bin pack Allocated game servers onto as few Nodes as possible.

To that end, the allocate() function will order the GameServers it is considering by the number of other GameServers that exist on the same node as it.

This ensures (as much as possible) that we don't end up with a "swiss cheese" problem, with Allocated game servers spread out across the cluster. Bin packing Allocated GameServers makes it much easier for Fleets to scale down in a way that will leave empty Nodes for the autoscaler to delete.
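
A minimal sketch of that ordering, assuming the per-Node GameServer counts have already been gathered. The gameServer struct stands in for the real Agones type and the function name is illustrative:

```go
package main

import (
	"fmt"
	"sort"
)

// gameServer is a stand-in for the Agones GameServer type, reduced to the
// two fields this sketch needs.
type gameServer struct {
	Name     string
	NodeName string
}

// orderForAllocation sorts the candidate (Ready) GameServers so that those on
// the Nodes hosting the most GameServers overall come first; allocate() would
// then pick from the front of the slice.
func orderForAllocation(candidates []gameServer, gameServersPerNode map[string]int) []gameServer {
	sorted := append([]gameServer(nil), candidates...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return gameServersPerNode[sorted[i].NodeName] > gameServersPerNode[sorted[j].NodeName]
	})
	return sorted
}

func main() {
	counts := map[string]int{"node-a": 1, "node-b": 4}
	ready := []gameServer{{Name: "gs-1", NodeName: "node-a"}, {Name: "gs-2", NodeName: "node-b"}}
	fmt.Println(orderForAllocation(ready, counts)[0].Name) // gs-2, on the most packed Node
}
```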

When Fleets get shrunk, prioritise removal from Nodes with the least number of GameServer Pods

Again, to make it easier to create empty Nodes when scaling down Fleets, prioritise removing un-allocated GameServer Pods from the Nodes with the fewest GameServer Pods currently on them.
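
Sketched the same way (struct and function names are illustrative, not the real Fleet controller code), the removal order could look like:

```go
package main

import (
	"fmt"
	"sort"
)

// gameServer is a stand-in for the Agones GameServer type.
type gameServer struct {
	Name     string
	NodeName string
}

// pickForRemoval returns the n un-allocated GameServers to delete when a Fleet
// shrinks, preferring those on the Nodes with the fewest GameServer Pods so
// that lightly used Nodes are emptied first.
func pickForRemoval(unallocated []gameServer, gameServersPerNode map[string]int, n int) []gameServer {
	sorted := append([]gameServer(nil), unallocated...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return gameServersPerNode[sorted[i].NodeName] < gameServersPerNode[sorted[j].NodeName]
	})
	if n > len(sorted) {
		n = len(sorted)
	}
	return sorted[:n]
}

func main() {
	counts := map[string]int{"node-a": 1, "node-b": 5}
	gs := []gameServer{{Name: "gs-1", NodeName: "node-b"}, {Name: "gs-2", NodeName: "node-a"}}
	fmt.Println(pickForRemoval(gs, counts, 1)[0].Name) // gs-2, on the least packed Node
}
```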

Mark All GameServer Pods as not "safe-to-evict"

If a Pod has the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: false, then the Node that the Pod is on cannot be removed.

Therefore, all GameServer Pods should have the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: false, so the autoscaler will not attempt to evict them.

Since we are bin packing through our custom scheduler, we won't actually need to move existing GameServer Pods when the cluster scales down, as shrinking Fleets will only leave behind empty Nodes.
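
A minimal sketch of where that annotation would be applied when building a GameServer Pod. The helper name is illustrative; only the annotation key and value come from the cluster autoscaler's documentation:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// safeToEvictAnnotation is the cluster autoscaler annotation key; setting it
// to "false" tells the autoscaler it must not evict the Pod, and therefore
// not remove the Node it runs on.
const safeToEvictAnnotation = "cluster-autoscaler.kubernetes.io/safe-to-evict"

// markNotSafeToEvict adds the annotation to a GameServer Pod before creation.
func markNotSafeToEvict(pod *corev1.Pod) {
	if pod.ObjectMeta.Annotations == nil {
		pod.ObjectMeta.Annotations = map[string]string{}
	}
	pod.ObjectMeta.Annotations[safeToEvictAnnotation] = "false"
}

func main() {
	pod := &corev1.Pod{}
	markNotSafeToEvict(pod)
	fmt.Println(pod.Annotations[safeToEvictAnnotation]) // false
}
```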

Mark the Agones controller as safe-to-evict: false

The Agones controller should also have the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: false, to ensure the autoscaler doesn't try to move the controller around.

Documentation

Documentation of the above needs to be written, including pointers on how to set up the autoscaler on different cloud providers, etc.

Research

History
