
# Worker Allocation on Engine Yard Cloud

Workers are the processes that allow your application to respond to incoming web requests. Regardless of what Application Server Stack you select, it will use one or more workers per instance to run your application.

## Standard Allocations based on Instance vCPU and Memory Configuration

For Passenger, Unicorn, Thin, or Mongrel, these workers are allocated using an algorithm that takes into consideration the vCPU and memory available to the instance, as shown in the table below.

We are also happy to announce that we have recently modified the algorithm that generates the worker counts to make better use of the available resources, which has increased the number of workers for most sizes.

| Type | Memory (MB) | Swap (MB) | vCPUs | Pool Size (Original) | Pool Size (New Solo) | Pool Size (New Cluster) |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| Small | 1700 | 895 | 1 | 3 | 3 | 3 |
| Medium | 3840 | 1343 | 2 | 7 | 7 | 7 |
| High CPU Medium | 1740 | 1343 | 5 | 6 | 6 | 6 |
| Large | 7680 | 30725 | 4 | 6 | 8 | 8 |
| Extra Large | 15360 | 30725 | 8 | 12 | 16 | 16 |
| High CPU Extra Large | 7168 | 30725 | 20 | 24 | 40 | 40 |
| High Memory Extra Large | 17500 | 4096 | 6.5 | 8 | 13 | 13 |
| High Memory Double Extra Large | 35000 | 4096 | 13 | 8 | 26 | 26 |
| High Memory Quad Extra Large | 70000 | 30725 | 26 | 24 | 52 | 52 |
| High I/O Quad Extra Large | 62000 | 30725 | 35 | 70 | 70 | 70 |

##### Note about vCPUs

vCPUs is a generic label for the virtual CPU cores (and in some cases hyperthreads) available to your instance. Amazon, the provider of the instances used on Engine Yard Cloud, uses the term ECU, or Elastic Compute Unit, to convey the relative compute power available on each instance type. This value is different from the virtual CPUs reported by the operating system, such as in top or /proc/cpuinfo, but it better allows you to determine how much effective compute resource you have. According to Amazon, the definition of an ECU is:

> One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
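
As a worked example of how a row in the table comes about, the sketch below reproduces the New Cluster pool size for a Large instance by taking the smaller of a CPU limit and a memory limit. This is a reconstruction from the table and the default parameters described under Tunable Parameters below, not Engine Yard's actual code:

```ruby
# Hypothetical reconstruction of the Large (New Cluster) row above.
vcpus            = 4      # Large instance ECUs
workers_per_vcpu = 2      # default workers_per_vcpu
memory_mb        = 7680   # instance memory
swap_mb          = 30725  # instance swap
reserved_mb      = 1500   # default reserved_memory
worker_mb        = 250    # default worker_memory_size
swap_pct         = 25     # default swap_usage_percent

cpu_limit    = vcpus * workers_per_vcpu                  # => 8
memory_limit = (memory_mb + swap_mb * swap_pct / 100.0 -
                reserved_mb) / worker_mb                 # => ~55
pool_size    = [cpu_limit, memory_limit.floor].min       # => 8
```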

## Tuning for your Application's Needs

The values above are based on generic parameters of an idealized application. If your app falls outside of these parameters, you should consider tuning your environment to either:

- reduce the worker counts to improve responsiveness, because your app requires more resources per worker, or
- increase the available workers for a lightweight app with high requests per minute.

Tuning is done by setting parameters on your environment. The sections below explain the various parameters that can be tuned. Once you know what values you want tuned, you can open a support ticket to have them put into effect.

WARNING:

In the case of Medium and High CPU Medium instances, there is an override in place to ensure their pool size was not reduced from the values provided by the original algorithm. If you tune one or more parameters, these overrides are removed and the generated values based on those parameters are used instead.

## Tunable Parameters

### pool_size

This is the most direct way of modifying your pool size. By specifying pool_size, you can set it regardless of resource availability. This is recommended only if you are fully aware of your resource needs and don't plan on changing your instance size in the future.

### min_pool_size

Slightly less forceful, the min_pool_size parameter allows you to ensure your application always has a minimum number of workers available. This is useful if you are using the smaller instance types and don't mind overloading the resources -- be careful, though: overloading may result in underperforming or even hung instances, potentially triggering a takeover.

If you find yourself using this and having poor responsiveness or takeovers, please consider upgrading to a larger instance size.

Default: 3

### max_pool_size

Likewise, the max_pool_size parameter allows you to ensure you don't have more workers than you need, regardless of resource availability. You may want to do this if your workers connect to external entities that cannot handle too many simultaneous connections.

Default: 100

### worker_memory_size

This is the best and most accurate way of tuning your app based on memory usage. The standardized values above are based on an app in which each worker uses 250 MB of memory. If your app differs significantly from this, it is best to tune this value to get accurate resource usage calculations.

Default: 250 (MB)
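
As a rough illustration using the defaults above (a back-of-the-envelope calculation, not the exact production formula): on a cluster Extra Large instance, 15360 MB of memory plus 25% of its 30725 MB of swap, minus the 1500 MB reserved, leaves roughly 21540 MB, or about 86 workers at 250 MB each, so the CPU limit of 16 governs. At 500 MB per worker, the memory limit drops to about 43, leaving the instance still CPU-bound.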

### workers_per_vcpu

Some apps are compute-heavy; others barely use any processor capacity. The standard configuration provides 2 workers per ECU; tuning this to your app's needs can improve its responsiveness.

Default: 2

### swap_usage_percent

While going into swap is not a good idea in most cases, providing the optimal number of workers is a balancing act. We have chosen a modest 25% usage of swap by the workers in order to provide a sufficient worker count without excessive paging. If you find that your app is spending a great deal of time in swap, you may want to change this value.

Note: It is usually CPU constraints rather than memory constraints that determine the number of workers available, but depending on how many workers you allow per CPU, this may not be the case. Under the default settings, this value most significantly impacts the Small, Medium, and High CPU Extra Large instances.

Default: 25 (%)
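
To illustrate with the defaults (again an approximation, not the exact production formula): on a High CPU Extra Large instance, counting no swap gives (7168 - 1500) / 250 ≈ 22 workers, making memory the limiting factor, while counting 25% of the 30725 MB of swap raises the memory limit to roughly 53, so the CPU limit of 40 governs instead.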

### reserved_memory

If you are running background workers or other processes that eat into your memory resources, you may want to specify a chunk of memory to reserve. By default, this is set to a value that accommodates the OS and support processes.

Default: 1500 (MB)

### reserved_memory_solo

Similar to reserved_memory, but also takes into consideration the database that shares the instance in a single-instance (aka Solo) environment.

Default: 2000 (MB)

### db_vcpu_max and db_workers_per_vcpu

These settings are only applicable to single-instance environments. By default, no ECU resources are reserved for the database, but if you have a database-heavy application, it can be beneficial to allocate some of the worker resources to the database instead. The number of workers represented by the db_workers_per_vcpu parameter is removed from each ECU, up to the db_vcpu_max value.

Default: db_vcpu_max = 0; db_workers_per_vcpu = 0.5
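
Putting all of these parameters together, the allocation described in this document can be approximated by the sketch below. This is a minimal reconstruction inferred from the defaults and the table above, not Engine Yard's actual implementation; the function name and structure are illustrative, and the Medium and High CPU Medium override mentioned in the warning above is not modeled.

```ruby
# Hypothetical sketch of the worker allocation described above.
# Parameter names match the tunable parameters in this document;
# the combination logic is inferred from the table and defaults.
def pool_size(vcpus:, memory_mb:, swap_mb:, solo: false,
              min_pool_size: 3, max_pool_size: 100,
              worker_memory_size: 250, workers_per_vcpu: 2,
              swap_usage_percent: 25,
              reserved_memory: 1500, reserved_memory_solo: 2000,
              db_vcpu_max: 0, db_workers_per_vcpu: 0.5)
  reserved = solo ? reserved_memory_solo : reserved_memory

  # CPU limit: workers per ECU, minus a share reserved for the
  # database on solo instances (up to db_vcpu_max ECUs).
  cpu_limit = vcpus * workers_per_vcpu
  cpu_limit -= [vcpus, db_vcpu_max].min * db_workers_per_vcpu if solo

  # Memory limit: instance memory plus the counted slice of swap,
  # less the reserved chunk, divided by per-worker memory.
  usable_mb    = memory_mb + swap_mb * swap_usage_percent / 100.0 - reserved
  memory_limit = usable_mb / worker_memory_size

  # The smaller limit wins, clamped into [min_pool_size, max_pool_size].
  [[[cpu_limit, memory_limit].min.floor, min_pool_size].max, max_pool_size].min
end

pool_size(vcpus: 4, memory_mb: 7680, swap_mb: 30725)   # => 8  (Large, cluster)
pool_size(vcpus: 8, memory_mb: 15360, swap_mb: 30725)  # => 16 (Extra Large, cluster)
pool_size(vcpus: 1, memory_mb: 1700, swap_mb: 895)     # => 3  (Small, min_pool_size)
```

These sample calls reproduce the New Cluster values from the table above for those instance types.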

## What about other Application Server Stacks?

Concurrent application servers, such as Puma (all Ruby runtimes) and Trinidad (JRuby only), use a simple one-worker-per-ECU rule and rely on threading to scale to the incoming requests. These work best on concurrent Ruby implementations such as Rubinius and JRuby, but properly configured threadsafe apps can also benefit from concurrency under MRI by using Puma.
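
For Puma in particular, this worker/thread split is expressed directly in its configuration file. A brief illustration follows; the numbers are placeholders rather than values generated by Engine Yard:

```ruby
# config/puma.rb -- illustrative placeholder values.
workers 2        # forked worker processes: one per ECU, per the rule above
threads 8, 32    # min and max threads per worker for concurrent requests
preload_app!     # load the app before forking so workers share memory
```

On Ruby implementations without fork support, such as JRuby, Puma relies on threads alone.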
