@afolarin
Last active March 18, 2024 17:01
Resource Allocation in Docker

# Container Resource Allocation Options in docker run

Now see the current documentation: https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources

You have various options for controlling resources (CPU, memory, disk) in Docker, principally via options to the `docker run` command.

## Dynamic CPU Allocation

-c, --cpu-shares=0                
CPU shares (relative weight: a numeric value that sets the container's share of CPU time relative to other containers when CPUs are under contention)
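As a sketch of how the relative weighting behaves (the container names `heavy`/`light` and the busybox image are illustrative, not from the original): two always-busy containers with shares 1024 and 512 split a contended CPU roughly 2:1.

```shell
# Two CPU-bound containers with different relative weights.
# Under contention, "heavy" gets about twice the CPU time of "light"
# (1024 / 512 = 2); with no contention either may still use a full core.
docker run -d --name heavy --cpu-shares=1024 busybox sh -c 'while :; do :; done'
docker run -d --name light --cpu-shares=512  busybox sh -c 'while :; do :; done'

docker stats --no-stream heavy light   # compare the %CPU columns
docker rm -f heavy light               # clean up
```

Note that `--cpu-shares` is a soft, proportional limit: it only matters when containers compete for CPU.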

## Reserved CPU Allocation

--cpuset=""                
specify which CPUs, by processor number (0 = 1st CPU, n-1 = nth CPU), the container is allowed to execute on (a contiguous range: "0-3", or a discontiguous list: "0,3,4")
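You can also verify the pinning from inside the container by reading its own cpuset cgroup file. A sketch (the file path differs between cgroup v1 and v2, and on current Docker releases the flag is spelled `--cpuset-cpus`):

```shell
# Pin the container to CPUs 0 and 2, then print the effective cpuset.
# Older Docker: --cpuset ; newer Docker: --cpuset-cpus
docker run --rm --cpuset-cpus="0,2" busybox sh -c \
  'cat /sys/fs/cgroup/cpuset/cpuset.cpus 2>/dev/null || cat /sys/fs/cgroup/cpuset.cpus.effective'
```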

You can watch these processors being used with mpstat (note that mpstat requires a time interval in order to report the difference in usage), e.g. $ mpstat -P ALL 2 5

Some further reading:

http://stackoverflow.com/questions/26282072/puzzled-by-the-cpushare-setting-on-docker
https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt

Alternatively, with LXC (if not using the default libcontainer driver) you can also allocate the CPUs of LXC containers using the --lxc-conf option:

--lxc-conf=[]              
(lxc exec-driver only) Add custom lxc options
--lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
--lxc-conf="lxc.cgroup.cpu.shares = 1234"
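A sketch of a full invocation (this requires the daemon to be running with the legacy LXC exec driver; the busybox image and busy-loop command are illustrative):

```shell
# Pin to CPUs 0-1 and set a relative weight via raw LXC cgroup options.
docker run -d \
  --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" \
  --lxc-conf="lxc.cgroup.cpu.shares = 1234" \
  busybox sh -c 'while :; do :; done'
```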

# Testing CPU Allocation

You can use this image to experiment with the --cpu-shares and --cpuset flags of docker run. See here: http://agileek.github.io/docker/2014/08/06/docker-cpuset/

USAGE (depending on the number of CPUs in your machine; processor numbers run from 0 to n-1):

sudo docker run -it --rm --cpuset=0,1 agileek/cpuset-test /cpus 2
sudo docker run -it --rm --cpuset=3 agileek/cpuset-test

Install sysstat for monitoring (if mpstat is not available); htop is also quite nice for visualizing this.

$ sudo apt-get install sysstat

$ mpstat -P ALL 2 10

or $ htop

e.g. burn CPUs number 1 and 6:

$ docker run -ti --rm --cpuset=1,6 agileek/cpuset-test /cpus 2

$ mpstat -P ALL 2  # update every 2 seconds
13:56:32     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
13:56:34     all   29.80    0.00    0.38    0.69    0.00    0.00    0.00    0.00    0.00   69.13
13:56:34       0   13.43    0.00    1.00    2.49    0.00    0.00    0.00    0.00    0.00   83.08
13:56:34       1  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
13:56:34       2    2.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   97.97
13:56:34       3   14.78    0.00    0.99    2.46    0.00    0.00    0.00    0.00    0.00   81.77
13:56:34       4    2.51    0.00    0.50    0.50    0.00    0.00    0.00    0.00    0.00   96.48
13:56:34       5    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
13:56:34       6  100.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
13:56:34       7    4.50    0.00    0.50    0.50    0.00    0.00    0.00    0.00    0.00   94.50

## RAM Allocation

-m, --memory=""                
Memory limit (format: number followed by a unit, where unit = b, k, m or g)
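For example (values illustrative), capping a container at 512 MB and reading back the limit the kernel enforces (the cgroup file path differs between cgroup v1 and v2):

```shell
# -m 512m caps the container at 512 * 1024 * 1024 = 536870912 bytes;
# allocations beyond this limit trigger the OOM killer inside the container.
docker run --rm -m 512m busybox sh -c \
  'cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null || cat /sys/fs/cgroup/memory.max'
```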

## Disk Allocation

Resizing container filesystems:

http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/
moby/moby#471
https://github.com/snitm/docker/blob/master/daemon/graphdriver/devmapper/README.md


ghost commented Jan 23, 2016

Everything operating normally!

@zoobab commented Nov 2, 2016

OpenVZ had a feature where, when you did a cat /proc/cpuinfo or ran htop, only one core would show up. I can't get this to work with Docker.

@CliveUnger commented:
Is there a way to utilize this concept to have two (or more) Docker containers running on disjoint sets of CPUs and memory, where the two containers do not affect each other's performance? Or will there always be kernel and memory overhead that will slow the containers down?

For example: if I have an 8-CPU system and run a performance-sensitive program on CPUs 0-3, and also run the same exact program in parallel on CPUs 4-7, in my experience it seems that the performance is degraded by running the program on disjoint sets of the hardware. I would have thought that it would not be, essentially using Docker to divide the system resources equally without overlapping, in a pseudo-virtualized fashion. Is this possible with Docker?

@ashguptablr commented May 26, 2021 via email

@afolarin (author) commented Jun 8, 2021

> For example: if I have an 8-CPU system and run a performance-sensitive program on CPUs 0-3, and also run the same exact program in parallel on CPUs 4-7, in my experience it seems that the performance is degraded by running the program on disjoint sets of the hardware.

How are you separating your processes currently? Under the hood, Docker is just using cgroups to control resource usage. There are some other overheads from Docker, but in principle I don't see any reason you shouldn't be able to use it to delimit the resourcing.
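A minimal sketch of that kind of partitioning (the image name `myapp` and the exact limits are hypothetical): pin each container to a disjoint CPU set and give each a fixed memory cap. The containers still share the kernel, memory bandwidth, caches and any NUMA interconnect, so the isolation is approximate rather than total.

```shell
# Two copies of the same workload on disjoint CPU sets, each with its own
# memory cap (cgroups enforce both; kernel, caches and bandwidth stay shared).
docker run -d --name a --cpuset-cpus="0-3" -m 4g myapp
docker run -d --name b --cpuset-cpus="4-7" -m 4g myapp
```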
