AWS Lambda CPU Cores

AWS Lambda

What's the maximum number of virtual processor cores available in AWS Lambda?

Memory Settings:

Memory: 3008 MB
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 62
model name	: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping	: 4
microcode	: 0x42a
cpu MHz		: 2799.950
cache size	: 25600 KB
physical id	: 0
siblings	: 2
core id		: 0
cpu cores	: 1
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm fsgsbase smep erms xsaveopt
bugs		:
bogomips	: 5600.17
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 62
model name	: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping	: 4
microcode	: 0x42a
cpu MHz		: 2799.950
cache size	: 25600 KB
physical id	: 0
siblings	: 2
core id		: 0
cpu cores	: 1
apicid		: 1
initial apicid	: 1
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm fsgsbase smep erms xsaveopt
bugs		:
bogomips	: 5600.17
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:

Total CPU

CPU count: 2
import multiprocessing
from subprocess import Popen, PIPE, STDOUT

def lambda_handler(event, context):
    # Number of CPUs visible to the runtime
    print("CPU count: {0}".format(multiprocessing.cpu_count()))
    print("=" * 50)
    # Dump the full /proc/cpuinfo for inspection
    p = Popen("cat /proc/cpuinfo", shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
    output = p.stdout.read()
    print(output.decode())
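The same count can also be obtained without shelling out to `cat`. A minimal sketch (the function name is my own, not part of the gist):

```python
import os

def count_visible_cpus():
    """Count 'processor' entries in /proc/cpuinfo (Linux),
    falling back to os.cpu_count() where the file is absent."""
    try:
        with open("/proc/cpuinfo") as f:
            n = sum(1 for line in f if line.startswith("processor"))
        return n or os.cpu_count()
    except OSError:
        return os.cpu_count()

print("CPU count:", count_visible_cpus())
```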
@gnsx gnsx commented Apr 25, 2019

That's it?
They give you 2 CPUs for 3GB RAM

@lcb lcb commented Nov 25, 2019

@gnsx: Yes, as per the docs:

At 1,792 MB, a function has the equivalent of 1 full vCPU

So for 3 GB you get 2 CPUs. That makes me wonder what type of machine they run Lambdas on behind the scenes. A t3.micro?
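That documented ratio can be sketched in a couple of lines (my own illustration, not AWS code):

```python
def approx_vcpus(memory_mb):
    """Rough share of vCPU time for a given Lambda memory setting,
    using the documented ratio of 1 full vCPU per 1,792 MB."""
    return memory_mb / 1792.0

# 3,008 MB works out to roughly 1.7 vCPUs' worth of CPU time,
# spread across the 2 cores the runtime exposes.
print(round(approx_vcpus(3008), 2))
```

So at 3,008 MB you see 2 cores in `/proc/cpuinfo`, but only about 1.7 vCPUs' worth of CPU time between them.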

@saidsef saidsef (gist owner) commented Nov 25, 2019

@gnsx That's correct; as things currently stand you get 2 CPUs for 3 GB RAM.

@lcb Under the hood they seem to be using C3-class instances.

@lcb lcb commented Nov 25, 2019

@saidsef Oh, you are the first one telling me this. Highly interesting. Any source for that info?

@gnsx gnsx commented Nov 25, 2019

I was told by AWS customer support to think of it as EC2 scaling.

In my experience with Lambda, the more RAM you provision, the more I/O and network throughput you get.

I didn't find any compute difference from 1,024 MB onwards, though network improves significantly from 1,024 MB to 3 GB.

The better practice is to run multiple Lambdas in parallel if more parallelization is required, which is a design issue of its own if Lambda is chosen for this type of work.
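Within a single invocation, CPU-bound work can still be fanned out over the visible cores. A generic sketch, not Lambda-specific (worth noting: `multiprocessing.Pool` historically failed inside the Lambda runtime because `/dev/shm` is missing, so `Pipe`-based workarounds were common there):

```python
import multiprocessing

def sum_of_squares(n):
    # Toy CPU-bound task
    return sum(i * i for i in range(n))

def parallel_map(values):
    # One worker per visible core
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        return pool.map(sum_of_squares, values)

if __name__ == "__main__":
    print(parallel_map([10, 100, 1000]))
```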

@saidsef saidsef (gist owner) commented Nov 26, 2019

> @saidsef Oh, you are the first one telling me this. Highly interesting. Any source for that info?

@lcb Per the test results, the CPU type - this is based on 3 GB RAM - is a C3-class instance (these have since been superseded by C5s, and AWS has updated the docs accordingly); see here: https://docs.aws.amazon.com/opsworks/latest/userguide/agent-instance.html

Of course, this is all subject to memory size; as @gnsx stated, the more RAM you provision, the more I/O/network you get, and the instance type your function is deployed on differs accordingly.

@lcb lcb commented Nov 27, 2019

Thanks for all the info. In summary, Lambda's CPU, I/O, and network all scale with the configured memory.

Not that I actually should care about that, but it sheds some light on the Lambda container black box :)

@hjander hjander commented Feb 29, 2020

thank you, really helpful!

@msiemens msiemens commented Jan 7, 2021

Seems like now vCPU cores are proportional to the amount of memory configured:

Since Lambda allocates CPU power proportional to the amount of memory provisioned, customers now have access to up to 6 vCPUs. This helps compute intensive applications like machine learning, modelling, genomics, and high-performance computing (HPC) application perform faster.

(https://aws.amazon.com/de/about-aws/whats-new/2020/12/aws-lambda-supports-10gb-memory-6-vcpu-cores-lambda-functions/)
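Assuming CPU still scales roughly linearly up to the new 6-vCPU ceiling at the 10,240 MB maximum (an interpolation of mine, not an AWS-documented formula):

```python
def approx_vcpus_2020(memory_mb):
    """Rough vCPU estimate after the Dec 2020 update: linear in
    memory, topping out at 6 vCPUs at the 10,240 MB maximum.
    This is an interpolation, not an AWS-documented formula."""
    return min(6.0, memory_mb * 6.0 / 10240.0)

print(approx_vcpus_2020(10240))  # 6.0 at the maximum setting
```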
