@chriha
Created September 23, 2019 08:30
Create large file to allow higher baseline throughput when using bursting for AWS EFS
# To get better performance out of EFS, consider increasing the size of your
# filesystem by adding dummy data to it. The command below creates a 256 GiB
# file called "large_file" (262144 blocks of 1 MiB each):
sudo dd if=/dev/urandom of=large_file bs=1024k count=262144 status=progress
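Reading from /dev/urandom is CPU-bound and can be slow for a file this large. A faster sketch, assuming EFS meters stored bytes regardless of their content (it does not compress data), is to write zeros in several parallel chunks; the file names and chunk count here are illustrative:

```shell
# Sketch: write four 64 GiB chunks in parallel from /dev/zero.
# Assumption: EFS meters stored bytes as written, so zeros grow the
# filesystem's metered size just like random data would.
for i in 1 2 3 4; do
  dd if=/dev/zero of="large_file_${i}" bs=1024k count=65536 status=progress &
done
wait  # block until all four background dd processes finish
```

The four chunks total the same 256 GiB; whether parallel writers actually help depends on the client's NFS throughput limits.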
# A larger EFS filesystem has a higher baseline throughput and can burst for
# longer periods. In the example above, a 256 GiB filesystem has a baseline
# throughput of 12.5 MiB/s and can burst for up to 360 minutes per day,
# whereas a 1 GiB filesystem has a baseline of only 50 KiB/s and can burst
# for about 1 minute. The tradeoff is that you pay for the 256 GiB of dummy
# data. You can find out more about how the size of an EFS filesystem
# affects performance here:
# https://docs.aws.amazon.com/efs/latest/ug/performance.html#bursting
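The baseline figure can be sanity-checked with a quick calculation, assuming the documented bursting rate of 50 KiB/s of baseline throughput per GiB of metered data:

```shell
# Baseline throughput scales at 50 KiB/s per GiB stored (per the
# bursting docs linked above), so a 256 GiB filesystem gets:
echo 256 | awk '{printf "Baseline: %.1f MiB/s\n", $1 * 50 / 1024}'
# prints "Baseline: 12.5 MiB/s"
```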