- On Linux, `tmpfs` == shm == shared memory.
- You can mount `tmpfs` to a dir, and use that dir as a normal dir, but super fast.
- The default mount path is `/dev/shm/`.
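Mounting a small private tmpfs is a quick way to try this yourself. A sketch (the mount commands need root, and `/tmp/mytmpfs` is an arbitrary path chosen for illustration):

```shell
# Create a mount point and mount a 64M tmpfs on it (needs root).
mkdir -p /tmp/mytmpfs
sudo mount -t tmpfs -o size=64m tmpfs /tmp/mytmpfs

# Files written here now live in memory; writing past 64M fails
# with "No space left on device".

# Unmount when done.
sudo umount /tmp/mytmpfs
```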
When you create a file there, it actually lives in memory. Thus reads and writes are super fast, and applications/processes can use it to communicate.
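As a minimal sketch of using it for communication: one process writes a file under `/dev/shm`, and any other process can read it (the filename `msg.txt` is arbitrary):

```shell
# Process A writes a message into shared memory.
echo "hello from A" > /dev/shm/msg.txt

# Process B (any other process on the same host) reads the same
# in-memory file.
cat /dev/shm/msg.txt

# Clean up.
rm /dev/shm/msg.txt
```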
By default, the size limit of `/dev/shm` is half of your physical memory. E.g., my Linux PC has 32G of memory:
```
df -h
Filesystem      Size  Used Avail Use% Mounted on
...
tmpfs            16G  498M   16G   4% /dev/shm
```
You will get an error if you exceed the limit:

```
dd if=/dev/zero of=/dev/shm/test bs=1G count=16
write error: No space left on device
```
WARNING: if you do the above on your working PC, apps such as Chrome, which rely heavily on shm, will crash.
There are a few ways to show shm usage:

```
free -h                                          # see the "shared" column
df -h | grep shm                                 # see the "Used" column
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
```

NOTE: the first two sometimes don't seem to update instantly.
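If you only care about the `/dev/shm` line, a small awk filter over `df` works. A sketch, assuming the usual GNU coreutils `df` output layout (header on row 1, data on row 2):

```shell
# Print just size/used/avail for /dev/shm (row 2 of df's output).
df -h /dev/shm | awk 'NR==2 {printf "size=%s used=%s avail=%s\n", $2, $3, $4}'
```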
For Docker, the default size limit for shm is only 64M. E.g.:
```
docker run --rm -it ubuntu bash
df -h
Filesystem      Size  Used Avail Use% Mounted on
...
shm              64M     0   64M   0% /dev/shm
...
```
You can change it with the `--shm-size` option:
```
docker run --rm -it --shm-size 2g ubuntu bash
root@bfc744c41c52:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
...
shm             2.0G     0  2.0G   0% /dev/shm
...
```
In k8s, the shm size for a pod is the same as in Docker (64M). However, k8s has no equivalent of `--shm-size`. As a workaround, you can change it with an `emptyDir` volume with `medium: Memory`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xxx
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
        ...
          volumeMounts:
            - mountPath: /dev/shm
              name: shm
      volumes:
        - name: shm
          emptyDir:
            medium: Memory
            sizeLimit: 128Mi
```
NOTE:
- Once you mount this volume, in the pod's `df` output you will see the `/dev/shm` size limit change to about half of the host node's memory. You can imagine that's because `/dev/shm` now lives on the host node. `sizeLimit` won't change the value displayed by the `df` cmd. Instead, `kubelet` checks this value at an interval and evicts the pod if it is exceeded, so it acts more like a soft limit used by k8s only.
- Memory used by shm is also part of the resources used by your pod; if your pod has a resource limit on memory, shm size will be limited by it.
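Since shm usage counts toward the pod's memory limit, it is worth sizing the two together. A hedged sketch (the container name and image are placeholders, not from the original):

```yaml
    spec:
      containers:
        - name: app            # placeholder container name
          image: ubuntu        # placeholder image
          resources:
            limits:
              memory: 512Mi    # shm usage counts toward this limit
          volumeMounts:
            - mountPath: /dev/shm
              name: shm
      volumes:
        - name: shm
          emptyDir:
            medium: Memory
            sizeLimit: 128Mi   # keep this below the memory limit
```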