constant disk thrashing (reading) before OOM kills the offending process

WARNING: I have tested the following mainly without any swap (partitions/files) enabled (i.e. either no swap at all, or no swap support in the kernel)

Explanations as to why kswapd0 does constant disk reading before the OOM-killer kills the offending process:

  1. see the answer and its comments at https://askubuntu.com/a/432827/861003
  2. see the answer and David Schwartz's comments at https://unix.stackexchange.com/a/24646/306023

An example of constant disk reading due to running out of memory, which I encountered while compiling firefox inside a qube (VM), starts here: https://groups.google.com/d/msg/qubes-users/aSPefKH223U/PYc4m25SCQAJ
(note that this happens even with vm.swappiness=0, as seen in a comment below, even though it was 60 in that example)

Wanting to recompile a custom kernel without kswapd0 (I don't know how yet; it seems it's not possible without kswapd0): https://unix.stackexchange.com/q/463233/306023
But so far it SEEMS that just vm.swappiness=0 does it! (according to my comments/screenshots below)

Potential mitigations:

1. vm.overcommit_memory=2 kills the offending task right away instead of letting it thrash the disk first (see the Aug 26 comment below):

sudo sysctl vm.overcommit_memory=2 #was 0
sudo sysctl vm.overcommit_ratio=50 #was 50 by default (if set to 200 it brings back the disk thrashing)

2. vm.watermark_scale_factor=1000 seems to cause a delay during which you could stop the disk thrashing before it begins to fully freeze the system:

sudo sysctl vm.overcommit_memory=0 #was 0
sudo sysctl vm.overcommit_ratio=50 #was 50
sudo sysctl vm.vfs_cache_pressure=0 #was 100
#sudo sysctl vm.watermark_scale_factor=1 #was 10
sudo sysctl vm.watermark_scale_factor=1000 #was 10
sudo sysctl vm.oom_kill_allocating_task=0 #was 0

Possibly irrelevant (in the above context only): vm.vfs_cache_pressure=0
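To make any of the above survive a reboot, the same knobs can go into a sysctl.d drop-in (the filename below is hypothetical; the values mirror the second set above):

```
# /etc/sysctl.d/90-oom-mitigations.conf  (hypothetical filename)
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.vfs_cache_pressure = 0
vm.watermark_scale_factor = 1000
vm.oom_kill_allocating_task = 0
```

They can be applied without rebooting via sudo sysctl --system.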

Userspace OOM-killer daemons (I have not looked at or checked these):

  1. https://github.com/rfjakob/earlyoom (thanks to gudok on StackOverflow)
  2. https://github.com/hakavlad/nohang
  3. https://github.com/facebookincubator/oomd

Solutions so far:

  1. IFF you're not using any swap (e.g. kernel support for swap is disabled), then patching the kernel so that Active(file) pages are not evicted works for me: https://stackoverflow.com/q/52067753/10239615
    mirror1, mirror2 (it's a noobish patch by me though, so the side effects are unknown; but: no more disk thrashing and no more freezing of the OS!)
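A quick way to watch the page pools that patch is about while a build runs (standard /proc/meminfo fields on Linux; watch whether Active(file) collapses as the thrashing starts):

```shell
# One-shot reading of the file-backed page pools; wrap in
# `watch -n1 --` to follow them during a build.
grep -E '^(Active|Inactive)\(file\):' /proc/meminfo
```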
#!/bin/bash
#./showallblocks rev.01 rewritten for question/answer from: https://stackoverflow.com/q/52058914/10239615
if test "`id -u`" != "0"; then
  sudo='sudo'
else
  sudo=''
fi
dmesglog="$1"
if test -z "$dmesglog"; then
  echo "Usage: '$0' <dmesglogfile>"
  echo "Examples:"
  echo "sudo dmesg > dmesg1.log && '$0' dmesg1.log"
  echo "'$0' <(sudo dmesg)"
  #Note: '$0' is quoted for the case when $0 has spaces or other special characters in its path and the user wants to copy-paste the output above into the command line.
  exit 1
fi
#(optional) Stop logging if already in progress:
$sudo sysctl -w vm.block_dump=0
#$sudo ./showblock $($sudo dmesg |grep --color=never -E -- 'READ block [0-9]+ on xvda3'|sed -re 's/.*READ block ([0-9]+).*$/\1/g' | tr '\n' ' ') | grep -B3 -- 'path : '
#$sudo ./showblock $($sudo dmesg | tail -n1000 |grep --color=never -E -- 'READ block [0-9]+ on xvda3'|sed -re 's/.*READ block ([0-9]+).*$/\1/g' | tr '\n' ' ') | grep -B3 -- 'path : '
#$sudo ./showblock $(cat ~/dmesg1.log |grep --color=never -E -- 'READ block [0-9]+ on xvda3'|sed -re 's/.*READ block ([0-9]+).*$/\1/g' | tr '\n' ' ') | grep -B3 -- 'path : '
#Using the answer from here(thanks to glenn jackman): https://unix.stackexchange.com/a/467377/306023
#grep --color=never -E -- 'READ block [0-9]+ on xvda3' "$dmesglog" |
#cat "$dmesglog" |
$sudo perl -pe '
  if (! /READ block [0-9]+ on [A-Za-z0-9]+ .*$/) {
    s{.*}{}s
  }
  s{(READ block) (\d+) (on) ([A-Za-z0-9]+) ([^\$]*)\n$}
   {join " ",$1, $2, $3, $4, $5, qx(./showblock "/dev/$4" "$2" | grep -F -- "Found path :" | cut -f4- -d" ")}es
' -- "$dmesglog"
#Note: the output of qx(...) above is purposely allowed to keep its trailing newline!
#To find out what "}es"(above) is, see perlre modifiers: https://perldoc.perl.org/perlre.html#Modifiers
#FIXME: noobish attempt (the 'if' above) to exclude the lines that don't need replacing from the output
#{join " ",$1, $2, $3, $4, $5, qx(echo "test")}es
#s{^.*$}
#{join " ", "1", "2"}e
#$s{(READ block) (\d+) (on )([A-Za-z0-9]+)(.*$)}
#{join " ",$1, $2, $3, $4, $5, qx(./showblock "/dev/$4" "$2" | grep -F -- "Found path :" | cut -f3- -d" " | tr -d "\\n")}e
#{join " ",$1, $2, $3, qx(./showblock $2 | grep "path :" | cut -f3- -d" ")}e
#{join " ",$1, $2, $3, qx(echo -n "X")}e
#!/bin/bash
#./showblock rev.03 rewritten for question/answer from: https://stackoverflow.com/q/52058914/10239615
#----
bytes_per_sector=512 #assumed that dmesg block numbers are 512 bytes each (ie. 512 bytes per sector; aka block size is 512)!
#----
#use `sudo` only when not already root
if test "`id -u`" != "0"; then
  sudo='sudo'
else
  sudo=''
fi
if ! test "$#" -ge "2"; then
  echo "Usage: '$0' <device> <dmesgblocknumber> [dmesgblocknumber ...]"
  echo "Examples:"
  echo "'$0' /dev/xvda3 5379184"
  echo "'$0' /dev/xvda3 5379184 5129952 7420192"
  #Note: '$0' is quoted for the case when $0 has spaces or other special characters in its path and the user wants to copy-paste the output above into the command line.
  exit 1
fi
within_exit() {
  echo -e "\nSkipped current instruction within on_exit()"
}
on_exit() {
  #trap - EXIT SIGINT SIGQUIT SIGHUP #would exit, skipping the rest of the instructions in on_exit(), e.g. on C-c
  trap within_exit EXIT SIGINT SIGQUIT SIGHUP #skip only the current instruction in on_exit(), e.g. when C-c is pressed
  if test "${#remaining_args[@]}" -gt 0; then
    echo -n "WARNING: There are '${#remaining_args[@]}' remaining args not processed, they are: " >&2
    for i in `seq 0 1 "$(( "${#remaining_args[@]}" - 1 ))"`; do #seq is part of the coreutils package
      echo -n "'${remaining_args[${i}]}' " >&2
    done
    echo >&2
  fi
}
trap on_exit EXIT SIGINT SIGQUIT SIGHUP
dev="$1"
shift 1
dev="$1"
shift 1
if test -z "$dev" -o ! -b "$dev"; then
  echo "Bad device name, or not a block device: '$dev'" >&2
  exit 1
fi
blocksize="`$sudo blockdev --getbsz "$dev"`"
if test "${blocksize:-0}" -le "0"; then #handles an empty arg too
  echo "Failed getting block size for '$dev', got '$blocksize'" >&2
  exit 1
fi
#TODO: check and fail if $blocksize is not an exact multiple of $bytes_per_sector
divider="$(( $blocksize / $bytes_per_sector ))"
if ! test "${divider:-0}" -gt "0"; then
  echo "Failed computing divider from: '$blocksize' / '$bytes_per_sector', got '$divider'" >&2
  exit 1
fi
# for each passed-in dmesg block number do
while test "$#" -gt "0"; do
  dmesgblock="$1"
  shift
  remaining_args=("$@") #for on_exit() above
  echo '--------'
  echo "Passed-in dmesg block($bytes_per_sector bytes per block) number: '$dmesgblock'"
  #Handle the case when $dmesgblock is empty or negative, e.g. "-1": substituting a default value of 0 when unset keeps 'test' (which expects an integer) from breaking, while still allowing negative numbers ("0$dmesgblock" would otherwise yield "0-1", a non-integer):
  if test "${dmesgblock:-0}" -le "0"; then
    echo "Bad passed-in dmesg block number: '$dmesgblock'" >&2
    exit 1
  fi
  #TODO: check and fail if not an exact multiple (e.g. via the modulo operator '%')
  block=$(( $dmesgblock / $divider ))
  if ! test "${block:--1}" -ge "0"; then
    echo "Badly computed device block number: '$block'" >&2
    exit 1
  fi
  echo "Actual block number(of $blocksize bytes per block): $block"
  inode="$(echo "open ${dev}"$'\n'"icheck ${block}"$'\n'"close" | $sudo debugfs -f - 2>/dev/null | tail -n2|head -1|cut -f2 -d$'\t')"
  if test "<block not found>" == "$inode"; then
    echo "No inode was found for the provided dmesg block number '$dmesgblock' which mapped to dev block number '$block'" >&2
    exit 1
  else
    #assuming it's a number; TODO: check for this!
    echo "Found inode: $inode"
    fpath="$(echo "open ${dev}"$'\n'"ncheck ${inode}"$'\n'"close" | $sudo debugfs -f - 2>/dev/null | tail -n2|head -1|cut -f2- -d$'\t')"
    #fpath always begins with '/', right?
    if test "$fpath" != "Pathname"; then
      echo "Found path : $fpath"
    else
      echo "not found"
    fi
  fi
done
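The mapping that showblock automates can also be stepped through by hand. The device and block number below are illustrative, not from a real run; the debugfs commands are shown as comments, not executed:

```shell
dmesgblock=5379184   # 512-byte sector number from a dmesg READ line
blocksize=4096       # what `blockdev --getbsz /dev/xvda3` would report
block=$(( dmesgblock / (blocksize / 512) ))
echo "fs block: $block"
# Then, as root:
#   debugfs -R "icheck $block" /dev/xvda3    # fs block -> inode
#   debugfs -R "ncheck <inode>" /dev/xvda3   # inode -> path
```

With these numbers the printed fs block is 672398.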
@constantoverride (owner) commented Aug 18, 2018:

Here's a snapshot (screenshots) of things with vm.swappiness=60, at the point when there was constant disk reading of over 220MiB/sec (according to dom0 xfce4-panel's Disk Performance Monitor plugin, not pictured), caused by the specific AppVM which was compiling firefox and was thus frozen (its windows weren't updating):
frozen_ccache_screenshot_2018-08-18_17-38-50
frozen_ffcompilation_screenshot_2018-08-18_17-39-05
frozen_interrupts_screenshot_2018-08-18_17-39-31
frozen_iotop_screenshot_2018-08-18_17-38-06
frozen_meminfo_screenshot_2018-08-18_17-38-25
frozen_top_screenshot_2018-08-18_17-38-39

And the following is the first (or second/third? can't really be sure) window update after the OOM-killer killed the offending rustc process which was hogging the RAM:
unfroze1up_ccache_screenshot_2018-08-18_17-46-39
unfroze1up_ffcompilation_screenshot_2018-08-18_17-40-32
unfroze1up_interrupts_screenshot_2018-08-18_17-46-57
unfroze1up_iotop_screenshot_2018-08-18_17-46-08
unfroze1up_meminfo_screenshot_2018-08-18_17-43-01
unfroze1up_top_screenshot_2018-08-18_17-41-26

@constantoverride (owner) commented Aug 18, 2018:

Now, in the same session (above), but with vm.swappiness=0 (sudo sysctl -w vm.swappiness=0), the firefox recompilation (time rpmbuild -bb -v -- ~/rpmbuild/SPECS/firefox.spec) succeeded:
13:56.09 We know it took a while, but your build finally finished successfully!
success_screenshot_2018-08-18_18-05-20

(Note: the AppVM max RAM was set to 12000MB)

@constantoverride (owner) commented Aug 18, 2018:

EDIT: ok, actually, before I set swappiness to 60 in the first comment, it had been set to 0 at boot (via a /etc/sysctl.d/some.conf)! So it must be some form of ccache, or something else, that prevented the disk thrashing from occurring again below!
EDIT2: nothing else (relevant) changes when going from 0 to 60 (in sysctl -a, at least).

Now, to make sure ccache didn't jinx anything, after the above I redid:

$ sudo sysctl -w vm.swappiness=60
vm.swappiness = 60
$ time rpmbuild -bb -v -- ~/rpmbuild/SPECS/firefox.spec
...
12:00.64 We know it took a while, but your build finally finished successfully!

so it didn't freeze, OOM or disk-thrash! but it should have!
This means either ccache did it this time, or it's some side effect of having set vm.swappiness to 0 earlier? Maybe other values got tweaked by setting it to 0?

Now the ccache stats are at:
cc2_screenshot_2018-08-18_18-21-27

Btw the firefox build (howto) is from: https://gist.github.com/constantoverride/bd7e25dae8f49ac753cb661956ad5388

@constantoverride (owner) commented Aug 18, 2018:

ok, here's vm.swappiness=60 but with only 4000MB max RAM for the AppVM (instead of 12000MB) and after a shutdown (though I didn't start the other terminals); the windows are frozen and disk read was over 200MiB/sec:
4gfroze_ffcompile_screenshot_2018-08-18_19-02-48
4gfroze_iotop_screenshot_2018-08-18_19-02-07
4gfroze_top_screenshot_2018-08-18_19-02-39

Interestingly, it didn't freeze at compiling style via rustc anymore, but later... though that kinda makes sense to me.

After a few pause/unpause cycles of the qube, it eventually got killed:

 8:56.75    Compiling gkrust v0.1.0 (file:///home/user/rpmbuild/BUILD/firefox-61.0.2/toolkit/library/rust)
10:33.45     Finished release [optimized] target(s) in 9m 34s
10:33.67 symverscript
10:33.98 toolkit/library
10:34.13 libxul.so
20:24.40 collect2: fatal error: ld terminated with signal 9 [Killed]
20:24.43 compilation terminated.
20:24.43 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:681: libxul.so] Error 1
20:24.43 gmake[4]: *** Deleting file 'libxul.so'
20:24.45 gmake[3]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:73: toolkit/library/target] Error 2
20:24.45 gmake[2]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:33: compile] Error 2
20:24.45 gmake[1]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:418: default] Error 2
20:24.45 gmake: *** [client.mk:172: build] Error 2
20:24.49 0 compiler warnings present.
20:24.66 Failed to parse ccache stats output: stats zero time                     Thu Aug 16 18:16:42 2018
20:24.66 /usr/bin/notify-send --app-name=Mozilla Build System Mozilla Build System Build failed
error: Bad exit status from /var/tmp/rpm-tmp.mg3T2Z (%build)

@constantoverride (owner) commented Aug 18, 2018:

Ok, I set vm.swappiness=0 (via sudo sysctl vm.swappiness=0, and I did check via sysctl vm.swappiness that it is indeed 0!) in the same session, then ran time rpmbuild -bb -v --noprep -- ~/rpmbuild/SPECS/firefox.spec so that compilation would continue from where it left off, and lo and behold, I still hit the over-200MiB/sec disk-read thrashing almost immediately!

 0:02.49 force-cargo-library-build
 0:03.72     Finished release [optimized] target(s) in 1.13s
 0:03.82 libxul.so
 3:20.31 collect2: fatal error: ld terminated with signal 9 [Killed]
 3:20.32 compilation terminated.
 3:20.32 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:681: libxul.so] Error 1
 3:20.33 gmake[4]: *** Deleting file 'libxul.so'
 3:20.35 gmake[3]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:73: toolkit/library/target] Error 2
 3:20.35 gmake[2]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:33: compile] Error 2
 3:20.35 gmake[1]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:418: default] Error 2
 3:20.35 gmake: *** [client.mk:172: build] Error 2
 3:20.40 0 compiler warnings present.
 3:20.54 Failed to parse ccache stats output: stats zero time                     Thu Aug 16 18:16:42 2018
error: Bad exit status from /var/tmp/rpm-tmp.eXkdpg (%build)

So then, this answer was right: https://unix.stackexchange.com/a/73503/306023
And of course, I didn't expect otherwise.

@constantoverride (owner) commented Aug 19, 2018:

Here's with /proc/sys/vm/overcommit_memory set to 1 (it acts the same as if it's set to 0), inspired by this question: https://unix.stackexchange.com/q/373312/306023
on a 4000MB max RAM AppVM, because I wanted the OOM to trigger sooner (EDIT: the sad part is that it's now almost impossible to trigger the OOM-killer with this little RAM, even after plenty of Pause/Resume of the qube (EDIT2: had to kill the qube), all the while the disk-read thrashing is happening):

Screens of the terminals at the point when it's frozen, with constant disk reading (over 190MiB/sec, reported by the Disk Performance Monitor xfce4-panel item), follow:
2frozen_ffcompilation_screenshot_2018-08-19_12-30-03
2frozen_iotop_screenshot_2018-08-19_12-29-26
2frozen_meminfo_screenshot_2018-08-19_12-29-49
2frozen_top_screenshot_2018-08-19_12-30-23

@constantoverride (owner) commented Aug 25, 2018:

With SWAP off in the kernel .config (ie. using way2 from this to compile the kernel), the same disk thrashing is reached!

$ zcat /proc/config.gz |grep -i SWAP
# CONFIG_SWAP is not set
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_THP_SWAP=y
CONFIG_NFS_SWAP=y
CONFIG_SUNRPC_SWAP=y
# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set

the kernel is 4.18.5-1.pvops.qubes.x86_64 #1 SMP Sat Aug 25 16:40:48 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

here's iotop:
still_iotop_screenshot_2018-08-25_19-27-05

@constantoverride (owner) commented Aug 26, 2018:

Just found that vm.overcommit_memory=2 will instantly kill the offending task without causing any disk thrashing first!
source: https://unix.stackexchange.com/a/87769/306023

The kernel memory accounting algorithm can be tuned with the vm.overcommit_memory sysctl settings. The possible values are as follows:

0 (default) Heuristic overcommit with weak checks.

1 Always overcommit, no checks.

2 Strict accounting, in this mode the virtual address space limit is determined by the value of vm.overcommit_ratio settings according to the following formula:

virtual memory = (swap + physical memory * (overcommit_ratio / 100))

overcommit_ratio is 50 by default (on Fedora 28)
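Sanity-checking that formula against this setup (the 4000 MB AppVM, no swap, default ratio 50; the numbers are illustrative, the authoritative value is the CommitLimit line in /proc/meminfo):

```shell
ram_kib=4096000   # ~4000 MiB of physical memory, in kB (illustrative)
swap_kib=0        # no swap in these tests
ratio=50          # vm.overcommit_ratio
echo "CommitLimit: $(( swap_kib + ram_kib * ratio / 100 )) kB"
```

So with vm.overcommit_memory=2 only about half the RAM is committable, which fits the offending task dying quickly instead of thrashing.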

WARNING: The disk-thrashing is back with overcommit_ratio=200 !

WARNING: the Fedora 28 AppVM kills gnome-terminal as soon as you set overcommit_ratio=0, and the qube then cannot be shut down, only killed. (this implies vm.overcommit_memory=2 was already set)

@constantoverride (owner) commented Aug 26, 2018:

Relevant kernel files and identifiers:
In /home/user/qubes-builder/chroot-fc25/home/user/rpmbuild/BUILD/kernel-latest-4.18.5/linux-4.18.5/include/linux/gfp.h
__GFP_FS
__GFP_IO

In /home/user/qubes-builder/chroot-fc25/home/user/rpmbuild/BUILD/kernel-latest-4.18.5/linux-4.18.5/mm/page_alloc.c
__need_fs_reclaim
current_gfp_context

In /home/user/qubes-builder/chroot-fc25/home/user/rpmbuild/BUILD/kernel-latest-4.18.5/linux-4.18.5/include/linux/sched/mm.h
current_gfp_context
memalloc_nofs_save
memalloc_nofs_restore

 * PF_MEMALLOC_NOIO implies GFP_NOIO
 * PF_MEMALLOC_NOFS implies GFP_NOFS
@constantoverride (owner) commented Aug 26, 2018:

$ grep -nrIF -- __GFP_FS
include/trace/events/mmflags.h:34:	{(unsigned long)__GFP_FS,		"__GFP_FS"},		\
include/linux/sched/mm.h:159:		flags &= ~(__GFP_IO | __GFP_FS);
include/linux/sched/mm.h:161:		flags &= ~__GFP_FS;
include/linux/sched/mm.h:212: * All further allocations will implicitly drop __GFP_FS flag and so
include/linux/gfp.h:26:#define ___GFP_FS		0x80u
include/linux/gfp.h:121: * __GFP_FS can call down to the low-level FS. Clearing the flag avoids the
include/linux/gfp.h:183:#define __GFP_FS	((__force gfp_t)___GFP_FS)
include/linux/gfp.h:273:#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
include/linux/gfp.h:278:#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
include/linux/pagemap.h:337: * Clear __GFP_FS when allocating the page to avoid recursion into the fs
tools/perf/builtin-kmem.c:643:	{ "__GFP_FS",			"F" },
tools/testing/radix-tree/linux/gfp.h:12:#define __GFP_FS		0x80u
tools/testing/radix-tree/linux/gfp.h:24:#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
drivers/block/loop.c:729:			     lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
drivers/block/loop.c:959:	mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
drivers/staging/android/ashmem.c:439:	if (!(sc->gfp_mask & __GFP_FS))
drivers/md/dm-bufio.c:1549:	if (!(gfp & __GFP_FS)) {
drivers/md/dm-bufio.c:1605:	if (sc->gfp_mask & __GFP_FS)
Documentation/core-api/gfp_mask-from-fs-io.rst:17:The traditional way to avoid this deadlock problem is to clear __GFP_FS
Documentation/core-api/gfp_mask-from-fs-io.rst:33:scope will inherently drop __GFP_FS respectively __GFP_IO from the given
mm/page_alloc.c:179:	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
mm/page_alloc.c:184:	if ((gfp_allowed_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
mm/page_alloc.c:3716:	/* We're only interested __GFP_FS allocations for now */
mm/page_alloc.c:3717:	if (!(gfp_mask & __GFP_FS))
mm/compaction.c:845:		if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
mm/oom_kill.c:1053:	if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
mm/vmscan.c:604: * __GFP_FS.
mm/vmscan.c:978:		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
mm/vmscan.c:1022:		 *    have __GFP_FS (or __GFP_IO if it's simply going to swap,
mm/vmscan.c:1030:		 *    __GFP_IO|__GFP_FS for this reason); but more thought
mm/vmscan.c:1649:	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
mm/vmscan.c:2845: * If the caller is !__GFP_FS then the probability of a failure is reasonably
mm/vmscan.c:3031:	if (!(gfp_mask & __GFP_FS)) {
mm/filemap.c:1575:			gfp_mask &= ~__GFP_FS;
mm/filemap.c:3315: * this page (__GFP_IO), and whether the call may block (__GFP_RECLAIM & __GFP_FS).
mm/vmpressure.c:259:	if (!(gfp & (__GFP_HIGHMEM | __GFP_MOVABLE | __GFP_IO | __GFP_FS)))
mm/memory.c:2372:		return mapping_gfp_mask(vm_file->f_mapping) | __GFP_FS | __GFP_IO;
mm/internal.h:25:#define GFP_RECLAIM_MASK (__GFP_RECLAIM|__GFP_HIGH|__GFP_IO|__GFP_FS|\
mm/internal.h:31:#define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_RECLAIM|__GFP_IO|__GFP_FS))
fs/ceph/addr.c:1489:						~__GFP_FS));
fs/ceph/addr.c:1637:					   ~__GFP_FS));
fs/fscache/page.c:132:	if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS)) {
fs/gfs2/quota.c:172:	if (!(sc->gfp_mask & __GFP_FS))
fs/gfs2/glock.c:1517:	if (!(sc->gfp_mask & __GFP_FS))
fs/nilfs2/inode.c:355:			   mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS));
fs/nilfs2/inode.c:525:			   mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS));
fs/buffer.c:930:	gfp_mask = mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS) | gfp;
fs/ext4/inode.c:3986:				   mapping_gfp_constraint(mapping, ~__GFP_FS));
fs/btrfs/ctree.h:2598:	return mapping_gfp_constraint(mapping, ~__GFP_FS);
fs/btrfs/free-space-cache.c:81:			~(__GFP_FS | __GFP_HIGHMEM)));
fs/btrfs/compression.c:458:							 ~__GFP_FS));
fs/ubifs/file.c:738:	gfp_t ra_gfp_mask = readahead_gfp_mask(mapping) & ~__GFP_FS;
fs/xfs/xfs_qm.c:509:	if ((sc->gfp_mask & (__GFP_FS|__GFP_DIRECT_RECLAIM)) != (__GFP_FS|__GFP_DIRECT_RECLAIM))
fs/xfs/kmem.h:42:			lflags &= ~__GFP_FS;
fs/xfs/xfs_iops.c:1286:	mapping_set_gfp_mask(inode->i_mapping, (gfp_mask & ~(__GFP_FS)));
fs/namei.c:4878:			!mapping_gfp_constraint(inode->i_mapping, __GFP_FS));
fs/super.c:74:	if (!(sc->gfp_mask & __GFP_FS))
fs/jbd2/transaction.c:300:		 * If __GFP_FS is not present, then we may be being called from
fs/jbd2/transaction.c:303:		if ((gfp_mask & __GFP_FS) == 0)
fs/jbd2/transaction.c:1958: * buffers. If __GFP_DIRECT_RECLAIM and __GFP_FS is set, we wait for commit
$ grep -nrIF -- PF_MEMALLOC_NOFS
include/linux/sched.h:1394:#define PF_MEMALLOC_NOFS	0x00040000	/* All allocation requests will inherit GFP_NOFS */
include/linux/sched/mm.h:150: * PF_MEMALLOC_NOFS implies GFP_NOFS
include/linux/sched/mm.h:160:	else if (unlikely(current->flags & PF_MEMALLOC_NOFS))
include/linux/sched/mm.h:221:	unsigned int flags = current->flags & PF_MEMALLOC_NOFS;
include/linux/sched/mm.h:222:	current->flags |= PF_MEMALLOC_NOFS;
include/linux/sched/mm.h:236:	current->flags = (current->flags & ~PF_MEMALLOC_NOFS) | flags;
fs/xfs/xfs_buf.c:456:		 * that we are in such a context via PF_MEMALLOC_NOFS to prevent
fs/xfs/kmem.c:51:	 * context via PF_MEMALLOC_NOFS to prevent memory reclaim re-entering
fs/xfs/xfs_aops.c:216:	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_aops.c:279:	current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_aops.c:1093:	if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS))
fs/xfs/xfs_trans.c:154:	current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_trans.c:164:			current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_trans.c:241:	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_trans.c:948:	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_trans.c:979:	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/xfs_trans.c:1037:	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
fs/xfs/libxfs/xfs_btree.c:2867:	unsigned long		new_pflags = PF_MEMALLOC_NOFS;
@constantoverride (owner) commented Aug 26, 2018:

Tried a manual patch, but it failed. Here's sudo iotop -d 0.1:
3_iotop_screenshot_2018-08-26_21-11-43

@constantoverride (owner) commented Aug 26, 2018:

every kernel change takes ~8 min to recompile (thanks to ccache!), and packaging is the part that takes the longest (like 5 min):

real	8m20.445s
user	8m44.137s
sys	7m14.858s
@constantoverride (owner) commented Aug 27, 2018:

what the ?! ...
with vm.watermark_scale_factor=1000 (default 10), not only do I not hit the disk thrashing, but I also don't run out of memory!
oh wait, I do run out of memory, but the compilation of other things keeps going:

 0:07.04    Compiling encoding_rs v0.7.2
 0:10.54 virtual memory exhausted: Cannot allocate memory
 0:10.71 virtual memory exhausted: Cannot allocate memory
 0:10.71 virtual memory exhausted: Cannot allocate memory
 0:10.71 cc1plus: out of memory allocating 2008 bytes after a total of 23990272 bytes
 0:10.71 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:1032: UnifiedProtocols25.o] Error 1
 0:10.71 gmake[4]: *** Waiting for unfinished jobs....
 0:10.71 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:1032: UnifiedProtocols27.o] Error 1
 0:10.71 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:1032: UnifiedProtocols28.o] Error 1
 0:10.71 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:1032: UnifiedProtocols26.o] Error 1
 0:10.71 gmake[3]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:73: ipc/ipdl/target] Error 2
 0:10.71 gmake[3]: *** Waiting for unfinished jobs....

but hey, no disk thrashing! (even with vm.overcommit_memory=0)

watermark_scale_factor:

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 1000, or 10% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.

source: https://www.kernel.org/doc/Documentation/sysctl/vm.txt
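Plugging in numbers (illustrative, for a 4000 MiB VM) shows how much head-room kswapd keeps at each setting, which may explain the delay observed above:

```shell
mem_mib=4000
for factor in 10 1000; do
  # the doc above: watermark gap is factor/10000 of the node's memory
  echo "watermark_scale_factor=$factor -> gap of about $(( mem_mib * factor / 10000 )) MiB"
done
```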

EDIT: ok, gnome-terminal died before the compilation:

[  322.071585] Core dump to |/usr/lib/systemd/systemd-coredump 923 1000 1000 5 1535360590 18446744073709551615 dev01-w-s-f-fdr28 gnome-terminal- pipe failed

very likely because vm.oom_kill_allocating_task=1 (default 0)

Interestingly, if I clean the build dir and rebuild, I do hit some disk thrashing for less than 20 sec, but then it continues to compile (due to 'Waiting for unfinished jobs')... then it disk-thrashes again (for minutes) at Compiling style, but the system stays responsive (e.g. pressing Enter in a terminal shows it after like 5 sec tops); however, if I try to start another terminal, the Enters don't work anymore.

Search for watermark_scale_factor in this answer: https://unix.stackexchange.com/a/41831/306023
reproducing here:

I nowadays (2017) prefer to have no swap at all if you have enough RAM. Having no swap will usually lose 200-1000 MB of RAM on long running desktop machine. I'm willing to sacrifice that much to avoid worst case scenario latency (swapping application code in when RAM is full). In practice, this means that I prefer OOM Killer to swapping. If you allow/need swapping, you might want to increase /proc/sys/vm/watermark_scale_factor, too, to avoid some latency. I would suggest values between 100 and 500. You can consider this setting as trading CPU usage for lower swap latency. Default is 10 and maximum possible is 1000. Higher value should (according to kernel documentation) result in higher CPU usage for kswapd processes and lower overall swapping latency.

@constantoverride (owner) commented Aug 27, 2018:

sudo sysctl vm.overcommit_memory=0 #was 0
sudo sysctl vm.overcommit_ratio=50 #was 50
sudo sysctl vm.vfs_cache_pressure=0 #was 100
#sudo sysctl vm.watermark_scale_factor=1 #was 10
sudo sysctl vm.watermark_scale_factor=1000 #was 10
sudo sysctl vm.oom_kill_allocating_task=0 #was 0
sync
time rpmbuild -bi --noprep -- SPECS/firefox.spec

I'm guessing the effect of vm.watermark_scale_factor=1000 compared to vm.watermark_scale_factor=10 is just the same thing but slower...
which means sudo iotop -d 0.1 gets to see more reads before the OS/terminal freezes due to the disk thrashing:

well it froze twice, and this last one is final:
5_frozen_iotop_screenshot_2018-08-27_11-31-28
5_frozen_iotop2_screenshot_2018-08-27_11-31-53

@constantoverride (owner) commented Aug 27, 2018:

I've found a synthetic way of causing 10% of the usual disk thrashing:
top in one terminal,
sudo iotop in another terminal,
sudo watch -n0.1 -d -- sysctl vm.drop_caches=3 in a third terminal.

Actual DISK READ: 17.10 M/s | Actual DISK WRITE: 0.00 B/s

If you also start mc and just hold Enter on any directory, you can get 35M/s of actual disk read.

drop_caches
    Setting this value to 1, 2, or 3 causes the kernel to drop various combinations of page cache and slab cache.

    1
        The system invalidates and frees all page cache memory. 
    2
        The system frees all unused slab cache memory. 
    3
        The system frees all page cache and slab cache memory. 

    This is a non-destructive operation. Since dirty objects cannot be freed, running sync before setting this parameter's value is recommended. 

source: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-tunables

@constantoverride (owner) commented Aug 27, 2018:

this is what happens after

sudo sysctl vm.drop_caches=3
vm.drop_caches = 3

(note: I had top and mc running, but mc isn't mentioned below)
(those read blocks don't seem to point to any files; I looked using debugfs)
This is seen in dmesg only when vm.block_dump=1, as follows:

[ 3489.290016] systemd-journal(285): dirtied inode 391208 (system.journal) on xvda3
[ 3489.674702] systemd-journal(285): dirtied inode 391208 (system.journal) on xvda3
[ 3494.533764] audit: type=1101 audit(1535368153.714:392): pid=12703 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
[ 3494.533851] jbd2/xvda3-8(246): WRITE block 8669496 on xvda3 (8 sectors)
[ 3494.533857] jbd2/xvda3-8(246): WRITE block 8669504 on xvda3 (8 sectors)
[ 3494.533858] jbd2/xvda3-8(246): WRITE block 8669512 on xvda3 (8 sectors)
[ 3494.533859] jbd2/xvda3-8(246): WRITE block 8669520 on xvda3 (8 sectors)
[ 3494.533882] audit: type=1123 audit(1535368153.714:393): pid=12703 uid=1000 auid=1000 ses=1 msg='cwd="/home/user/rpmbuild" cmd=73797363746C20766D2E64726F705F6361636865733D33 terminal=pts/2 res=success'
[ 3494.534059] audit: type=1110 audit(1535368153.714:394): pid=12703 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
[ 3494.534333] jbd2/xvda3-8(246): WRITE block 8669528 on xvda3 (8 sectors)
[ 3494.535072] systemd-journal(285): dirtied inode 1309961 (exe) on proc
[ 3494.535099] audit: type=1105 audit(1535368153.716:395): pid=12703 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
[ 3494.535496] sysctl(12704): READ block 5274288 on xvda3 (8 sectors)
[ 3494.537595] sysctl(12704): READ block 5274216 on xvda3 (32 sectors)
[ 3494.538143] sysctl(12704): READ block 6130792 on xvda3 (8 sectors)
[ 3494.538569] sysctl(12704): READ block 6130736 on xvda3 (56 sectors)
[ 3494.538579] sysctl(12704): READ block 6130800 on xvda3 (120 sectors)
[ 3494.539339] sysctl(12704): READ block 5288008 on xvda3 (8 sectors)
[ 3494.540025] sysctl(12704): READ block 5287928 on xvda3 (80 sectors)
[ 3494.540037] sysctl(12704): READ block 5288016 on xvda3 (152 sectors)
[ 3494.541237] sysctl(12704): READ block 5285776 on xvda3 (24 sectors)
[ 3494.572248] sysctl (12704): drop_caches: 3
[ 3494.572306] sysctl(12704): READ block 4595872 on xvda3 (200 sectors)
[ 3494.573475] sysctl(12704): READ block 4624072 on xvda3 (64 sectors)
[ 3494.574252] sysctl(12704): READ block 4749808 on xvda3 (256 sectors)
[ 3494.575267] sudo(12703): READ block 8391936 on xvda3 (8 sectors)
[ 3494.575275] sudo(12703): READ block 8391944 on xvda3 (8 sectors)
[ 3494.575277] sudo(12703): READ block 8391952 on xvda3 (8 sectors)
[ 3494.575279] sudo(12703): READ block 8391960 on xvda3 (8 sectors)
[ 3494.575281] sudo(12703): READ block 8391968 on xvda3 (8 sectors)
[ 3494.575282] sudo(12703): READ block 8391976 on xvda3 (8 sectors)
[ 3494.575284] sudo(12703): READ block 8391984 on xvda3 (8 sectors)
[ 3494.575286] sudo(12703): READ block 8391992 on xvda3 (8 sectors)
[ 3494.575287] sudo(12703): READ block 8392000 on xvda3 (8 sectors)
[ 3494.575289] sudo(12703): READ block 8392008 on xvda3 (8 sectors)
[ 3494.575292] sudo(12703): READ block 8392016 on xvda3 (8 sectors)
[ 3494.575293] sudo(12703): READ block 8392024 on xvda3 (8 sectors)
[ 3494.575295] sudo(12703): READ block 8392032 on xvda3 (8 sectors)
[ 3494.575297] sudo(12703): READ block 8392040 on xvda3 (8 sectors)
[ 3494.575299] sudo(12703): READ block 8392048 on xvda3 (8 sectors)
[ 3494.575301] sudo(12703): READ block 8392056 on xvda3 (8 sectors)
[ 3494.575305] sudo(12703): READ block 8392064 on xvda3 (8 sectors)
[ 3494.575307] sudo(12703): READ block 8392072 on xvda3 (8 sectors)
[ 3494.575310] sudo(12703): READ block 8392080 on xvda3 (8 sectors)
[ 3494.575313] sudo(12703): READ block 8392088 on xvda3 (8 sectors)
[ 3494.575314] sudo(12703): READ block 8392096 on xvda3 (8 sectors)
[ 3494.575316] sudo(12703): READ block 8392104 on xvda3 (8 sectors)
[ 3494.575318] sudo(12703): READ block 8392112 on xvda3 (8 sectors)
[ 3494.575319] sudo(12703): READ block 8392120 on xvda3 (8 sectors)
[ 3494.575321] sudo(12703): READ block 8392128 on xvda3 (8 sectors)
[ 3494.575323] sudo(12703): READ block 8392136 on xvda3 (8 sectors)
[ 3494.575327] sudo(12703): READ block 8392144 on xvda3 (8 sectors)
[ 3494.575329] sudo(12703): READ block 8392160 on xvda3 (8 sectors)
[ 3494.575331] sudo(12703): READ block 8392168 on xvda3 (8 sectors)
[ 3494.575333] sudo(12703): READ block 8392176 on xvda3 (8 sectors)
[ 3494.575334] sudo(12703): READ block 8392184 on xvda3 (8 sectors)
[ 3494.575336] sudo(12703): READ block 8392192 on xvda3 (8 sectors)
[ 3494.575340] sudo(12703): READ block 8392152 on xvda3 (8 sectors)
[ 3494.576782] sudo(12703): READ block 10125608 on xvda3 (8 sectors)
[ 3494.577385] sudo(12703): READ block 8389376 on xvda3 (8 sectors)
[ 3494.577392] sudo(12703): READ block 8389384 on xvda3 (8 sectors)
[ 3494.577394] sudo(12703): READ block 8389392 on xvda3 (8 sectors)
[ 3494.577395] sudo(12703): READ block 8389400 on xvda3 (8 sectors)
[ 3494.577397] sudo(12703): READ block 8389408 on xvda3 (8 sectors)
[ 3494.577399] sudo(12703): READ block 8389416 on xvda3 (8 sectors)
[ 3494.577401] sudo(12703): READ block 8389424 on xvda3 (8 sectors)
[ 3494.577403] sudo(12703): READ block 8389432 on xvda3 (8 sectors)
[ 3494.577404] sudo(12703): READ block 8389440 on xvda3 (8 sectors)
[ 3494.577406] sudo(12703): READ block 8389448 on xvda3 (8 sectors)
[ 3494.577407] sudo(12703): READ block 8389456 on xvda3 (8 sectors)
[ 3494.577409] sudo(12703): READ block 8389464 on xvda3 (8 sectors)
[ 3494.577411] sudo(12703): READ block 8389472 on xvda3 (8 sectors)
[ 3494.577413] sudo(12703): READ block 8389480 on xvda3 (8 sectors)
[ 3494.577414] sudo(12703): READ block 8389488 on xvda3 (8 sectors)
[ 3494.577416] sudo(12703): READ block 8389504 on xvda3 (8 sectors)
[ 3494.577418] sudo(12703): READ block 8389512 on xvda3 (8 sectors)
[ 3494.577419] sudo(12703): READ block 8389520 on xvda3 (8 sectors)
[ 3494.577421] sudo(12703): READ block 8389528 on xvda3 (8 sectors)
[ 3494.577423] sudo(12703): READ block 8389536 on xvda3 (8 sectors)
[ 3494.577425] sudo(12703): READ block 8389544 on xvda3 (8 sectors)
[ 3494.577426] sudo(12703): READ block 8389552 on xvda3 (8 sectors)
[ 3494.577428] sudo(12703): READ block 8389560 on xvda3 (8 sectors)
[ 3494.577429] sudo(12703): READ block 8389568 on xvda3 (8 sectors)
[ 3494.577431] sudo(12703): READ block 8389576 on xvda3 (8 sectors)
[ 3494.577433] sudo(12703): READ block 8389584 on xvda3 (8 sectors)
[ 3494.577434] sudo(12703): READ block 8389592 on xvda3 (8 sectors)
[ 3494.577436] sudo(12703): READ block 8389600 on xvda3 (8 sectors)
[ 3494.577437] sudo(12703): READ block 8389616 on xvda3 (8 sectors)
[ 3494.577439] sudo(12703): READ block 8389624 on xvda3 (8 sectors)
[ 3494.577441] sudo(12703): READ block 8389632 on xvda3 (8 sectors)
[ 3494.577442] sudo(12703): READ block 8389608 on xvda3 (8 sectors)
[ 3494.578389] sudo(12703): READ block 8784704 on xvda3 (8 sectors)
[ 3494.578958] audit: type=1106 audit(1535368153.759:396): pid=12703 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
[ 3494.579018] sudo(12703): READ block 8389888 on xvda3 (8 sectors)
[ 3494.579027] sudo(12703): READ block 8389896 on xvda3 (8 sectors)
[ 3494.579029] sudo(12703): READ block 8389904 on xvda3 (8 sectors)
[ 3494.579032] sudo(12703): READ block 8389912 on xvda3 (8 sectors)
[ 3494.579034] sudo(12703): READ block 8389920 on xvda3 (8 sectors)
[ 3494.579036] sudo(12703): READ block 8389928 on xvda3 (8 sectors)
[ 3494.579039] sudo(12703): READ block 8389944 on xvda3 (8 sectors)
[ 3494.579041] sudo(12703): READ block 8389952 on xvda3 (8 sectors)
[ 3494.579043] sudo(12703): READ block 8389960 on xvda3 (8 sectors)
[ 3494.579047] sudo(12703): READ block 8389968 on xvda3 (8 sectors)
[ 3494.579049] sudo(12703): READ block 8389976 on xvda3 (8 sectors)
[ 3494.579051] sudo(12703): READ block 8389984 on xvda3 (8 sectors)
[ 3494.579054] sudo(12703): READ block 8389992 on xvda3 (8 sectors)
[ 3494.579057] sudo(12703): READ block 8390000 on xvda3 (8 sectors)
[ 3494.579059] sudo(12703): READ block 8390008 on xvda3 (8 sectors)
[ 3494.579062] sudo(12703): READ block 8390016 on xvda3 (8 sectors)
[ 3494.579065] sudo(12703): READ block 8390024 on xvda3 (8 sectors)
[ 3494.579068] sudo(12703): READ block 8390032 on xvda3 (8 sectors)
[ 3494.579070] sudo(12703): READ block 8390040 on xvda3 (8 sectors)
[ 3494.579073] sudo(12703): READ block 8390048 on xvda3 (8 sectors)
[ 3494.579075] sudo(12703): READ block 8390056 on xvda3 (8 sectors)
[ 3494.579077] sudo(12703): READ block 8390064 on xvda3 (8 sectors)
[ 3494.579080] sudo(12703): READ block 8390072 on xvda3 (8 sectors)
[ 3494.579083] sudo(12703): READ block 8390080 on xvda3 (8 sectors)
[ 3494.579085] sudo(12703): READ block 8390088 on xvda3 (8 sectors)
[ 3494.579088] sudo(12703): READ block 8390096 on xvda3 (8 sectors)
[ 3494.579090] sudo(12703): READ block 8390104 on xvda3 (8 sectors)
[ 3494.579093] sudo(12703): READ block 8390112 on xvda3 (8 sectors)
[ 3494.579096] sudo(12703): READ block 8390120 on xvda3 (8 sectors)
[ 3494.579098] sudo(12703): READ block 8390128 on xvda3 (8 sectors)
[ 3494.579100] sudo(12703): READ block 8390136 on xvda3 (8 sectors)
[ 3494.579104] sudo(12703): READ block 8390144 on xvda3 (8 sectors)
[ 3494.579106] sudo(12703): READ block 8389936 on xvda3 (8 sectors)
[ 3494.580909] sudo(12703): READ block 8462056 on xvda3 (8 sectors)
[ 3494.581374] sudo(12703): READ block 8811808 on xvda3 (8 sectors)
[ 3494.582040] sudo(12703): READ block 8388872 on xvda3 (8 sectors)
[ 3494.582046] sudo(12703): READ block 8388880 on xvda3 (8 sectors)
[ 3494.582048] sudo(12703): READ block 8388888 on xvda3 (8 sectors)
[ 3494.582050] sudo(12703): READ block 8388896 on xvda3 (8 sectors)
[ 3494.582051] sudo(12703): READ block 8388904 on xvda3 (8 sectors)
[ 3494.582054] sudo(12703): READ block 8388912 on xvda3 (8 sectors)
[ 3494.582056] sudo(12703): READ block 8388920 on xvda3 (8 sectors)
[ 3494.582058] sudo(12703): READ block 8388928 on xvda3 (8 sectors)
[ 3494.582059] sudo(12703): READ block 8388936 on xvda3 (8 sectors)
[ 3494.582061] sudo(12703): READ block 8388944 on xvda3 (8 sectors)
[ 3494.582062] sudo(12703): READ block 8388952 on xvda3 (8 sectors)
[ 3494.582065] sudo(12703): READ block 8388960 on xvda3 (8 sectors)
[ 3494.582067] sudo(12703): READ block 8388984 on xvda3 (8 sectors)
[ 3494.582069] sudo(12703): READ block 8388992 on xvda3 (8 sectors)
[ 3494.582070] sudo(12703): READ block 8389000 on xvda3 (8 sectors)
[ 3494.582072] sudo(12703): READ block 8389008 on xvda3 (8 sectors)
[ 3494.582074] sudo(12703): READ block 8389016 on xvda3 (8 sectors)
[ 3494.582075] sudo(12703): READ block 8389024 on xvda3 (8 sectors)
[ 3494.582077] sudo(12703): READ block 8389032 on xvda3 (8 sectors)
[ 3494.582080] sudo(12703): READ block 8389040 on xvda3 (8 sectors)
[ 3494.582081] sudo(12703): READ block 8389048 on xvda3 (8 sectors)
[ 3494.582084] sudo(12703): READ block 8389056 on xvda3 (8 sectors)
[ 3494.582085] sudo(12703): READ block 8389064 on xvda3 (8 sectors)
[ 3494.582087] sudo(12703): READ block 8389072 on xvda3 (8 sectors)
[ 3494.582088] sudo(12703): READ block 8389080 on xvda3 (8 sectors)
[ 3494.582090] sudo(12703): READ block 8389088 on xvda3 (8 sectors)
[ 3494.582092] sudo(12703): READ block 8389096 on xvda3 (8 sectors)
[ 3494.582094] sudo(12703): READ block 8389104 on xvda3 (8 sectors)
[ 3494.582095] sudo(12703): READ block 8389112 on xvda3 (8 sectors)
[ 3494.582097] sudo(12703): READ block 8389120 on xvda3 (8 sectors)
[ 3494.582098] sudo(12703): READ block 8388968 on xvda3 (8 sectors)
[ 3494.582971] audit: type=1104 audit(1535368153.763:397): pid=12703 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/2 res=success'
[ 3494.583116] sudo(12703): READ block 4649808 on xvda3 (88 sectors)
[ 3494.583679] sudo(12703): READ block 4797296 on xvda3 (256 sectors)
[ 3494.584946] sudo(12703): READ block 4739504 on xvda3 (256 sectors)
[ 3494.586212] sudo(12703): READ block 4711480 on xvda3 (112 sectors)
[ 3494.586835] sudo(12703): READ block 4698480 on xvda3 (256 sectors)
[ 3494.587704] sudo(12703): READ block 4697904 on xvda3 (224 sectors)
[ 3494.588628] sudo(12703): READ block 6387424 on xvda3 (256 sectors)
[ 3494.589704] sudo(12703): READ block 4464256 on xvda3 (192 sectors)
[ 3494.590747] sudo(12703): READ block 4605312 on xvda3 (256 sectors)
[ 3494.591745] sudo(12703): READ block 4787816 on xvda3 (256 sectors)
[ 3494.592578] sudo(12703): READ block 6132744 on xvda3 (256 sectors)
[ 3494.593468] sudo(12703): READ block 4463232 on xvda3 (144 sectors)
[ 3494.594031] sudo(12703): READ block 4790592 on xvda3 (256 sectors)
[ 3494.595420] bash(12707): READ block 4228608 on xvda3 (8 sectors)
[ 3494.595432] bash(12707): READ block 4228616 on xvda3 (8 sectors)
[ 3494.595435] bash(12707): READ block 4228624 on xvda3 (8 sectors)
[ 3494.595437] bash(12707): READ block 4228632 on xvda3 (8 sectors)
[ 3494.595440] bash(12707): READ block 4228648 on xvda3 (8 sectors)
[ 3494.595442] bash(12707): READ block 4228656 on xvda3 (8 sectors)
[ 3494.595444] bash(12707): READ block 4228664 on xvda3 (8 sectors)
[ 3494.595447] bash(12707): READ block 4228672 on xvda3 (8 sectors)
[ 3494.595452] bash(12707): READ block 4228680 on xvda3 (8 sectors)
[ 3494.595454] bash(12707): READ block 4228688 on xvda3 (8 sectors)
[ 3494.595457] bash(12707): READ block 4228696 on xvda3 (8 sectors)
[ 3494.595462] bash(12707): READ block 4228704 on xvda3 (8 sectors)
[ 3494.595466] bash(12707): READ block 4228712 on xvda3 (8 sectors)
[ 3494.595471] bash(12707): READ block 4228720 on xvda3 (8 sectors)
[ 3494.595474] bash(12707): READ block 4228728 on xvda3 (8 sectors)
[ 3494.595476] bash(12707): READ block 4228736 on xvda3 (8 sectors)
[ 3494.595482] bash(12707): READ block 4228744 on xvda3 (8 sectors)
[ 3494.595484] bash(12707): READ block 4228752 on xvda3 (8 sectors)
[ 3494.595487] bash(12707): READ block 4228760 on xvda3 (8 sectors)
[ 3494.595492] bash(12707): READ block 4228768 on xvda3 (8 sectors)
[ 3494.595494] bash(12707): READ block 4228776 on xvda3 (8 sectors)
[ 3494.595497] bash(12707): READ block 4228784 on xvda3 (8 sectors)
[ 3494.595503] bash(12707): READ block 4228792 on xvda3 (8 sectors)
[ 3494.595505] bash(12707): READ block 4228800 on xvda3 (8 sectors)
[ 3494.595512] bash(12707): READ block 4228808 on xvda3 (8 sectors)
[ 3494.595514] bash(12707): READ block 4228816 on xvda3 (8 sectors)
[ 3494.595519] bash(12707): READ block 4228824 on xvda3 (8 sectors)
[ 3494.595524] bash(12707): READ block 4228832 on xvda3 (8 sectors)
[ 3494.595530] bash(12707): READ block 4228840 on xvda3 (8 sectors)
[ 3494.595532] bash(12707): READ block 4228848 on xvda3 (8 sectors)
[ 3494.595535] bash(12707): READ block 4228856 on xvda3 (8 sectors)
[ 3494.595540] bash(12707): READ block 4228864 on xvda3 (8 sectors)
[ 3494.595542] bash(12707): READ block 4228640 on xvda3 (8 sectors)
[ 3494.596960] bash(12707): READ block 4273856 on xvda3 (8 sectors)
[ 3494.597175] bash(12707): READ block 4268856 on xvda3 (8 sectors)
[ 3494.597376] bash(12707): READ block 4221184 on xvda3 (8 sectors)
[ 3494.597384] bash(12707): READ block 4221192 on xvda3 (8 sectors)
[ 3494.597387] bash(12707): READ block 4221200 on xvda3 (8 sectors)
[ 3494.597391] bash(12707): READ block 4221208 on xvda3 (8 sectors)
[ 3494.597400] bash(12707): READ block 4221216 on xvda3 (8 sectors)
[ 3494.597403] bash(12707): READ block 4221224 on xvda3 (8 sectors)
[ 3494.597409] bash(12707): READ block 4221232 on xvda3 (8 sectors)
[ 3494.597414] bash(12707): READ block 4221240 on xvda3 (8 sectors)
[ 3494.597418] bash(12707): READ block 4221248 on xvda3 (8 sectors)
[ 3494.597421] bash(12707): READ block 4221256 on xvda3 (8 sectors)
[ 3494.597424] bash(12707): READ block 4221264 on xvda3 (8 sectors)
[ 3494.597427] bash(12707): READ block 4221272 on xvda3 (8 sectors)
[ 3494.597430] bash(12707): READ block 4221280 on xvda3 (8 sectors)
[ 3494.597433] bash(12707): READ block 4221288 on xvda3 (8 sectors)
[ 3494.597435] bash(12707): READ block 4221296 on xvda3 (8 sectors)
[ 3494.597439] bash(12707): READ block 4221304 on xvda3 (8 sectors)
[ 3494.597441] bash(12707): READ block 4221312 on xvda3 (8 sectors)
[ 3494.597446] bash(12707): READ block 4221320 on xvda3 (8 sectors)
[ 3494.597448] bash(12707): READ block 4221328 on xvda3 (8 sectors)
[ 3494.597450] bash(12707): READ block 4221336 on xvda3 (8 sectors)
[ 3494.597452] bash(12707): READ block 4221344 on xvda3 (8 sectors)
[ 3494.597453] bash(12707): READ block 4221352 on xvda3 (8 sectors)
[ 3494.597456] bash(12707): READ block 4221368 on xvda3 (8 sectors)
[ 3494.597461] bash(12707): READ block 4221376 on xvda3 (8 sectors)
[ 3494.597463] bash(12707): READ block 4221384 on xvda3 (8 sectors)
[ 3494.597464] bash(12707): READ block 4221392 on xvda3 (8 sectors)
[ 3494.597466] bash(12707): READ block 4221400 on xvda3 (8 sectors)
[ 3494.597468] bash(12707): READ block 4221408 on xvda3 (8 sectors)
[ 3494.597473] bash(12707): READ block 4221416 on xvda3 (8 sectors)
[ 3494.597476] bash(12707): READ block 4221424 on xvda3 (8 sectors)
[ 3494.597479] bash(12707): READ block 4221432 on xvda3 (8 sectors)
[ 3494.597491] bash(12707): READ block 4221440 on xvda3 (8 sectors)
[ 3494.597494] bash(12707): READ block 4221360 on xvda3 (8 sectors)
[ 3494.598585] bash(12707): READ block 67904 on xvdb (8 sectors)
[ 3494.598858] bash(12707): READ block 2320 on xvdb (8 sectors)
[ 3494.598863] bash(12707): READ block 2328 on xvdb (8 sectors)
[ 3494.598865] bash(12707): READ block 2336 on xvdb (8 sectors)
[ 3494.598869] bash(12707): READ block 2344 on xvdb (8 sectors)
[ 3494.598873] bash(12707): READ block 2352 on xvdb (8 sectors)
[ 3494.598877] bash(12707): READ block 2360 on xvdb (8 sectors)
[ 3494.598880] bash(12707): READ block 2368 on xvdb (8 sectors)
[ 3494.598883] bash(12707): READ block 2376 on xvdb (8 sectors)
[ 3494.598886] bash(12707): READ block 2384 on xvdb (8 sectors)
[ 3494.598889] bash(12707): READ block 2392 on xvdb (8 sectors)
[ 3494.598891] bash(12707): READ block 2400 on xvdb (8 sectors)
[ 3494.598894] bash(12707): READ block 2408 on xvdb (8 sectors)
[ 3494.598897] bash(12707): READ block 2416 on xvdb (8 sectors)
[ 3494.598900] bash(12707): READ block 2424 on xvdb (8 sectors)
[ 3494.598904] bash(12707): READ block 2432 on xvdb (8 sectors)
[ 3494.598906] bash(12707): READ block 2440 on xvdb (8 sectors)
[ 3494.598910] bash(12707): READ block 2448 on xvdb (8 sectors)
[ 3494.598912] bash(12707): READ block 2456 on xvdb (8 sectors)
[ 3494.598917] bash(12707): READ block 2464 on xvdb (8 sectors)
[ 3494.598920] bash(12707): READ block 2472 on xvdb (8 sectors)
[ 3494.598922] bash(12707): READ block 2480 on xvdb (8 sectors)
[ 3494.598925] bash(12707): READ block 2488 on xvdb (8 sectors)
[ 3494.598927] bash(12707): READ block 2496 on xvdb (8 sectors)
[ 3494.598930] bash(12707): READ block 2504 on xvdb (8 sectors)
[ 3494.598933] bash(12707): READ block 2512 on xvdb (8 sectors)
[ 3494.598938] bash(12707): READ block 2520 on xvdb (8 sectors)
[ 3494.598941] bash(12707): READ block 2528 on xvdb (8 sectors)
[ 3494.598944] bash(12707): READ block 2536 on xvdb (8 sectors)
[ 3494.598948] bash(12707): READ block 2544 on xvdb (8 sectors)
[ 3494.598950] bash(12707): READ block 2552 on xvdb (8 sectors)
[ 3494.598954] bash(12707): READ block 2560 on xvdb (8 sectors)
[ 3494.598955] bash(12707): READ block 2568 on xvdb (8 sectors)
[ 3494.598957] bash(12707): READ block 2312 on xvdb (8 sectors)
[ 3494.601048] bash(12707): READ block 4268280 on xvda3 (8 sectors)
[ 3494.601256] bash(12707): READ block 4196104 on xvda3 (8 sectors)
[ 3494.601261] bash(12707): READ block 4196112 on xvda3 (8 sectors)
[ 3494.601263] bash(12707): READ block 4196120 on xvda3 (8 sectors)
[ 3494.601265] bash(12707): READ block 4196128 on xvda3 (8 sectors)
[ 3494.601267] bash(12707): READ block 4196136 on xvda3 (8 sectors)
[ 3494.601268] bash(12707): READ block 4196144 on xvda3 (8 sectors)
[ 3494.601270] bash(12707): READ block 4196152 on xvda3 (8 sectors)
[ 3494.601273] bash(12707): READ block 4196168 on xvda3 (8 sectors)
[ 3494.601277] bash(12707): READ block 4196176 on xvda3 (8 sectors)
[ 3494.601279] bash(12707): READ block 4196184 on xvda3 (8 sectors)
[ 3494.601281] bash(12707): READ block 4196192 on xvda3 (8 sectors)
[ 3494.601287] bash(12707): READ block 4196200 on xvda3 (8 sectors)
[ 3494.601290] bash(12707): READ block 4196208 on xvda3 (8 sectors)
[ 3494.601297] bash(12707): READ block 4196216 on xvda3 (8 sectors)
[ 3494.601301] bash(12707): READ block 4196232 on xvda3 (8 sectors)
[ 3494.601307] bash(12707): READ block 4196240 on xvda3 (8 sectors)
[ 3494.601310] bash(12707): READ block 4196248 on xvda3 (8 sectors)
[ 3494.601316] bash(12707): READ block 4196256 on xvda3 (8 sectors)
[ 3494.601319] bash(12707): READ block 4196264 on xvda3 (8 sectors)
[ 3494.601325] bash(12707): READ block 4196272 on xvda3 (8 sectors)
[ 3494.601328] bash(12707): READ block 4196288 on xvda3 (8 sectors)
[ 3494.601334] bash(12707): READ block 4196296 on xvda3 (8 sectors)
[ 3494.601339] bash(12707): READ block 4196304 on xvda3 (8 sectors)
[ 3494.601344] bash(12707): READ block 4196312 on xvda3 (8 sectors)
[ 3494.601347] bash(12707): READ block 4196320 on xvda3 (8 sectors)
[ 3494.601354] bash(12707): READ block 4196336 on xvda3 (8 sectors)
[ 3494.601357] bash(12707): READ block 4196344 on xvda3 (8 sectors)
[ 3494.601365] bash(12707): READ block 4196352 on xvda3 (8 sectors)
[ 3494.601367] bash(12707): READ block 4196328 on xvda3 (8 sectors)
[ 3494.602516] bash(12707): READ block 4617216 on xvda3 (32 sectors)
[ 3494.602828] bash(12707): READ block 8480 on xvda3 (8 sectors)
[ 3494.602835] bash(12707): READ block 8488 on xvda3 (8 sectors)
[ 3494.602838] bash(12707): READ block 8496 on xvda3 (8 sectors)
[ 3494.602840] bash(12707): READ block 8504 on xvda3 (8 sectors)
[ 3494.602842] bash(12707): READ block 8512 on xvda3 (8 sectors)
[ 3494.602845] bash(12707): READ block 8520 on xvda3 (8 sectors)
[ 3494.602850] bash(12707): READ block 8528 on xvda3 (8 sectors)
[ 3494.602853] bash(12707): READ block 8536 on xvda3 (8 sectors)
[ 3494.602857] bash(12707): READ block 8544 on xvda3 (8 sectors)
[ 3494.602860] bash(12707): READ block 8552 on xvda3 (8 sectors)
[ 3494.602863] bash(12707): READ block 8560 on xvda3 (8 sectors)
[ 3494.602866] bash(12707): READ block 8568 on xvda3 (8 sectors)
[ 3494.602869] bash(12707): READ block 8576 on xvda3 (8 sectors)
[ 3494.602871] bash(12707): READ block 8584 on xvda3 (8 sectors)
[ 3494.602874] bash(12707): READ block 8592 on xvda3 (8 sectors)
[ 3494.602876] bash(12707): READ block 8600 on xvda3 (8 sectors)
[ 3494.602878] bash(12707): READ block 8608 on xvda3 (8 sectors)
[ 3494.602881] bash(12707): READ block 8616 on xvda3 (8 sectors)
[ 3494.602883] bash(12707): READ block 8624 on xvda3 (8 sectors)
[ 3494.602886] bash(12707): READ block 8632 on xvda3 (8 sectors)
[ 3494.602889] bash(12707): READ block 8640 on xvda3 (8 sectors)
[ 3494.602892] bash(12707): READ block 8648 on xvda3 (8 sectors)
[ 3494.602895] bash(12707): READ block 8656 on xvda3 (8 sectors)
[ 3494.602898] bash(12707): READ block 8664 on xvda3 (8 sectors)
[ 3494.602901] bash(12707): READ block 8672 on xvda3 (8 sectors)
[ 3494.602903] bash(12707): READ block 8680 on xvda3 (8 sectors)
[ 3494.602905] bash(12707): READ block 8688 on xvda3 (8 sectors)
[ 3494.602907] bash(12707): READ block 8696 on xvda3 (8 sectors)
[ 3494.602910] bash(12707): READ block 8704 on xvda3 (8 sectors)
[ 3494.602913] bash(12707): READ block 8712 on xvda3 (8 sectors)
[ 3494.602915] bash(12707): READ block 8720 on xvda3 (8 sectors)
[ 3494.602917] bash(12707): READ block 8728 on xvda3 (8 sectors)
[ 3494.602919] bash(12707): READ block 8472 on xvda3 (8 sectors)
[ 3494.603825] bash(12707): READ block 4195584 on xvda3 (8 sectors)
[ 3494.603832] bash(12707): READ block 4195592 on xvda3 (8 sectors)
[ 3494.603833] bash(12707): READ block 4195600 on xvda3 (8 sectors)
[ 3494.603835] bash(12707): READ block 4195608 on xvda3 (8 sectors)
[ 3494.603837] bash(12707): READ block 4195616 on xvda3 (8 sectors)
[ 3494.603839] bash(12707): READ block 4195624 on xvda3 (8 sectors)
[ 3494.603841] bash(12707): READ block 4195632 on xvda3 (8 sectors)
[ 3494.603843] bash(12707): READ block 4195640 on xvda3 (8 sectors)
[ 3494.603847] bash(12707): READ block 4195648 on xvda3 (8 sectors)
[ 3494.603849] bash(12707): READ block 4195656 on xvda3 (8 sectors)
[ 3494.603850] bash(12707): READ block 4195664 on xvda3 (8 sectors)
[ 3494.603852] bash(12707): READ block 4195672 on xvda3 (8 sectors)
[ 3494.603854] bash(12707): READ block 4195680 on xvda3 (8 sectors)
[ 3494.603855] bash(12707): READ block 4195688 on xvda3 (8 sectors)
[ 3494.603857] bash(12707): READ block 4195696 on xvda3 (8 sectors)
[ 3494.603859] bash(12707): READ block 4195704 on xvda3 (8 sectors)
[ 3494.603861] bash(12707): READ block 4195712 on xvda3 (8 sectors)
[ 3494.603862] bash(12707): READ block 4195720 on xvda3 (8 sectors)
[ 3494.603867] bash(12707): READ block 4195728 on xvda3 (8 sectors)
[ 3494.603870] bash(12707): READ block 4195736 on xvda3 (8 sectors)
[ 3494.603873] bash(12707): READ block 4195744 on xvda3 (8 sectors)
[ 3494.603877] bash(12707): READ block 4195752 on xvda3 (8 sectors)
[ 3494.603879] bash(12707): READ block 4195768 on xvda3 (8 sectors)
[ 3494.603881] bash(12707): READ block 4195776 on xvda3 (8 sectors)
[ 3494.603882] bash(12707): READ block 4195784 on xvda3 (8 sectors)
[ 3494.603886] bash(12707): READ block 4195792 on xvda3 (8 sectors)
[ 3494.603888] bash(12707): READ block 4195800 on xvda3 (8 sectors)
[ 3494.603889] bash(12707): READ block 4195808 on xvda3 (8 sectors)
[ 3494.603891] bash(12707): READ block 4195824 on xvda3 (8 sectors)
[ 3494.603893] bash(12707): READ block 4195832 on xvda3 (8 sectors)
[ 3494.603895] bash(12707): READ block 4195840 on xvda3 (8 sectors)
[ 3494.603899] bash(12707): READ block 4195816 on xvda3 (8 sectors)
[ 3494.605570] sed(12707): READ block 4617304 on xvda3 (160 sectors)
[ 3494.606222] sed(12707): READ block 4471744 on xvda3 (56 sectors)
[ 3494.606962] sed(12707): READ block 4617248 on xvda3 (56 sectors)
[ 3494.606989] sed(12707): READ block 9032416 on xvda3 (192 sectors)
[ 3494.608183] sed(12707): READ block 4298744 on xvda3 (8 sectors)
[ 3494.608372] sed(12707): READ block 4200192 on xvda3 (8 sectors)
[ 3494.608379] sed(12707): READ block 4200208 on xvda3 (8 sectors)
[ 3494.608382] sed(12707): READ block 4200216 on xvda3 (8 sectors)
[ 3494.608385] sed(12707): READ block 4200224 on xvda3 (8 sectors)
[ 3494.608388] sed(12707): READ block 4200232 on xvda3 (8 sectors)
[ 3494.608394] sed(12707): READ block 4200240 on xvda3 (8 sectors)
[ 3494.608396] sed(12707): READ block 4200248 on xvda3 (8 sectors)
[ 3494.608399] sed(12707): READ block 4200256 on xvda3 (8 sectors)
[ 3494.608401] sed(12707): READ block 4200264 on xvda3 (8 sectors)
[ 3494.608404] sed(12707): READ block 4200272 on xvda3 (8 sectors)
[ 3494.608407] sed(12707): READ block 4200280 on xvda3 (8 sectors)
[ 3494.608411] sed(12707): READ block 4200288 on xvda3 (8 sectors)
[ 3494.608419] sed(12707): READ block 4200296 on xvda3 (8 sectors)
[ 3494.608422] sed(12707): READ block 4200304 on xvda3 (8 sectors)
[ 3494.608424] sed(12707): READ block 4200312 on xvda3 (8 sectors)
[ 3494.608427] sed(12707): READ block 4200320 on xvda3 (8 sectors)
[ 3494.608429] sed(12707): READ block 4200328 on xvda3 (8 sectors)
[ 3494.608432] sed(12707): READ block 4200336 on xvda3 (8 sectors)
[ 3494.608437] sed(12707): READ block 4200344 on xvda3 (8 sectors)
[ 3494.608440] sed(12707): READ block 4200352 on xvda3 (8 sectors)
[ 3494.608442] sed(12707): READ block 4200360 on xvda3 (8 sectors)
[ 3494.608444] sed(12707): READ block 4200368 on xvda3 (8 sectors)
[ 3494.608447] sed(12707): READ block 4200376 on xvda3 (8 sectors)
[ 3494.608449] sed(12707): READ block 4200384 on xvda3 (8 sectors)
[ 3494.608451] sed(12707): READ block 4200392 on xvda3 (8 sectors)
[ 3494.608454] sed(12707): READ block 4200400 on xvda3 (8 sectors)
[ 3494.608460] sed(12707): READ block 4200408 on xvda3 (8 sectors)
[ 3494.608465] sed(12707): READ block 4200416 on xvda3 (8 sectors)
[ 3494.608468] sed(12707): READ block 4200424 on xvda3 (8 sectors)
[ 3494.608471] sed(12707): READ block 4200432 on xvda3 (8 sectors)
[ 3494.608473] sed(12707): READ block 4200440 on xvda3 (8 sectors)
[ 3494.608479] sed(12707): READ block 4200448 on xvda3 (8 sectors)
[ 3494.608481] sed(12707): READ block 4200200 on xvda3 (8 sectors)
[ 3494.609369] sed(12707): READ block 5143736 on xvda3 (48 sectors)
[ 3494.610075] sed(12707): READ block 4477232 on xvda3 (48 sectors)
[ 3494.610550] sed(12707): READ block 4279536 on xvda3 (8 sectors)
[ 3494.610801] sed(12707): READ block 6647192 on xvda3 (256 sectors)
[ 3494.611752] sed(12707): READ block 4298808 on xvda3 (8 sectors)
[ 3494.611958] sed(12707): READ block 5160160 on xvda3 (24 sectors)
[ 3494.612604] sed(12707): READ block 4281296 on xvda3 (8 sectors)
[ 3494.612876] sed(12707): READ block 4210848 on xvda3 (8 sectors)
[ 3494.612882] sed(12707): READ block 4210856 on xvda3 (8 sectors)
[ 3494.612885] sed(12707): READ block 4210864 on xvda3 (8 sectors)
[ 3494.612888] sed(12707): READ block 4210872 on xvda3 (8 sectors)
[ 3494.612891] sed(12707): READ block 4210880 on xvda3 (8 sectors)
[ 3494.612896] sed(12707): READ block 4210888 on xvda3 (8 sectors)
[ 3494.612899] sed(12707): READ block 4210904 on xvda3 (8 sectors)
[ 3494.612902] sed(12707): READ block 4210912 on xvda3 (8 sectors)
[ 3494.612905] sed(12707): READ block 4210920 on xvda3 (8 sectors)
[ 3494.612908] sed(12707): READ block 4210928 on xvda3 (8 sectors)
[ 3494.612910] sed(12707): READ block 4210936 on xvda3 (8 sectors)
[ 3494.612918] sed(12707): READ block 4210944 on xvda3 (8 sectors)
[ 3494.612921] sed(12707): READ block 4210952 on xvda3 (8 sectors)
[ 3494.612924] sed(12707): READ block 4210960 on xvda3 (8 sectors)
[ 3494.612927] sed(12707): READ block 4210968 on xvda3 (8 sectors)
[ 3494.612929] sed(12707): READ block 4210976 on xvda3 (8 sectors)
[ 3494.612932] sed(12707): READ block 4210984 on xvda3 (8 sectors)
[ 3494.612934] sed(12707): READ block 4210992 on xvda3 (8 sectors)
[ 3494.612940] sed(12707): READ block 4211000 on xvda3 (8 sectors)
[ 3494.612943] sed(12707): READ block 4211008 on xvda3 (8 sectors)
[ 3494.612945] sed(12707): READ block 4211016 on xvda3 (8 sectors)
[ 3494.612947] sed(12707): READ block 4211024 on xvda3 (8 sectors)
[ 3494.612950] sed(12707): READ block 4211032 on xvda3 (8 sectors)
[ 3494.612953] sed(12707): READ block 4211040 on xvda3 (8 sectors)
[ 3494.612959] sed(12707): READ block 4211048 on xvda3 (8 sectors)
[ 3494.612961] sed(12707): READ block 4211056 on xvda3 (8 sectors)
[ 3494.612964] sed(12707): READ block 4211064 on xvda3 (8 sectors)
[ 3494.612966] sed(12707): READ block 4211072 on xvda3 (8 sectors)
[ 3494.612971] sed(12707): READ block 4211080 on xvda3 (8 sectors)
[ 3494.612977] sed(12707): READ block 4211088 on xvda3 (8 sectors)
[ 3494.612979] sed(12707): READ block 4211096 on xvda3 (8 sectors)
[ 3494.612982] sed(12707): READ block 4211104 on xvda3 (8 sectors)
[ 3494.612984] sed(12707): READ block 4210896 on xvda3 (8 sectors)
[ 3494.614613] sed(12707): READ block 4727160 on xvda3 (32 sectors)
[ 3494.615051] sed(12707): READ block 4298752 on xvda3 (8 sectors)
[ 3494.615307] sed(12707): READ block 4479176 on xvda3 (16 sectors)
[ 3494.615579] sed(12707): READ block 4195848 on xvda3 (8 sectors)
[ 3494.615586] sed(12707): READ block 4195856 on xvda3 (8 sectors)
[ 3494.615588] sed(12707): READ block 4195864 on xvda3 (8 sectors)
[ 3494.615589] sed(12707): READ block 4195872 on xvda3 (8 sectors)
[ 3494.615591] sed(12707): READ block 4195880 on xvda3 (8 sectors)
[ 3494.615593] sed(12707): READ block 4195888 on xvda3 (8 sectors)
[ 3494.615594] sed(12707): READ block 4195896 on xvda3 (8 sectors)
[ 3494.615596] sed(12707): READ block 4195904 on xvda3 (8 sectors)
[ 3494.615598] sed(12707): READ block 4195920 on xvda3 (8 sectors)
[ 3494.615600] sed(12707): READ block 4195928 on xvda3 (8 sectors)
[ 3494.615602] sed(12707): READ block 4195936 on xvda3 (8 sectors)
[ 3494.615603] sed(12707): READ block 4195944 on xvda3 (8 sectors)
[ 3494.615605] sed(12707): READ block 4195952 on xvda3 (8 sectors)
[ 3494.615607] sed(12707): READ block 4195960 on xvda3 (8 sectors)
[ 3494.615611] sed(12707): READ block 4195968 on xvda3 (8 sectors)
[ 3494.615614] sed(12707): READ block 4195976 on xvda3 (8 sectors)
[ 3494.615616] sed(12707): READ block 4195984 on xvda3 (8 sectors)
[ 3494.615618] sed(12707): READ block 4195992 on xvda3 (8 sectors)
[ 3494.615620] sed(12707): READ block 4196000 on xvda3 (8 sectors)
[ 3494.615621] sed(12707): READ block 4196008 on xvda3 (8 sectors)
[ 3494.615623] sed(12707): READ block 4196016 on xvda3 (8 sectors)
[ 3494.615624] sed(12707): READ block 4196024 on xvda3 (8 sectors)
[ 3494.615626] sed(12707): READ block 4196032 on xvda3 (8 sectors)
[ 3494.615632] sed(12707): READ block 4195912 on xvda3 (8 sectors)
[ 3494.616388] sed(12707): READ block 4481224 on xvda3 (104 sectors)
[ 3494.750545] top(6277): READ block 8408064 on xvda3 (8 sectors)
[ 3494.750556] top(6277): READ block 8408072 on xvda3 (8 sectors)
[ 3494.750558] top(6277): READ block 8408080 on xvda3 (8 sectors)
[ 3494.750560] top(6277): READ block 8408088 on xvda3 (8 sectors)
[ 3494.750561] top(6277): READ block 8408096 on xvda3 (8 sectors)
[ 3494.750563] top(6277): READ block 8408104 on xvda3 (8 sectors)
[ 3494.750565] top(6277): READ block 8408112 on xvda3 (8 sectors)
[ 3494.750566] top(6277): READ block 8408120 on xvda3 (8 sectors)
[ 3494.750568] top(6277): READ block 8408128 on xvda3 (8 sectors)
[ 3494.750570] top(6277): READ block 8408136 on xvda3 (8 sectors)
[ 3494.750571] top(6277): READ block 8408144 on xvda3 (8 sectors)
[ 3494.750573] top(6277): READ block 8408152 on xvda3 (8 sectors)
[ 3494.750574] top(6277): READ block 8408160 on xvda3 (8 sectors)
[ 3494.750577] top(6277): READ block 8408168 on xvda3 (8 sectors)
[ 3494.750579] top(6277): READ block 8408176 on xvda3 (8 sectors)
[ 3494.750581] top(6277): READ block 8408184 on xvda3 (8 sectors)
[ 3494.750582] top(6277): READ block 8408192 on xvda3 (8 sectors)
[ 3494.750584] top(6277): READ block 8408200 on xvda3 (8 sectors)
[ 3494.750585] top(6277): READ block 8408216 on xvda3 (8 sectors)
[ 3494.750590] top(6277): READ block 8408208 on xvda3 (8 sectors)
[ 3494.751396] top(6277): READ block 4194560 on xvda3 (8 sectors)
[ 3494.751403] top(6277): READ block 4194568 on xvda3 (8 sectors)
[ 3494.751405] top(6277): READ block 4194576 on xvda3 (8 sectors)
[ 3494.751407] top(6277): READ block 4194584 on xvda3 (8 sectors)
[ 3494.751409] top(6277): READ block 4194592 on xvda3 (8 sectors)
[ 3494.751410] top(6277): READ block 4194600 on xvda3 (8 sectors)
[ 3494.751412] top(6277): READ block 4194608 on xvda3 (8 sectors)
[ 3494.751414] top(6277): READ block 4194616 on xvda3 (8 sectors)
[ 3494.751417] top(6277): READ block 4194624 on xvda3 (8 sectors)
[ 3494.751418] top(6277): READ block 4194632 on xvda3 (8 sectors)
[ 3494.751422] top(6277): READ block 4194640 on xvda3 (8 sectors)
[ 3494.751424] top(6277): READ block 4194648 on xvda3 (8 sectors)
[ 3494.751426] top(6277): READ block 4194656 on xvda3 (8 sectors)
[ 3494.751428] top(6277): READ block 4194664 on xvda3 (8 sectors)
[ 3494.751445] top(6277): READ block 4194672 on xvda3 (8 sectors)
[ 3494.751446] top(6277): READ block 4194680 on xvda3 (8 sectors)
[ 3494.751448] top(6277): READ block 4194688 on xvda3 (8 sectors)
[ 3494.751450] top(6277): READ block 4194696 on xvda3 (8 sectors)
[ 3494.751467] top(6277): READ block 4194704 on xvda3 (8 sectors)
[ 3494.751468] top(6277): READ block 4194712 on xvda3 (8 sectors)
[ 3494.751470] top(6277): READ block 4194720 on xvda3 (8 sectors)
[ 3494.751472] top(6277): READ block 4194728 on xvda3 (8 sectors)
[ 3494.751475] top(6277): READ block 4194736 on xvda3 (8 sectors)
[ 3494.751478] top(6277): READ block 4194744 on xvda3 (8 sectors)
[ 3494.751480] top(6277): READ block 4194752 on xvda3 (8 sectors)
[ 3494.751483] top(6277): READ block 4194760 on xvda3 (8 sectors)
[ 3494.751485] top(6277): READ block 4194768 on xvda3 (8 sectors)
[ 3494.751487] top(6277): READ block 4194776 on xvda3 (8 sectors)
[ 3494.751491] top(6277): READ block 4194784 on xvda3 (8 sectors)
[ 3494.751493] top(6277): READ block 4194792 on xvda3 (8 sectors)
[ 3494.751495] top(6277): READ block 4194808 on xvda3 (8 sectors)
[ 3494.751497] top(6277): READ block 4194816 on xvda3 (8 sectors)
[ 3494.753138] top(6277): READ block 4194800 on xvda3 (8 sectors)
[ 3494.753629] top(6277): READ block 4260760 on xvda3 (8 sectors)
[ 3494.753973] top(6277): READ block 4194824 on xvda3 (8 sectors)
[ 3494.753979] top(6277): READ block 4194832 on xvda3 (8 sectors)
[ 3494.753981] top(6277): READ block 4194840 on xvda3 (8 sectors)
[ 3494.753983] top(6277): READ block 4194848 on xvda3 (8 sectors)
[ 3494.753986] top(6277): READ block 4194856 on xvda3 (8 sectors)
[ 3494.753994] top(6277): READ block 4194864 on xvda3 (8 sectors)
[ 3494.754016] top(6277): READ block 4194872 on xvda3 (8 sectors)
[ 3494.754024] top(6277): READ block 4194880 on xvda3 (8 sectors)
[ 3494.754027] top(6277): READ block 4194888 on xvda3 (8 sectors)
[ 3494.754029] top(6277): READ block 4194896 on xvda3 (8 sectors)
[ 3494.754032] top(6277): READ block 4194904 on xvda3 (8 sectors)
[ 3494.754035] top(6277): READ block 4194928 on xvda3 (8 sectors)
[ 3494.754038] top(6277): READ block 4194936 on xvda3 (8 sectors)
[ 3494.754043] top(6277): READ block 4194944 on xvda3 (8 sectors)
[ 3494.754046] top(6277): READ block 4194952 on xvda3 (8 sectors)
[ 3494.754049] top(6277): READ block 4194960 on xvda3 (8 sectors)
[ 3494.754055] top(6277): READ block 4194968 on xvda3 (8 sectors)
[ 3494.754057] top(6277): READ block 4194976 on xvda3 (8 sectors)
[ 3494.754062] top(6277): READ block 4194984 on xvda3 (8 sectors)
[ 3494.754072] top(6277): READ block 4194992 on xvda3 (8 sectors)
[ 3494.754078] top(6277): READ block 4195000 on xvda3 (8 sectors)
[ 3494.754081] top(6277): READ block 4195008 on xvda3 (8 sectors)
[ 3494.754087] top(6277): READ block 4195016 on xvda3 (8 sectors)
[ 3494.754090] top(6277): READ block 4194912 on xvda3 (8 sectors)
[ 3494.754697] top(6277): READ block 12648320 on xvda3 (8 sectors)
@constantoverride commented Aug 27, 2018

Here's a dom0 iotop, while the AppVM is disk thrashing:
dom0iotop_screenshot_2018-08-27_18-10-13

@constantoverride commented Aug 27, 2018

Note that when I write VM, I mean virtual machine, not virtual memory;
the latter sense is the one used here: https://www.kernel.org/doc/gorman/pdf/understand.pdf

@constantoverride commented Aug 28, 2018

in ~/rpmbuild/:

#!/bin/bash

#the following two will just kill this terminal and everything, and there won't be enough memory left to even shut down the qube (so don't use them!):
#sudo sysctl vm.overcommit_memory=2 #was 0
#sudo sysctl vm.overcommit_kbytes=1 #was 0, will override vm.overcommit_ratio (ie. will set it to 0)

sudo sysctl vm.overcommit_memory=1 #was 0
#sudo sysctl vm.overcommit_memory=0 #was 0
#sudo sysctl vm.overcommit_ratio=50 #was 50, will override vm.overcommit_kbytes (ie. will set it to 0)

sudo sysctl vm.vfs_cache_pressure=0 #was 100
#sudo sysctl vm.vfs_cache_pressure=100 #was 100
#sudo sysctl vm.watermark_scale_factor=1 #was 10
sudo sysctl vm.watermark_scale_factor=1000 #was 10
#sudo sysctl vm.watermark_scale_factor=10 #was 10
sudo sysctl vm.oom_kill_allocating_task=0 #was 0
sync
time rpmbuild -bi --noprep -- SPECS/firefox.spec

iotop vm:
6_iotopvm_screenshot_2018-08-28_08-42-20

iotop dom0:
6_iotop_dom0_screenshot_2018-08-28_08-42-37

@constantoverride commented Aug 28, 2018

solved: https://stackoverflow.com/q/52058914/10239615

Something's not right with the block numbers logged during disk thrashing with vm.block_dump=1, using this:

#!/bin/bash

#the following two will just kill this terminal and everything, and there won't be enough memory left to even shut down the qube (so don't use them!):
#sudo sysctl vm.overcommit_memory=2 #was 0
#sudo sysctl vm.overcommit_kbytes=1 #was 0, will override vm.overcommit_ratio (ie. will set it to 0)

sudo sysctl vm.overcommit_memory=1 #was 0
#sudo sysctl vm.overcommit_memory=0 #was 0
#sudo sysctl vm.overcommit_ratio=50 #was 50, will override vm.overcommit_kbytes (ie. will set it to 0)

sudo sysctl vm.vfs_cache_pressure=0 #was 100
#sudo sysctl vm.vfs_cache_pressure=100 #was 100
#sudo sysctl vm.watermark_scale_factor=1 #was 10
sudo sysctl vm.watermark_scale_factor=1000 #was 10
#sudo sysctl vm.watermark_scale_factor=10 #was 10
sudo sysctl vm.oom_kill_allocating_task=0 #was 0
sudo sysctl vm.block_dump=1 #was 0
gnome-terminal -- sudo dmesg -w
sync
time rpmbuild -bi --noprep -- SPECS/firefox.spec

The OS froze at this point (which is expected):
screenshot_2018-08-28_09-37-47
but those block numbers associated with the processes are all messed up:

debugfs:  icheck 2154544
Block	Inode number
2154544	525013
debugfs:  ncheck 525013
Inode	Pathname
525013	/boot/grub2/themes/system/fireworks.png

that block was a cc1plus(2539) read of 16 sectors!
Why would cc1plus read that file?! And that's certainly a file that never changes on disk.
Either vm.block_dump=1 is reporting block numbers wrongly, or ... what?!

debugfs 1.44.2 (14-May-2018)
debugfs:  open /dev/xvda3
debugfs:  icheck 1704536
Block	Inode number
1704536	<block not found>
debugfs:  ncheck 1704536
Inode	Pathname
debugfs:  icheck 59282912
Block	Inode number
59282912	<block not found>
debugfs:  icheck 1716928
Block	Inode number
1716928	394546
debugfs:  ncheck 394546
Inode	Pathname
394546	/var/lib/rpm/Packages
debugfs:  icheck 171456
Block	Inode number
171456	<block not found>
debugfs:  icheck 1716456
Block	Inode number
1716456	394546
debugfs:  ncheck 394546
Inode	Pathname
394546	/var/lib/rpm/Packages
debugfs:  close
debugfs:  open /dev/xvdb
debugfs:  icheck 34306968
Block	Inode number
34306968	<block not found>
debugfs:  icheck 96548624
Block	Inode number
96548624	<block not found>
debugfs:  icheck 34307216
Block	Inode number
34307216	<block not found>
debugfs:  icheck 34194904
Block	Inode number
34194904	<block not found>
debugfs:  icheck 2270712
Block	Inode number
2270712	<block not found>
debugfs:  icheck 2270712
Block	Inode number
2270712	<block not found>
debugfs:  close
debugfs:  open /dev/xvda3
debugfs:  icheck 2270712
Block	Inode number
2270712	<block not found>
debugfs:  icheck 2270760
Block	Inode number
2270760	<block not found>
debugfs:  icheck 2270776
Block	Inode number
2270776	<block not found>
debugfs:  icheck 2281328
Block	Inode number
2281328	<block not found>
debugfs:  icheck 1706080
Block	Inode number
1706080	416394
debugfs:  ncheck 416394
Inode	Pathname
416394	/usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/ccp.h
debugfs:  icheck 1706128
Block	Inode number
1706128	416409
debugfs:  ncheck 416409
Inode	Pathname
416409	/usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h
debugfs:  icheck 2281352
Block	Inode number
2281352	<block not found>
debugfs:  icheck 1705848
Block	Inode number
1705848	416270
debugfs:  ncheck 416270
Inode	Pathname
416270	/usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/a.out.h
debugfs:  icheck 14404632
Block	Inode number
14404632	<block not found>

For example, block 1706128 maps to /usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h, supposedly accessed by the process systemd-journal(290). Really, though?
Hell, my running kernel is:

$ uname -a
Linux dev01-w-s-f-fdr28 4.18.5-2.pvops.qubes.x86_64 #1 SMP Mon Aug 27 16:30:27 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ stat /usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h
  File: /usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h
  Size: 8508      	Blocks: 24         IO Block: 4096   regular file
Device: ca03h/51715d	Inode: 416409      Links: 3
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2018-05-24 01:36:34.318000000 +0200
Modify: 2018-05-17 09:05:02.000000000 +0200
Change: 2018-08-27 10:34:44.268000000 +0200
 Birth: -
[user@dev01-w-s-f-fdr28 ~]$ filefrag -s -v  /usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h
Filesystem type is: ef53
File size of /usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h is 8508 (3 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..       2:    1706126..   1706128:      3:             last,eof
/usr/src/kernels/4.17.14-202.fc28.x86_64/include/linux/clocksource.h: 1 extent found
@constantoverride commented Aug 28, 2018

Found a good way to recreate some disk thrashing that does recover: https://unix.stackexchange.com/questions/423261/how-to-avoid-high-latency-near-oom-situation

Look at the date:

$ sudo nice -n -19 bash -c 'while true; do NS=$(date "+%N" | sed "s/^0*//"); let "S=998000000 - $NS"; S=$(( S > 0 ? S : 0)); LC_ALL=C sleep "0.$S"; date --iso=ns; done'

Apply virtual memory pressure periodically:

$ sudo sysctl -w vm.block_dump=1 && while true; do date; nice -n +20 stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; sleep 5s; done; sudo sysctl -w vm.block_dump=0

That date loop (run as root, above) doesn't skip timestamps when stress runs at +20 (lowest priority) as opposed to -20 (highest priority); there is only an output delay in the former case. In the latter case, however, nice: cannot set niceness: Permission denied forces me to use sudo nice -n -20, and then there are no output delays at all. Weird: I'd expect exactly the opposite, but maybe it has something to do with stress then running as the same user (root) as the date outputter? Or something else.

@constantoverride commented Aug 28, 2018

what the?! Using the above, there's absolutely zero disk thrashing now that I've put /var/log/journal/ into tmpfs via fstab:
tmpfs /var/log/journal tmpfs defaults,size=3G 0 0

Maybe the above method is just not effective anymore?
Will try building firefox next...
oh yeah, building firefox definitely still triggers the disk thrashing; the only problem is that I still have no way of terminating it other than Kill qube (pausing/unpausing the qube several times doesn't work, probably because of the 4000MB instead of 12000MB max RAM).
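Since once the thrashing starts the qube can only be killed, a userspace OOM killer that acts before the kernel's (like the earlyoom/nohang tools mentioned at the top) is one way out. Here is a minimal sketch of the idea only; the threshold is an assumed value, and nothing below is taken from earlyoom itself:

```shell
#!/bin/bash
# earlyoom-style watchdog sketch; THRESHOLD_KB is an assumption, tune it

THRESHOLD_KB=102400            # act when MemAvailable drops below ~100 MiB

mem_available_kb() {           # current MemAvailable, in kB
    awk '/MemAvailable/{print $2}' /proc/meminfo
}

pick_victim() {                # stdin: "oom_score pid" lines; stdout: highest-score pid
    sort -rn | head -n 1 | awk '{print $2}'
}

watchdog() {
    while sleep 1; do
        if [ "$(mem_available_kb)" -lt "$THRESHOLD_KB" ]; then
            victim=$(for p in /proc/[0-9]*; do
                         printf '%s %s\n' "$(cat "$p/oom_score" 2>/dev/null || echo 0)" "${p#/proc/}"
                     done | pick_victim)
            echo "low memory: killing PID $victim"
            kill -9 "$victim"
        fi
    done
}

# start it in the background (commented out on purpose):
# watchdog &
```

This mimics the victim selection the kernel already does via /proc/&lt;pid&gt;/oom_score; the only point is to act earlier, before the file LRU has been emptied and the thrashing begins.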

@constantoverride commented Aug 28, 2018

solved: https://stackoverflow.com/q/52058914/10239615

$ ./showblock 6040248
--------
dmesg block(512 byte sector number): 6040248
actual block(4096 bytes): 755031
inode: 158917
path : /usr/lib64/libvte-2.91.so.0.5200.2

In other words, when I first assumed the dmesg block numbers were sectors, I multiplied by 8 instead of dividing by 8; otherwise I would've gotten it long ago.
So sudo sysctl -w vm.block_dump=1 reports 512-byte block numbers (aka sector numbers) in dmesg, but icheck in debugfs needs 4096-byte block numbers.
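So the conversion is just an integer division by 8 (4096/512). A sketch of what a showblock-style helper boils down to, using the sector number from above (the debugfs calls are shown as comments since they need root and a real device):

```shell
#!/bin/sh
# convert a vm.block_dump "block" (really a 512-byte sector number) into the
# 4096-byte filesystem block number that debugfs icheck expects
sector=6040248
block=$((sector / 8))          # 8 sectors of 512 bytes per 4096-byte block
echo "sector $sector -> fs block $block"
# then, as root (device taken from the dmesg line):
#   debugfs -R "icheck $block" /dev/xvda3      # block  -> inode number
#   debugfs -R "ncheck <inode>" /dev/xvda3     # inode  -> pathname
```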

@constantoverride commented Aug 28, 2018

So now we can see that the sleep process is reading its own executable:
[ 3140.736022] sleep(13522): READ block 5379184 on xvda3 (24 sectors)
which means it is indeed re-reading its own code page(s), because kswapd0 (most likely) evicted them under memory pressure. That explains the disk thrashing.
All this was already correctly stated here (answer & comment): https://askubuntu.com/a/432827/861003

@constantoverride commented Aug 28, 2018

Back to causing 1 sec of disk thrashing periodically (after 5 sec of waiting), enough to capture a log, thanks to increasing the workers by 1 (-m 2 instead of -m 1):

sudo sysctl -w vm.block_dump=1 && while true; do date; nice -n +20 stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; sleep 5s; done; sudo sysctl -w vm.block_dump=0
vm.block_dump = 1
Wed Aug 29 00:12:40 CEST 2018
stress: info: [11187] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [11187] (415) <-- worker 11188 got signal 9
stress: WARN: [11187] (417) now reaping child worker processes
stress: FAIL: [11187] (451) failed run completed in 1s
Wed Aug 29 00:12:46 CEST 2018
stress: info: [11300] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [11300] (415) <-- worker 11302 got signal 9
stress: WARN: [11300] (417) now reaping child worker processes
stress: FAIL: [11300] (451) failed run completed in 1s
^C
@constantoverride commented Aug 28, 2018

Ok so here's 1 sec of disk thrash, caused by this:

$ sudo sysctl -w vm.block_dump=1 && nice -n +20 stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; sudo sysctl -w vm.block_dump=0
vm.block_dump = 1
stress: info: [20758] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [20758] (415) <-- worker 20760 got signal 9
stress: WARN: [20758] (417) now reaping child worker processes
stress: FAIL: [20758] (451) failed run completed in 1s
vm.block_dump = 0

and here's its corresponding dmesg: https://gist.github.com/constantoverride/bcf4a0d63817294cfd625ddb78537fb3

@constantoverride commented Aug 28, 2018

active_file:440kB inactive_file:464kB

@constantoverride commented Aug 28, 2018

So, all the executable code pages got evicted... how the heck do I stop them from ever getting evicted? :D
You would think that sudo sysctl vm.vfs_cache_pressure=0 #was 100 would do it, but nope.

[ 6855.916850] active_anon:900290 inactive_anon:12877 isolated_anon:0
                active_file:197 inactive_file:131 isolated_file:0
                unevictable:15690 dirty:0 writeback:0 unstable:0
                slab_reclaimable:7053 slab_unreclaimable:16258
                mapped:1291 shmem:14832 pagetables:4964 bounce:0
                free:15361 free_pcp:89 free_cma:0

or the opposite: $ sudo sysctl vm.vfs_cache_pressure=2000 #was 100:

[ 6926.685109] Mem-Info:
[ 6926.685115] active_anon:900489 inactive_anon:12877 isolated_anon:0
                active_file:145 inactive_file:84 isolated_file:0
                unevictable:15690 dirty:0 writeback:0 unstable:0
                slab_reclaimable:7076 slab_unreclaimable:16305
                mapped:1295 shmem:14832 pagetables:4962 bounce:0
                free:15175 free_pcp:6 free_cma:0

What about: $ sudo sysctl vm.vfs_cache_pressure=1 #was 100:

[ 7027.663245] Mem-Info:
[ 7027.663253] active_anon:875285 inactive_anon:12877 isolated_anon:0
                active_file:136 inactive_file:79 isolated_file:0
                unevictable:15690 dirty:0 writeback:0 unstable:0
                slab_reclaimable:7087 slab_unreclaimable:16221
                mapped:1353 shmem:14832 pagetables:4973 bounce:0
                free:16368 free_pcp:281 free_cma:0
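The Mem-Info dumps above come from the same counters exported in /proc/meminfo, so the effect of these knobs can also be watched live, without waiting for an OOM dump:

```shell
# show the file-backed LRU sizes (the active_file/inactive_file above, in kB)
grep -E '^(Active|Inactive)\(file\):' /proc/meminfo
# or continuously, e.g.:
#   watch -n1 "grep -E '(Active|Inactive)\(file\)' /proc/meminfo"
```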
@constantoverride commented Aug 28, 2018

$ sudo sysctl vm.vfs_cache_pressure=0 #was 100
$ sudo sysctl vm.watermark_scale_factor=1000 #was 10
[ 7137.448428] Mem-Info:
[ 7137.448435] active_anon:900629 inactive_anon:12877 isolated_anon:0
                active_file:123 inactive_file:33 isolated_file:0
                unevictable:15690 dirty:0 writeback:0 unstable:0
                slab_reclaimable:7070 slab_unreclaimable:16223
                mapped:1370 shmem:14836 pagetables:4990 bounce:0
                free:15080 free_pcp:157 free_cma:0

What about:

$ sudo sysctl vm.watermark_scale_factor=10 #was 10
vm.watermark_scale_factor = 10
[ 7271.055673] Mem-Info:
[ 7271.055679] active_anon:900581 inactive_anon:12880 isolated_anon:0
                active_file:140 inactive_file:147 isolated_file:0
                unevictable:15726 dirty:0 writeback:0 unstable:0
                slab_reclaimable:7072 slab_unreclaimable:16257
                mapped:1374 shmem:14839 pagetables:5051 bounce:0
                free:15056 free_pcp:111 free_cma:0

$ sudo sysctl vm.watermark_scale_factor=1 #was 10
vm.watermark_scale_factor = 1
[ 7320.032034] Mem-Info:
[ 7320.032044] active_anon:900613 inactive_anon:12884 isolated_anon:0
                active_file:187 inactive_file:113 isolated_file:0
                unevictable:15726 dirty:0 writeback:0 unstable:0
                slab_reclaimable:7097 slab_unreclaimable:16300
                mapped:1387 shmem:14843 pagetables:5011 bounce:0
                free:15153 free_pcp:86 free_cma:0
@constantoverride commented Aug 28, 2018

Now if I could prevent the Active(file) from getting evicted...

$ grep -nrIE 'LRU_(IN)?ACTIVE_FILE'
include/trace/events/mmflags.h:237:		EM (LRU_INACTIVE_FILE, "inactive_file") \
include/trace/events/mmflags.h:238:		EM (LRU_ACTIVE_FILE, "active_file") \
include/linux/mmzone.h:203:	LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
include/linux/mmzone.h:204:	LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
include/linux/mmzone.h:211:#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)
include/linux/mmzone.h:215:	return (lru == LRU_INACTIVE_FILE || lru == LRU_ACTIVE_FILE);
include/linux/mmzone.h:220:	return (lru == LRU_ACTIVE_ANON || lru == LRU_ACTIVE_FILE);
include/linux/mmzone.h:249:#define LRU_ALL_FILE (BIT(LRU_INACTIVE_FILE) | BIT(LRU_ACTIVE_FILE))
include/linux/mm_inline.h:79:		return LRU_INACTIVE_FILE;
mm/workingset.c:272:	active_file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES);
mm/page_alloc.c:4799:	pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
mm/swap.c:564:		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
mm/vmscan.c:2209:	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority) {
mm/vmscan.c:2237:	file  = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
mm/vmscan.c:2238:		lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);
mm/vmscan.c:2348:	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
mm/vmscan.c:2349:					nr[LRU_INACTIVE_FILE]) {
mm/vmscan.c:2375:		nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
mm/vmscan.c:2393:			unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
mm/vmscan.c:2394:						targets[LRU_ACTIVE_FILE] + 1;
mm/memcontrol.c:3630:	*pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
mm/memcontrol.c:3631:						     (1 << LRU_ACTIVE_FILE));
fs/proc/meminfo.c:63:					   pages[LRU_ACTIVE_FILE]);
fs/proc/meminfo.c:65:					   pages[LRU_INACTIVE_FILE]);
fs/proc/meminfo.c:68:	show_val_kb(m, "Active(file):   ", pages[LRU_ACTIVE_FILE]);
fs/proc/meminfo.c:69:	show_val_kb(m, "Inactive(file): ", pages[LRU_INACTIVE_FILE]);
@constantoverride commented Aug 28, 2018

enum lru_list {
        LRU_INACTIVE_ANON = LRU_BASE,
        LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
        LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
        LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
        LRU_UNEVICTABLE,
        NR_LRU_LISTS
};

#define for_each_lru(lru) for (lru = 0; lru < NR_LRU_LISTS; lru++)

#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)

uhm... that'd be too easy... I mean changing it to this:

#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_INACTIVE_FILE; lru++)

(causes a hang with high cpu usage :D ofc)

changing include/linux/mmzone.h causes pretty much everything to ccache miss:

cache directory                     /home/user/.ccache
primary config                      /home/user/.ccache/ccache.conf
secondary config      (readonly)    /etc/ccache.conf
cache hit (direct)                174916
cache hit (preprocessed)           11793
cache miss                         35793
cache hit rate                     83.91 %
called for link                      894
called for preprocessing          338695
compiler produced no output           28
unsupported code directive            71
no input file                      37895
cleanups performed                     0
files in cache                     98068
cache size                           3.3 GB
max cache size                       5.0 GB

./build-restart took:

real	21m20.353s
user	150m30.031s
sys	21m16.823s
@constantoverride commented Aug 28, 2018

Let's hope that's not the cache for all file reads, but rather just the executables/shared objects.
Because the cache for all files is supposedly Buffers ("Buffers: The amount of physical RAM, in kilobytes, used for file buffers.")
unless Active(file) + Inactive(file) = Buffers ... let's see:

Buffers:          158992 kB
Cached:          3570248 kB
SwapCached:            0 kB
Active:          2068236 kB
Inactive:        2684780 kB
Active(anon):    1024860 kB
Inactive(anon):    18848 kB
Active(file):    1043376 kB
Inactive(file):  2665932 kB

That doesn't look good... 1G of Active(file)... that means it's caching every file! Dang it! Well, that change would be useless then.
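For the record, the arithmetic behind that conclusion (values copied from the /proc/meminfo snapshot above, in kB):

```shell
#!/bin/sh
# values from the /proc/meminfo snapshot above, in kB
buffers=158992
cached=3570248
active_file=1043376
inactive_file=2665932

file_lru=$((active_file + inactive_file))
echo "Active(file) + Inactive(file) = $file_lru kB"
echo "Buffers + Cached              = $((buffers + cached)) kB"
# 3709308 vs 3729240: the file LRU is roughly Buffers + Cached (the small gap
# is shmem and similar pages), i.e. it holds the whole page cache, not just
# executable pages
```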

@constantoverride commented Aug 28, 2018

$ grep -nHiF VM_EXEC -- `grep -nrIE 'LRU_(IN)?ACTIVE_FILE'|cut -f1 -d':'|sort -u`
include/trace/events/mmflags.h:135:	{VM_EXEC,			"exec"		},		\
mm/vmscan.c:881:		if (vm_flags & VM_EXEC)
mm/vmscan.c:1969:			 * IO, plus JVM can create lots of anon VM_EXEC pages,
mm/vmscan.c:1972:			if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
@constantoverride commented Aug 28, 2018

Just one more trip around the active list? Why not forever? https://stackoverflow.com/questions/52067753/how-to-keep-executable-code-in-memory-even-under-memory-pressure-in-linux

                if (page_referenced(page, 0, sc->target_mem_cgroup,
                                    &vm_flags)) {
                        nr_rotated += hpage_nr_pages(page);
                        /*
                         * Identify referenced, file-backed active pages and
                         * give them one more trip around the active list. So
                         * that executable code get better chances to stay in
                         * memory under moderate memory pressure.  Anon pages
                         * are not likely to be evicted by use-once streaming
                         * IO, plus JVM can create lots of anon VM_EXEC pages,
                         * so we ignore them here.
                         */
                        if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
                                list_add(&page->lru, &l_active);
                                continue;
                        }
                }
@constantoverride commented Aug 29, 2018

well I can't believe this works: constantoverride/qubes-linux-kernel@e656178
that is, both patches le9 and le9b!

It was easy!
Now I wonder what the side effects are ...

@constantoverride commented Aug 30, 2018

So, with the patch, it takes less than 1 second (with a read spike of 4M/sec during that time, seen via sudo iotop in dom0) to trigger the OOM-killer, instead of minutes of disk thrashing. Tested with 4000M max RAM.

 0:55.92    Compiling style_traits v0.0.1 (file:///home/user/rpmbuild/BUILD/firefox-61.0.2/servo/components/style_traits)
 1:20.28    Compiling style v0.0.1 (file:///home/user/rpmbuild/BUILD/firefox-61.0.2/servo/components/style)
 4:22.82 error: Could not compile `style`.
 4:22.82 To learn more, run the command again with --verbose.
 4:22.82 gmake[4]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:950: force-cargo-library-build] Error 101
 4:22.82 gmake[3]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:73: toolkit/library/rust/target] Error 2
 4:22.82 gmake[2]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/recurse.mk:33: compile] Error 2
 4:22.82 gmake[1]: *** [/home/user/rpmbuild/BUILD/firefox-61.0.2/config/rules.mk:418: default] Error 2
 4:22.82 gmake: *** [client.mk:172: build] Error 2
 4:22.89 0 compiler warnings present.
[  563.095690] rustc invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  563.095707] rustc cpuset=/ mems_allowed=0
[  563.095730] CPU: 6 PID: 24488 Comm: rustc Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[  563.095741] Call Trace:
[  563.095762]  dump_stack+0x63/0x83
[  563.095784]  dump_header+0x6e/0x285
[  563.095790]  oom_kill_process+0x23c/0x450
[  563.095796]  out_of_memory+0x140/0x590
[  563.095802]  __alloc_pages_slowpath+0x134c/0x1590
[  563.095810]  __alloc_pages_nodemask+0x28b/0x2f0
[  563.095831]  alloc_pages_vma+0xac/0x4f0
[  563.095837]  do_anonymous_page+0x105/0x3f0
[  563.095858]  __handle_mm_fault+0xbc9/0xf10
[  563.095865]  ? do_mmap+0x463/0x5b0
[  563.095871]  handle_mm_fault+0x102/0x2c0
[  563.095883]  __do_page_fault+0x294/0x540
[  563.095889]  ? __audit_syscall_exit+0x2bf/0x3e0
[  563.095897]  do_page_fault+0x38/0x120
[  563.095903]  ? page_fault+0x8/0x30
[  563.095908]  page_fault+0x1e/0x30
[  563.095915] RIP: 0033:0x728e87e5e20c
[  563.095920] Code: 41 0f 11 7c 0d 10 41 0f 11 74 0d 20 4c 39 f3 74 66 4c 89 f2 4c 89 e9 0f 1f 80 00 00 00 00 f3 0f 6f 02 48 83 c2 30 48 83 c1 30 <0f> 11 41 d0 f3 0f 6f 4a e0 0f 11 49 e0 f3 0f 6f 52 f0 0f 11 51 f0 
[  563.095958] RSP: 002b:0000728e7f33df90 EFLAGS: 00010206
[  563.095965] RAX: 0000728e48fff040 RBX: 0000728e17000040 RCX: 0000728e4934b030
[  563.095975] RDX: 0000728e15b4c060 RSI: 0000000003001000 RDI: 0000000000000000
[  563.095985] RBP: 0000728e17000040 R08: 00000000ffffffff R09: 0000000000000000
[  563.095995] R10: 0000000000000022 R11: 0000000000000246 R12: 0000728e24bfe738
[  563.096009] R13: 0000728e48fff010 R14: 0000728e15800040 R15: 0000728e4bfff010
[  563.096067] Mem-Info:
[  563.096091] active_anon:819753 inactive_anon:18356 isolated_anon:0
                active_file:68597 inactive_file:0 isolated_file:0
                unevictable:6944 dirty:8 writeback:0 unstable:0
                slab_reclaimable:26004 slab_unreclaimable:13835
                mapped:51734 shmem:18358 pagetables:4537 bounce:0
                free:15092 free_pcp:61 free_cma:0
[  563.096156] Node 0 active_anon:3279012kB inactive_anon:73424kB active_file:274388kB inactive_file:0kB unevictable:27776kB isolated(anon):0kB isolated(file):0kB mapped:206936kB dirty:32kB writeback:0kB shmem:73432kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  563.096200] Node 0 DMA free:15680kB min:176kB low:1764kB high:3352kB active_anon:212kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  563.096242] lowmem_reserve[]: 0 3876 3876 3876 3876
[  563.096276] Node 0 DMA32 free:44688kB min:44876kB low:442020kB high:839164kB active_anon:3278464kB inactive_anon:73424kB active_file:274192kB inactive_file:0kB unevictable:27776kB writepending:32kB present:4079616kB managed:3971476kB mlocked:27776kB kernel_stack:5232kB pagetables:18144kB bounce:0kB free_pcp:244kB local_pcp:240kB free_cma:0kB
[  563.096365] lowmem_reserve[]: 0 0 0 0 0
[  563.096373] Node 0 DMA: 2*4kB (UM) 1*8kB (M) 1*16kB (U) 1*32kB (M) 2*64kB (U) 1*128kB (U) 2*256kB (UM) 1*512kB (M) 2*1024kB (UM) 0*2048kB 3*4096kB (M) = 15680kB
[  563.096422] Node 0 DMA32: 1357*4kB (UME) 356*8kB (UE) 303*16kB (UME) 195*32kB (UE) 118*64kB (UM) 51*128kB (UM) 29*256kB (UM) 9*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 45476kB
[  563.096488] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  563.096516] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  563.096544] 86977 total pagecache pages
[  563.096552] 1023903 pages RAM
[  563.096575] 0 pages HighMem/MovableOnly
[  563.096581] 27058 pages reserved
[  563.096586] 0 pages cma reserved
[  563.096591] 0 pages hwpoisoned
[  563.096596] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  563.096615] [  278]     0   278    31113     5323   241664        0             0 systemd-journal
[  563.096632] [  295]     0   295    30805      584   163840        0             0 qubesdb-daemon
[  563.096646] [  325]     0   325    23557     1961   212992        0         -1000 systemd-udevd
[  563.096660] [  448]     0   448    19356     1536   196608        0             0 systemd-logind
[  563.096676] [  449]    81   449    13254     1173   159744        0          -900 dbus-daemon
[  563.096703] [  452]     0   452     3042     1195    69632        0             0 haveged
[  563.096716] [  454]     0   454    10243       74   118784        0             0 meminfo-writer
[  563.096733] [  466]     0   466    34209      645   180224        0             0 xl
[  563.096747] [  482]     0   482    18919      921   196608        0             0 qubes-gui
[  563.096761] [  483]     0   483    16536      825   167936        0             0 qrexec-agent
[  563.096775] [  488]     0   488    52775      527    65536        0             0 agetty
[  563.096789] [  490]     0   490    52863      389    65536        0             0 agetty
[  563.096803] [  567]     0   567    73994     1315   225280        0             0 su
[  563.096815] [  574]  1000   574    21933     2062   217088        0             0 systemd
[  563.096828] [  575]  1000   575    34788      612   299008        0             0 (sd-pam)
[  563.096841] [  580]  1000   580    54160      849    86016        0             0 bash
[  563.296757] [  601]  1000   601     3500      275    73728        0             0 xinit
[  563.296769] [  602]  1000   602   316290    26304   712704        0             0 Xorg
[  563.296780] [  617]  1000   617    53597      741    77824        0             0 qubes-session
[  563.296808] [  622]  1000   622    13196     1085   139264        0             0 dbus-daemon
[  563.296835] [  634]  1000   634     7242      118    90112        0             0 ssh-agent
[  563.296845] [  720]  1000   720    16562      572   176128        0             0 qrexec-client-v
[  563.296858] [  749]  1000   749    48107     1279   143360        0             0 dconf-service
[  563.296871] [  753]  1000   753   428392    12318   811008        0             0 gsd-xsettings
[  563.296884] [  755]  1000   755   138640     1356   200704        0             0 agent
[  563.296895] [  756]  1000   756    62744     2957   143360        0             0 icon-sender
[  563.296908] [  757]  1000   757   122405     1485   192512        0             0 gnome-keyring-d
[  563.296920] [  773]  1000   773   438415    13981   909312        0             0 nm-applet
[  563.296931] [  783]  1000   783   128956     2127   409600        0             0 pulseaudio
[  563.296944] [  788]   172   788    47723      785   143360        0             0 rtkit-daemon
[  563.296956] [  791]   998   791   657135     5375   409600        0             0 polkitd
[  563.296967] [  794]  1000   794    16528      101   167936        0             0 qrexec-fork-ser
[  563.296979] [  798]  1000   798    52238      202    65536        0             0 sleep
[  563.296990] [  911]  1000   911    87397     1566   180224        0             0 at-spi-bus-laun
[  563.297021] [  916]  1000   916    13134      925   139264        0             0 dbus-daemon
[  563.297039] [  920]  1000   920    56364     1533   217088        0             0 at-spi2-registr
[  563.297057] [  925]  1000   925   211056    12187   593920        0             0 gnome-terminal-
[  563.297073] [  928]  1000   928   123835     1835   212992        0             0 gvfsd
[  563.297089] [  939]  1000   939    89299     1334   188416        0             0 gvfsd-fuse
[  563.297109] [  950]  1000   950   169492     2845   290816        0             0 xdg-desktop-por
[  563.297127] [  954]  1000   954   173633     1504   200704        0             0 xdg-document-po
[  563.297144] [  958]  1000   958   117667     1235   167936        0             0 xdg-permission-
[  563.297162] [  968]  1000   968   193290     5084   483328        0             0 xdg-desktop-por
[  563.297179] [  977]  1000   977    54288     1052    77824        0             0 bash
[  563.297194] [ 1016]  1000  1016    53876      255    81920        0             0 dmesg
[  563.297208] [ 1019]  1000  1019    54288     1072    90112        0             0 bash
[  563.297223] [ 1065]  1000  1065    54288     1029    94208        0             0 bash
[  563.297237] [ 1090]  1000  1090    53956      747    77824        0             0 watch
[  563.297253] [ 2458]  1000  2458    54288     1061    81920        0             0 bash
[  563.297267] [21105]  1000 21105    53597      776    77824        0             0 go
[  563.297282] [21124]     0 21124    80041     1663   278528        0             0 sudo
[  563.297296] [21128]     0 21128    53876      267    86016        0             0 dmesg
[  563.297311] [21129]  1000 21129    65645     2317   159744        0             0 rpmbuild
[  563.297326] [21234]  1000 21234    53597      792    77824        0             0 sh
[  563.297341] [21257]  1000 21257    67164     6625   290816        0             0 python2.7
[  563.297355] [21278]  1000 21278    30538     5569   270336        0             0 python2.7
[  563.297370] [21322]  1000 21322     9595     1299   118784        0             0 gmake
[  563.297384] [21325]  1000 21325     9526     1213   122880        0             0 gmake
[  563.297399] [21457]  1000 21457     9521     1260   118784        0             0 gmake
[  563.297414] [21461]  1000 21461     9587     1267   114688        0             0 gmake
[  563.297429] [21504]  1000 21504     9145      876   114688        0             0 gmake
[  563.297443] [21512]  1000 21512   148193     8240   327680        0             0 cargo
[  563.297458] [23738]  1000 23738   922184   796761  6893568        0             0 rustc
[  563.297473] [25511]  1000 25511    63291     1098   159744        0             0 top
[  563.497463] Out of memory: Kill process 23738 (rustc) score 800 or sacrifice child
[  563.497484] Killed process 23738 (rustc) total-vm:3688736kB, anon-rss:3073192kB, file-rss:113852kB, shmem-rss:0kB
[  563.579382] oom_reaper: reaped process 23738 (rustc), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[  563.771618] audit: type=1101 audit(1535596003.193:235): pid=27065 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[  563.771908] audit: type=1123 audit(1535596003.193:236): pid=27065 uid=1000 auid=1000 ses=1 msg='cwd="/home/user/rpmbuild" cmd=73797363746C20766D2E626C6F636B5F64756D703D30 terminal=pts/3 res=success'
[  563.771934] audit: type=1110 audit(1535596003.193:237): pid=27065 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
Active(file):     283972 kB
Inactive(file):     7980 kB
Unevictable:	   62176 kB
Mlocked:           62176 kB

I notice that the number in /proc/meminfo is consistently higher than the one the OOM killer reports, and it does not drop and then rise again right after the OOM kill.
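To cross-check figures like the Unevictable/Mlocked pair quoted above against the OOM killer's report, it helps to read /proc/meminfo programmatically. A minimal sketch (the sample text below reuses the values pasted above; on a live system you would read the real /proc/meminfo instead):

```python
def parse_meminfo(text):
    """Return {field: kilobytes} for lines like 'Mlocked:   62176 kB'."""
    info = {}
    for line in text.splitlines():
        if ':' not in line:
            continue
        key, _, rest = line.partition(':')
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])
    return info

# Sample with the figures quoted above; live use: open('/proc/meminfo').read()
sample = """\
Active(file):     283972 kB
Inactive(file):     7980 kB
Unevictable:       62176 kB
Mlocked:           62176 kB
"""
info = parse_meminfo(sample)
print(info['Unevictable'], info['Mlocked'])  # both 62176
```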

@constantoverride (Owner) commented Aug 30, 2018

Using memalloc_watchdog.patch (without le9b.patch), I get about 1 second of stalling (terminals running commands like watch -n0.1 -d cat /proc/meminfo don't refresh for that second):

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 3 --timeout 10s; echo $?
stress: info: [3656] dispatching hogs: 0 cpu, 0 io, 3 vm, 0 hdd
stress: FAIL: [3656] (415) <-- worker 3659 got signal 9
stress: WARN: [3656] (417) now reaping child worker processes
stress: FAIL: [3656] (451) failed run completed in 4s

real	0m3.248s
user	0m0.433s
sys	0m3.158s
1
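The --vm-bytes argument in the command above asks stress to allocate MemAvailable plus 4000 kB, i.e. slightly more than is available, which reliably triggers the OOM path. The awk one-liner's arithmetic, sketched in Python for clarity (vm_bytes_arg is a hypothetical helper name, not part of stress):

```python
def vm_bytes_arg(meminfo_text, overshoot_kb=4000):
    """Reproduce: awk '/MemAvailable/{printf "%d\\n", $2 + 4000;}' plus the 'k' suffix."""
    for line in meminfo_text.splitlines():
        if line.startswith('MemAvailable:'):
            avail_kb = int(line.split()[1])
            return '%dk' % (avail_kb + overshoot_kb)
    raise ValueError('MemAvailable not found')

# Example with a made-up MemAvailable value:
print(vm_bytes_arg('MemAvailable:  1048576 kB'))  # -> 1052576k
```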
[  167.656499] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  167.656525] stress cpuset=/ mems_allowed=0
[  167.656536] CPU: 0 PID: 3657 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[  167.656554] Call Trace:
[  167.656564]  dump_stack+0x63/0x83
[  167.656574]  dump_header+0x6e/0x285
[  167.656584]  oom_kill_process+0x23c/0x450
[  167.656593]  out_of_memory+0x147/0x590
[  167.656602]  __alloc_pages_slowpath+0x134c/0x1590
[  167.656613]  __alloc_pages_nodemask+0x302/0x3c0
[  167.656625]  alloc_pages_vma+0xac/0x4f0
[  167.656635]  do_anonymous_page+0x105/0x3f0
[  167.656644]  __handle_mm_fault+0xbc9/0xf10
[  167.656653]  handle_mm_fault+0x102/0x2c0
[  167.656662]  __do_page_fault+0x294/0x540
[  167.656672]  ? page_fault+0x8/0x30
[  167.656679]  do_page_fault+0x38/0x120
[  167.656687]  ? page_fault+0x8/0x30
[  167.656695]  page_fault+0x1e/0x30
[  167.656705] RIP: 0033:0x564c82144dd0
[  167.656712] Code: Bad RIP value.
[  167.656723] RSP: 002b:00007ffd3f50c6b0 EFLAGS: 00010206
[  167.656734] RAX: 00000000a5490000 RBX: 00007556cffe9010 RCX: 00007556cffe9010
[  167.656745] RDX: 0000000000000001 RSI: 00000001696d3000 RDI: 0000000000000000
[  167.656755] RBP: 0000564c82145bb4 R08: 00000000ffffffff R09: 0000000000000000
[  167.656766] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  167.656776] R13: 0000000000000002 R14: 0000000000001000 R15: 00000001696d2000
[  167.656794] Mem-Info:
[  167.656802] active_anon:1966716 inactive_anon:4158 isolated_anon:0
                active_file:139 inactive_file:16 isolated_file:0
                unevictable:11358 dirty:0 writeback:0 unstable:0
                slab_reclaimable:6102 slab_unreclaimable:12608
                mapped:886 shmem:4293 pagetables:6100 bounce:0
                free:40760 free_pcp:576 free_cma:0
[  167.656867] Node 0 active_anon:7866864kB inactive_anon:16632kB active_file:556kB inactive_file:64kB unevictable:45432kB isolated(anon):0kB isolated(file):0kB mapped:3544kB dirty:0kB writeback:0kB shmem:17172kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  167.656916] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  167.656967] lowmem_reserve[]: 0 3956 23499 23499 23499
[  167.656984] Node 0 DMA32 free:89276kB min:11368kB low:15416kB high:19464kB active_anon:3971832kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:20kB pagetables:7808kB bounce:0kB free_pcp:328kB local_pcp:192kB free_cma:0kB
[  167.657046] lowmem_reserve[]: 0 0 19543 19543 19543
[  167.657057] Node 0 Normal free:57860kB min:56168kB low:76180kB high:96192kB active_anon:3894044kB inactive_anon:16632kB active_file:624kB inactive_file:712kB unevictable:45432kB writepending:0kB present:20400128kB managed:4250296kB mlocked:45432kB kernel_stack:4992kB pagetables:16592kB bounce:0kB free_pcp:1848kB local_pcp:156kB free_cma:0kB
[  167.657118] lowmem_reserve[]: 0 0 0 0 0
[  167.657126] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  167.657159] Node 0 DMA32: 12*4kB (UME) 25*8kB (UE) 33*16kB (UME) 27*32kB (UME) 17*64kB (UE) 14*128kB (UME) 6*256kB (ME) 3*512kB (UE) 2*1024kB (UM) 1*2048kB (M) 19*4096kB (ME) = 89512kB
[  167.657199] Node 0 Normal: 301*4kB (UMEH) 303*8kB (UMEH) 294*16kB (UEH) 212*32kB (UMEH) 147*64kB (UE) 149*128kB (UME) 51*256kB (U) 0*512kB 1*1024kB (U) 0*2048kB 0*4096kB = 57676kB
[  167.657231] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  167.657247] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  167.657263] 4542 total pagecache pages
[  167.657272] 6143894 pages RAM
[  167.657280] 0 pages HighMem/MovableOnly
[  167.657286] 4059810 pages reserved
[  167.657294] 0 pages cma reserved
[  167.657301] 0 pages hwpoisoned
[  167.657309] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  167.657337] [  278]     0   278    24952      946   204800        0             0 systemd-journal
[  167.657357] [  299]     0   299    30805       88   163840        0             0 qubesdb-daemon
[  167.657374] [  315]     0   315    23536      523   217088        0         -1000 systemd-udevd
[  167.657390] [  470]     0   470     3042      781    69632        0             0 haveged
[  167.657404] [  473]     0   473    19357      183   188416        0             0 systemd-logind
[  167.657423] [  478]    81   478    13254      216   147456        0          -900 dbus-daemon
[  167.657441] [  480]     0   480    10243       75   118784        0             0 meminfo-writer
[  167.657459] [  485]     0   485    34209      120   176128        0             0 xl
[  167.657476] [  489]     0   489    18919      146   196608        0             0 qubes-gui
[  167.856463] [  490]     0   490    16536      106   172032        0             0 qrexec-agent
[  167.856494] [  492]     0   492    73994      189   237568        0             0 su
[  167.856518] [  511]     0   511    52775       29    69632        0             0 agetty
[  167.856544] [  514]  1000   514    21961      335   221184        0             0 systemd
[  167.856568] [  516]     0   516    52863       28    61440        0             0 agetty
[  167.856595] [  522]  1000   522    34754      597   290816        0             0 (sd-pam)
[  167.856624] [  577]  1000   577    54160       88    81920        0             0 bash
[  167.856652] [  618]  1000   618     3500       30    77824        0             0 xinit
[  167.856679] [  619]  1000   619   310890    17119   704512        0             0 Xorg
[  167.856701] [  706]  1000   706    53597       67    69632        0             0 qubes-session
[  167.856730] [  716]  1000   716    13195      153   151552        0             0 dbus-daemon
[  167.856753] [  734]  1000   734     7242      118    94208        0             0 ssh-agent
[  167.856776] [  752]  1000   752    16562      104   176128        0             0 qrexec-client-v
[  167.856806] [  769]  1000   769    48107      146   147456        0             0 dconf-service
[  167.856830] [  774]  1000   774   428396     2599   827392        0             0 gsd-xsettings
[  167.856858] [  775]  1000   775    62744     1791   147456        0             0 icon-sender
[  167.856881] [  776]  1000   776   122406      247   188416        0             0 gnome-keyring-d
[  167.856907] [  779]  1000   779   120207      120   184320        0             0 agent
[  167.856933] [  791]  1000   791   438406     2853   897024        0             0 nm-applet
[  167.856954] [  796]  1000   796   128956      340   401408        0             0 pulseaudio
[  167.856972] [  799]   172   799    47723       80   143360        0             0 rtkit-daemon
[  167.856994] [  809]   998   809   657134     1538   421888        0             0 polkitd
[  167.857023] [  814]  1000   814    16528      101   167936        0             0 qrexec-fork-ser
[  167.857043] [  817]  1000   817    52238       16    61440        0             0 sleep
[  167.857059] [  894]  1000   894    87397      165   180224        0             0 at-spi-bus-laun
[  167.857082] [  899]  1000   899    13134      116   147456        0             0 dbus-daemon
[  167.857101] [  911]  1000   911    56364      195   208896        0             0 at-spi2-registr
[  167.857120] [  922]  1000   922   123835      203   208896        0             0 gvfsd
[  167.857137] [  948]  1000   948    89299      141   188416        0             0 gvfsd-fuse
[  167.857156] [  966]  1000   966   207212     3438   593920        0             0 gnome-terminal-
[  167.857175] [  972]  1000   972   169492      363   290816        0             0 xdg-desktop-por
[  167.857195] [  976]  1000   976   157249      146   204800        0             0 xdg-document-po
[  167.857213] [  979]  1000   979   117667      114   167936        0             0 xdg-permission-
[  167.857232] [  990]  1000   990   193292     1123   471040        0             0 xdg-desktop-por
[  167.857250] [  998]  1000   998    54289      252    77824        0             0 bash
[  167.857267] [ 1027]  1000  1027    54289      252    81920        0             0 bash
[  167.857284] [ 1052]     0  1052    80041      245   282624        0             0 sudo
[  167.857303] [ 1053]     0  1053    53876       63    81920        0             0 dmesg
[  167.857322] [ 1061]  1000  1061    54289      252    90112        0             0 bash
[  167.857338] [ 1086]  1000  1086    53923      106    73728        0             0 watch
[  168.056361] [ 3656]  1000  3656     2000       20    61440        0             0 stress
[  168.056399] [ 3657]  1000  3657  1482403   677009  5492736        0             0 stress
[  168.056425] [ 3658]  1000  3658  1482403   579096  4706304        0             0 stress
[  168.056450] [ 3659]  1000  3659  1482403   855406  6922240        0             0 stress
[  168.056479] Out of memory: Kill process 3659 (stress) score 328 or sacrifice child
[  168.056507] Killed process 3659 (stress) total-vm:5929612kB, anon-rss:3421868kB, file-rss:20kB, shmem-rss:0kB
[  169.183090] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=0 oom_count=3
[  169.183107] MemAlloc: kswapd0(109) flags=0xa20840 switches=585
[  169.183118] kswapd0         S    0   109      2 0x80000000
[  169.183127] Call Trace:
[  169.183138]  ? __schedule+0x3f3/0x8c0
[  169.183145]  schedule+0x36/0x80
[  169.183153]  kswapd+0x584/0x590
[  169.183161]  ? remove_wait_queue+0x70/0x70
[  169.183168]  kthread+0x105/0x140
[  169.183176]  ? balance_pgdat+0x3e0/0x3e0
[  169.183182]  ? kthread_stop+0x100/0x100
[  169.183191]  ret_from_fork+0x35/0x40
[  169.183205] MemAlloc: stress(3658) flags=0x404040 switches=5 seq=3116 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1899 uninterruptible dying
[  169.183227] stress          D    0  3658   3656 0x00000084
[  169.183237] Call Trace:
[  169.183244]  ? __schedule+0x3f3/0x8c0
[  169.183252]  ? __switch_to_asm+0x40/0x70
[  169.183260]  ? __switch_to_asm+0x34/0x70
[  169.183268]  schedule+0x36/0x80
[  169.183277]  schedule_timeout+0x29b/0x4d0
[  169.183290]  ? __switch_to+0x13f/0x4d0
[  169.183297]  ? __switch_to_asm+0x40/0x70
[  169.183305]  ? finish_task_switch+0x75/0x2a0
[  169.183315]  wait_for_completion+0x121/0x190
[  169.183325]  ? wake_up_q+0x80/0x80
[  169.183334]  flush_work+0x18f/0x200
[  169.183343]  ? rcu_free_pwq+0x20/0x20
[  169.183353]  __alloc_pages_slowpath+0x766/0x1590
[  169.183365]  __alloc_pages_nodemask+0x302/0x3c0
[  169.183376]  alloc_pages_vma+0xac/0x4f0
[  169.183384]  ? lru_cache_add+0x134/0x1b0
[  169.183391]  do_anonymous_page+0x105/0x3f0
[  169.183400]  __handle_mm_fault+0xbc9/0xf10
[  169.183408]  handle_mm_fault+0x102/0x2c0
[  169.183418]  __do_page_fault+0x294/0x540
[  169.183430]  do_page_fault+0x38/0x120
[  169.183438]  ? page_fault+0x8/0x30
[  169.183446]  page_fault+0x1e/0x30
[  169.183454] RIP: 0033:0x564c82144dd0
[  169.183460] Code: Bad RIP value.
[  169.183470] RSP: 002b:00007ffd3f50c6b0 EFLAGS: 00010206
[  169.183479] RAX: 000000008d601000 RBX: 00007556cffe9010 RCX: 00007556cffe9010
[  169.183492] RDX: 0000000000000001 RSI: 00000001696d3000 RDI: 0000000000000000
[  169.183506] RBP: 0000564c82145bb4 R08: 00000000ffffffff R09: 0000000000000000
[  169.183520] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  169.183534] R13: 0000000000000002 R14: 0000000000001000 R15: 00000001696d2000
[  169.183547] Mem-Info:
[  169.183555] active_anon:607612 inactive_anon:4158 isolated_anon:0
                active_file:1221 inactive_file:3712 isolated_file:0
                unevictable:11358 dirty:0 writeback:0 unstable:0
                slab_reclaimable:6091 slab_unreclaimable:12613
                mapped:3826 shmem:4293 pagetables:3527 bounce:0
                free:2177187 free_pcp:1361 free_cma:0
[  169.183609] Node 0 active_anon:2430448kB inactive_anon:16632kB active_file:4884kB inactive_file:14848kB unevictable:45432kB isolated(anon):0kB isolated(file):0kB mapped:15304kB dirty:0kB writeback:0kB shmem:17172kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  169.183651] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  169.183690] lowmem_reserve[]: 0 3956 23499 23499 23499
[  169.183700] Node 0 DMA32 free:2739408kB min:11368kB low:15416kB high:19464kB active_anon:1326560kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:2584kB bounce:0kB free_pcp:1480kB local_pcp:0kB free_cma:0kB
[  169.183743] lowmem_reserve[]: 0 0 19543 19543 19543
[  169.183753] Node 0 Normal free:5953436kB min:56168kB low:76180kB high:96192kB active_anon:1104180kB inactive_anon:16632kB active_file:4984kB inactive_file:15020kB unevictable:45432kB writepending:0kB present:20400128kB managed:7371448kB mlocked:45432kB kernel_stack:4960kB pagetables:11524kB bounce:0kB free_pcp:3964kB local_pcp:56kB free_cma:0kB
[  169.183798] lowmem_reserve[]: 0 0 0 0 0
[  169.183807] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  169.183831] Node 0 DMA32: 10744*4kB (UME) 10762*8kB (UME) 10748*16kB (UME) 10741*32kB (UME) 10719*64kB (UME) 9767*128kB (UME) 281*256kB (UME) 5*512kB (UME) 2*1024kB (UM) 0*2048kB 20*4096kB (ME) = 2739408kB
[  169.183863] Node 0 Normal: 13451*4kB (UMEH) 12409*8kB (UMEH) 11850*16kB (UMEH) 11012*32kB (UMEH) 9548*64kB (UME) 6478*128kB (UME) 703*256kB (UM) 266*512kB (UM) 188*1024kB (UM) 120*2048kB (UM) 748*4096kB (UM) = 5953556kB
[  169.183897] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  169.383895] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  169.383909] 9304 total pagecache pages
[  169.383916] 6143894 pages RAM
[  169.383923] 0 pages HighMem/MovableOnly
[  169.383929] 3154082 pages reserved
[  169.383936] 0 pages cma reserved
[  169.383943] 0 pages hwpoisoned
[  169.383950] Showing busy workqueues and worker pools:
[  169.383960] workqueue events: flags=0x0
[  169.383966]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  169.383996]     in-flight: 106:balloon_process
[  169.384015]     pending: balloon_process
[  169.384034] workqueue mm_percpu_wq: flags=0x8
[  169.384043]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  169.384054]     pending: drain_local_pages_wq BAR(3658), vmstat_update
[  169.384076] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=2s workers=3 idle: 1058 19
[  169.384100] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=0 oom_count=3

@constantoverride (Owner) commented Aug 30, 2018

Here's the same but with -m 2 instead of -m 3 (I forgot to mention that max RAM was set to 24000MB for the qube):
not only was the disk thrashing much more severe during roughly 3 seconds of stalling, but it also killed the running watch -n0.1 -d cat /proc/meminfo command, which died with: Slab: 76160 kB watch: unable to fork process: Cannot allocate memory

(Actually, the dmesg below also covers the -m 1 run, which did not get OOM-killed yet was still detected as hanging (for 9s?) — or maybe that run produced no dmesg output at all, as seen in my next comment, in which case all the dmesg output from 756.1s to 758.2s is actually from the -m 2 stress command. But then why does it say [  757.745018] flags=0x0 nice=0 hung=9s workers=2 idle: 1058 — isn't hung=9s nine seconds, when the command only ran for 6.6s?)

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; echo $?
stress: info: [6732] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [6732] successful run completed in 10s

real	0m10.246s
user	0m8.842s
sys	0m1.305s
0

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?
stress: info: [7004] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [7004] (415) <-- worker 7005 got signal 9
stress: WARN: [7004] (417) now reaping child worker processes
stress: FAIL: [7004] (451) failed run completed in 7s

real	0m6.666s
user	0m1.396s
sys	0m7.593s
1
[  756.127088] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=3
[  756.127116] MemAlloc: kswapd0(109) flags=0xa20840 switches=1295
[  756.127129] kswapd0         R  running task        0   109      2 0x80000000
[  756.127144] Call Trace:
[  756.127153]  ? shrink_node+0x171/0x4b0
[  756.127160]  ? balance_pgdat+0x238/0x3e0
[  756.127169]  ? kswapd+0x1b5/0x590
[  756.127177]  ? remove_wait_queue+0x70/0x70
[  756.127185]  ? kthread+0x105/0x140
[  756.127193]  ? balance_pgdat+0x3e0/0x3e0
[  756.127201]  ? kthread_stop+0x100/0x100
[  756.127211]  ? ret_from_fork+0x35/0x40
[  756.127228] MemAlloc: stress(7006) flags=0x404040 switches=9 seq=6719 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1126 uninterruptible
[  756.127251] stress          D    0  7006   7004 0x00000080
[  756.127261] Call Trace:
[  756.127268]  ? __schedule+0x3f3/0x8c0
[  756.127276]  ? __switch_to_asm+0x40/0x70
[  756.127284]  ? __switch_to_asm+0x34/0x70
[  756.127291]  schedule+0x36/0x80
[  756.127299]  schedule_timeout+0x29b/0x4d0
[  756.127307]  ? __switch_to+0x13f/0x4d0
[  756.127313]  ? __switch_to_asm+0x40/0x70
[  756.127321]  ? finish_task_switch+0x75/0x2a0
[  756.127331]  wait_for_completion+0x121/0x190
[  756.127344]  ? wake_up_q+0x80/0x80
[  756.127351]  flush_work+0x18f/0x200
[  756.127358]  ? rcu_free_pwq+0x20/0x20
[  756.127367]  __alloc_pages_slowpath+0x766/0x1590
[  756.127379]  __alloc_pages_nodemask+0x302/0x3c0
[  756.127391]  alloc_pages_vma+0xac/0x4f0
[  756.127399]  do_anonymous_page+0x105/0x3f0
[  756.127406]  __handle_mm_fault+0xbc9/0xf10
[  756.127414]  ? __switch_to_asm+0x34/0x70
[  756.127423]  ? __switch_to_asm+0x34/0x70
[  756.127431]  handle_mm_fault+0x102/0x2c0
[  756.127439]  __do_page_fault+0x294/0x540
[  756.127447]  do_page_fault+0x38/0x120
[  756.127455]  ? page_fault+0x8/0x30
[  756.127463]  page_fault+0x1e/0x30
[  756.127471] RIP: 0033:0x5d9437de4dd0
[  756.127478] Code: Bad RIP value.
[  756.127487] RSP: 002b:00007ffec4e2db40 EFLAGS: 00010206
[  756.127497] RAX: 00000001b45bc000 RBX: 000073a7dcffa010 RCX: 000073a7dcffa010
[  756.127512] RDX: 0000000000000001 RSI: 000000033f0ea000 RDI: 0000000000000000
[  756.127527] RBP: 00005d9437de5bb4 R08: 00000000ffffffff R09: 0000000000000000
[  756.127543] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  756.127565] R13: 0000000000000002 R14: 0000000000001000 R15: 000000033f0e9000
[  756.127583] Mem-Info:
[  756.127591] active_anon:4221753 inactive_anon:4670 isolated_anon:0
                active_file:212 inactive_file:31 isolated_file:0
                unevictable:14734 dirty:0 writeback:0 unstable:0
                slab_reclaimable:6091 slab_unreclaimable:12857
                mapped:1351 shmem:4810 pagetables:10690 bounce:0
                free:41768 free_pcp:581 free_cma:0
[  756.127649] Node 0 active_anon:16887012kB inactive_anon:18680kB active_file:848kB inactive_file:124kB unevictable:58936kB isolated(anon):0kB isolated(file):0kB mapped:5404kB dirty:0kB writeback:0kB shmem:19240kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  756.127696] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  756.127735] lowmem_reserve[]: 0 3956 23499 23499 23499
[  756.127745] Node 0 DMA32 free:89456kB min:11368kB low:15416kB high:19464kB active_anon:3972464kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7776kB bounce:0kB free_pcp:440kB local_pcp:0kB free_cma:0kB
[  756.127786] lowmem_reserve[]: 0 0 19543 19543 19543
[  756.127795] Node 0 Normal free:61092kB min:56168kB low:76180kB high:96192kB active_anon:12915204kB inactive_anon:18680kB active_file:644kB inactive_file:1080kB unevictable:58936kB writepending:0kB present:20400128kB managed:13308616kB mlocked:58936kB kernel_stack:4900kB pagetables:34984kB bounce:0kB free_pcp:1936kB local_pcp:0kB free_cma:0kB
[  756.127834] lowmem_reserve[]: 0 0 0 0 0
[  756.127844] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  756.127870] Node 0 DMA32: 25*4kB (UME) 47*8kB (UE) 31*16kB (UME) 24*32kB (UME) 13*64kB (UE) 10*128kB (UE) 6*256kB (UE) 2*512kB (E) 1*1024kB (M) 2*2048kB (UM) 19*4096kB (ME) = 89356kB
[  756.127897] Node 0 Normal: 385*4kB (UEH) 305*8kB (UMEH) 298*16kB (UME) 328*32kB (UEH) 205*64kB (UME) 176*128kB (UE) 10*256kB (UM) 3*512kB (U) 1*1024kB (U) 0*2048kB 0*4096kB = 60012kB
[  756.127925] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  756.127938] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  756.127950] 5178 total pagecache pages
[  756.127957] 6143894 pages RAM
[  756.127963] 0 pages HighMem/MovableOnly
[  756.127970] 1795230 pages reserved
[  756.127977] 0 pages cma reserved
[  756.328018] 0 pages hwpoisoned
[  756.328037] Showing busy workqueues and worker pools:
[  756.328050] workqueue events: flags=0x0
[  756.328062]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  756.328078]     in-flight: 106:balloon_process
[  756.328093]     pending: balloon_process
[  756.328113] workqueue mm_percpu_wq: flags=0x8
[  756.328122]   pwq 10: cpus=5 node=0 flags=0x0 nice=0 active=1/256
[  756.328133]     pending: vmstat_update
[  756.328142]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  756.328154]     pending: vmstat_update, drain_local_pages_wq BAR(7006)
[  756.328173] workqueue dm_bufio_cache: flags=0x8
[  756.328181]   pwq 10: cpus=5 node=0 flags=0x0 nice=0 active=1/256
[  756.328192]     pending: work_fn
[  756.328202] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=7s workers=2 idle: 1058
[  756.328217] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=3
[  757.343086] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=3
[  757.343114] MemAlloc: kswapd0(109) flags=0xa20840 switches=2186
[  757.343129] kswapd0         R  running task        0   109      2 0x80000000
[  757.343146] Call Trace:
[  757.343157]  ? shrink_node+0xd7/0x4b0
[  757.343167]  ? shrink_node+0x171/0x4b0
[  757.343178]  ? balance_pgdat+0x238/0x3e0
[  757.343188]  ? kswapd+0x1b5/0x590
[  757.343199]  ? remove_wait_queue+0x70/0x70
[  757.343208]  ? kthread+0x105/0x140
[  757.343218]  ? balance_pgdat+0x3e0/0x3e0
[  757.343228]  ? kthread_stop+0x100/0x100
[  757.343237]  ? ret_from_fork+0x35/0x40
[  757.343258] MemAlloc: stress(7006) flags=0x404040 switches=9 seq=6719 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=2342 uninterruptible
[  757.343286] stress          D    0  7006   7004 0x00000080
[  757.343299] Call Trace:
[  757.343308]  ? __schedule+0x3f3/0x8c0
[  757.343317]  ? __switch_to_asm+0x40/0x70
[  757.343327]  ? __switch_to_asm+0x34/0x70
[  757.343340]  schedule+0x36/0x80
[  757.343350]  schedule_timeout+0x29b/0x4d0
[  757.343360]  ? __switch_to+0x13f/0x4d0
[  757.343370]  ? __switch_to_asm+0x40/0x70
[  757.343380]  ? finish_task_switch+0x75/0x2a0
[  757.343393]  wait_for_completion+0x121/0x190
[  757.343405]  ? wake_up_q+0x80/0x80
[  757.343415]  flush_work+0x18f/0x200
[  757.343425]  ? rcu_free_pwq+0x20/0x20
[  757.343435]  __alloc_pages_slowpath+0x766/0x1590
[  757.343448]  __alloc_pages_nodemask+0x302/0x3c0
[  757.343460]  alloc_pages_vma+0xac/0x4f0
[  757.343475]  do_anonymous_page+0x105/0x3f0
[  757.343486]  __handle_mm_fault+0xbc9/0xf10
[  757.343496]  ? __switch_to_asm+0x34/0x70
[  757.343506]  ? __switch_to_asm+0x34/0x70
[  757.343515]  handle_mm_fault+0x102/0x2c0
[  757.343525]  __do_page_fault+0x294/0x540
[  757.343535]  do_page_fault+0x38/0x120
[  757.343544]  ? page_fault+0x8/0x30
[  757.343553]  page_fault+0x1e/0x30
[  757.343565] RIP: 0033:0x5d9437de4dd0
[  757.343573] Code: Bad RIP value.
[  757.343584] RSP: 002b:00007ffec4e2db40 EFLAGS: 00010206
[  757.343595] RAX: 00000001b45bc000 RBX: 000073a7dcffa010 RCX: 000073a7dcffa010
[  757.343611] RDX: 0000000000000001 RSI: 000000033f0ea000 RDI: 0000000000000000
[  757.343627] RBP: 00005d9437de5bb4 R08: 00000000ffffffff R09: 0000000000000000
[  757.343644] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  757.343661] R13: 0000000000000002 R14: 0000000000001000 R15: 000000033f0e9000
[  757.343679] Mem-Info:
[  757.343688] active_anon:4947984 inactive_anon:4670 isolated_anon:0
                active_file:230 inactive_file:117 isolated_file:0
                unevictable:14734 dirty:0 writeback:0 unstable:0
                slab_reclaimable:6115 slab_unreclaimable:12834
                mapped:1464 shmem:4810 pagetables:12069 bounce:0
                free:42528 free_pcp:665 free_cma:0
[  757.343754] Node 0 active_anon:19792228kB inactive_anon:18680kB active_file:920kB inactive_file:468kB unevictable:58936kB isolated(anon):0kB isolated(file):0kB mapped:5856kB dirty:0kB writeback:0kB shmem:19240kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  757.343806] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  757.343857] lowmem_reserve[]: 0 3956 23499 23499 23499
[  757.343871] Node 0 DMA32 free:89356kB min:11368kB low:15416kB high:19464kB active_anon:3972492kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7684kB bounce:0kB free_pcp:440kB local_pcp:0kB free_cma:0kB
[  757.343918] lowmem_reserve[]: 0 0 19543 19543 19543
[  757.343929] Node 0 Normal free:65628kB min:56168kB low:76180kB high:96192kB active_anon:15819800kB inactive_anon:18680kB active_file:788kB inactive_file:200kB unevictable:58936kB writepending:0kB present:20400128kB managed:16224968kB mlocked:58936kB kernel_stack:4880kB pagetables:40592kB bounce:0kB free_pcp:2168kB local_pcp:0kB free_cma:0kB
[  757.509304] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  757.543913] lowmem_reserve[]:
[  757.543932] stress cpuset=
[  757.543933]  0 0
[  757.543942] / mems_allowed=0
[  757.543949]  0
[  757.543956] CPU: 4 PID: 7005 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[  757.543962]  0 0
[  757.543967] Call Trace:
[  757.543985] Node 0 
[  757.543994]  dump_stack+0x63/0x83
[  757.543997]  dump_header+0x6e/0x285
[  757.544000] DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB 
[  757.544015]  oom_kill_process+0x23c/0x450
[  757.544017]  out_of_memory+0x147/0x590
[  757.544019]  __alloc_pages_slowpath+0x134c/0x1590
[  757.544022]  __alloc_pages_nodemask+0x302/0x3c0
[  757.544024]  alloc_pages_vma+0xac/0x4f0
[  757.544026]  do_anonymous_page+0x105/0x3f0
[  757.544034] (U) 
[  757.544045]  __handle_mm_fault+0xbc9/0xf10
[  757.544051] 1*128kB (U) 
[  757.544062]  handle_mm_fault+0x102/0x2c0
[  757.544070] 1*256kB (U) 
[  757.544085]  __do_page_fault+0x294/0x540
[  757.544089] 0*512kB 1*1024kB (U) 
[  757.544102]  do_page_fault+0x38/0x120
[  757.544105] 1*2048kB (M) 3*4096kB 
[  757.544113]  ? page_fault+0x8/0x30
[  757.544115]  page_fault+0x1e/0x30
[  757.544119] (M) = 15904kB
[  757.544128] RIP: 0033:0x5d9437de4dd0
[  757.544128] Code: 
[  757.544134] Node 0 DMA32: 
[  757.544147] Bad RIP value.
[  757.544151] 25*4kB (UME) 
[  757.544158] RSP: 002b:00007ffec4e2db40 EFLAGS: 00010206
[  757.544167] 47*8kB (UE) 
[  757.544175] RAX: 0000000312d9c000 RBX: 000073a7dcffa010 RCX: 000073a7dcffa010
[  757.544176] RDX: 0000000000000001 RSI: 000000033f0ea000 RDI: 0000000000000000
[  757.544184] 31*16kB (UME) 
[  757.544189] RBP: 00005d9437de5bb4 R08: 00000000ffffffff R09: 0000000000000000
[  757.544191] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  757.544200] 24*32kB (UME) 
[  757.544206] R13: 0000000000000002 R14: 0000000000001000 R15: 000000033f0e9000
[  757.544211] 13*64kB (UE) 
[  757.544224] Mem-Info:
[  757.544233] 10*128kB 
[  757.544245] active_anon:5038285 inactive_anon:4670 isolated_anon:0
                active_file:310 inactive_file:1424 isolated_file:0
                unevictable:14734 dirty:0 writeback:0 unstable:0
                slab_reclaimable:6123 slab_unreclaimable:12834
                mapped:2449 shmem:4810 pagetables:12257 bounce:0
                free:58641 free_pcp:691 free_cma:0
[  757.544256] (UE) 6*256kB (UE) 
[  757.544275] Node 0 active_anon:20153140kB inactive_anon:18680kB active_file:1240kB inactive_file:5696kB unevictable:58936kB isolated(anon):0kB isolated(file):0kB mapped:9796kB dirty:0kB writeback:0kB shmem:19240kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  757.544277] 2*512kB (E) 
[  757.544289] Node 0 
[  757.544302] 1*1024kB (M) 
[  757.544309] DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  757.544319] 2*2048kB (UM) 19*4096kB 
[  757.544324] lowmem_reserve[]: 0
[  757.544331] (ME) 
[  757.544336]  3956 23499
[  757.544404] = 89356kB
[  757.544405] Node 0 Normal: 
[  757.544413]  23499 23499
[  757.544465] 837*4kB 
[  757.544475] (UMEH) 558*8kB (UMEH) 
[  757.544481] Node 0 DMA32 free:89356kB min:11368kB low:15416kB high:19464kB active_anon:3972492kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7684kB bounce:0kB free_pcp:484kB local_pcp:0kB free_cma:0kB
[  757.544528] 436*16kB (UME) 
[  757.544535] lowmem_reserve[]: 0 0
[  757.544545] 353*32kB (UEH) 
[  757.544550]  19543 19543
[  757.544556] 211*64kB (UE) 
[  757.544561]  19543
[  757.544567] 139*128kB (U) 
[  757.544573] Node 0 Normal free:130816kB min:56168kB low:76180kB high:96192kB active_anon:16180664kB inactive_anon:18680kB active_file:212kB inactive_file:4728kB unevictable:58936kB writepending:0kB present:20400128kB managed:16657096kB mlocked:58936kB kernel_stack:4880kB pagetables:41344kB bounce:0kB free_pcp:2340kB local_pcp:312kB free_cma:0kB
[  757.544578] 7*256kB (UM) 6*512kB 
[  757.544586] lowmem_reserve[]: 0
[  757.544639] (UM) 3*1024kB 
[  757.544644]  0 0
[  757.544652] (U) 4*2048kB 
[  757.544658]  0 0
[  757.544659] Node 0 
[  757.544664] (U) 14*4096kB 
[  757.544670] DMA: 0*4kB 
[  757.544676] (U) 
[  757.544681] 0*8kB 0*16kB 
[  757.744645] = 130852kB
[  757.744647] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  757.744647] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  757.744649] 12284 total pagecache pages
[  757.744655] 1*32kB (U) 
[  757.744664] 6143894 pages RAM
[  757.744670] 2*64kB (U) 1*128kB 
[  757.744675] 0 pages HighMem/MovableOnly
[  757.744676] 844958 pages reserved
[  757.744680] (U) 1*256kB (U) 
[  757.744685] 0 pages cma reserved
[  757.744686] 0 pages hwpoisoned
[  757.744693] 0*512kB 
[  757.744699] Showing busy workqueues and worker pools:
[  757.744702] 1*1024kB (U) 
[  757.744709] workqueue events: flags=0x0
[  757.744713] 1*2048kB (M) 
[  757.744719]   pwq 2:
[  757.744736] 3*4096kB 
[  757.744751]  cpus=1 node=0
[  757.744759] (M) = 15904kB
[  757.744761] Node 0 
[  757.744767]  flags=0x0 nice=0 active=2/256
[  757.744777] DMA32: 25*4kB 
[  757.744784]     in-flight:
[  757.744791] (UME) 48*8kB 
[  757.744801]  106:balloon_process
[  757.744806] (UE) 31*16kB 
[  757.744813]     pending: balloon_process
[  757.744823] (UME) 24*32kB 
[  757.744831] workqueue mm_percpu_wq: flags=0x8
[  757.744839] (UME) 13*64kB (UE) 
[  757.744846]   pwq 8:
[  757.744853] 10*128kB 
[  757.744857]  cpus=4 node=0 flags=0x0 nice=0
[  757.744863] (UE) 6*256kB 
[  757.744872]  active=1/256
[  757.744875] (UE) 2*512kB 
[  757.744883]     pending:
[  757.744889] (E) 
[  757.744899]  vmstat_update
[  757.744904] 1*1024kB (M) 
[  757.744910]   pwq 2:
[  757.744915] 2*2048kB 
[  757.744921]  cpus=1 node=0
[  757.744926] (UM) 19*4096kB 
[  757.744932]  flags=0x0 nice=0 active=2/256
[  757.744938] (ME) = 89364kB
[  757.744946]     pending:
[  757.744953] Node 0 
[  757.744957]  vmstat_update, drain_local_pages_wq
[  757.744962] Normal: 1031*4kB 
[  757.744969]  BAR(7006)
[  757.744975] (UEH) 1080*8kB 
[  757.744988] pool 2:
[  757.744992] (UMEH) 690*16kB 
[  757.744997]  cpus=1 node=0
[  757.745012] (UE) 537*32kB 
[  757.745018]  flags=0x0 nice=0 hung=9s workers=2 idle: 1058
[  757.745026] (UMEH) 
[  757.745040] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=4
[  757.745043] 355*64kB (UE) 251*128kB (UM) 93*256kB (UM) 73*512kB (U) 54*1024kB (UM) 30*2048kB (U) 70*4096kB (U) = 560476kB
[  757.745167] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  757.745181] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  757.745193] 12284 total pagecache pages
[  757.745200] 6143894 pages RAM
[  757.745206] 0 pages HighMem/MovableOnly
[  757.745212] 844446 pages reserved
[  757.745218] 0 pages cma reserved
[  757.745224] 0 pages hwpoisoned
[  757.745230] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  757.745253] [  278]     0   278    27001     1588   208896        0             0 systemd-journal
[  757.745269] [  299]     0   299    30805      102   163840        0             0 qubesdb-daemon
[  757.745283] [  315]     0   315    23536      523   217088        0         -1000 systemd-udevd
[  757.745297] [  470]     0   470     3042      781    69632        0             0 haveged
[  757.745309] [  473]     0   473    19357      185   188416        0             0 systemd-logind
[  757.745323] [  478]    81   478    13254      216   147456        0          -900 dbus-daemon
[  757.745336] [  480]     0   480    10243       75   118784        0             0 meminfo-writer
[  757.745350] [  485]     0   485    34209      120   176128        0             0 xl
[  757.745367] [  489]     0   489    18947      163   196608        0             0 qubes-gui
[  757.745383] [  490]     0   490    16536      106   172032        0             0 qrexec-agent
[  757.745400] [  492]     0   492    73994      189   237568        0             0 su
[  757.745417] [  511]     0   511    52775       29    69632        0             0 agetty
[  757.745431] [  514]  1000   514    21961      335   221184        0             0 systemd
[  757.745444] [  516]     0   516    52863       28    61440        0             0 agetty
[  757.944425] [  522]  1000   522    34754      597   290816        0             0 (sd-pam)
[  757.944444] [  577]  1000   577    54160       88    81920        0             0 bash
[  757.944459] [  618]  1000   618     3500       30    77824        0             0 xinit
[  757.944474] [  619]  1000   619   316380    19701   716800        0             0 Xorg
[  757.944490] [  706]  1000   706    53597       67    69632        0             0 qubes-session
[  757.944507] [  716]  1000   716    13195      633   151552        0             0 dbus-daemon
[  757.944523] [  734]  1000   734     7242      118    94208        0             0 ssh-agent
[  757.944541] [  752]  1000   752    16562      104   176128        0             0 qrexec-client-v
[  757.944560] [  769]  1000   769    48107      146   147456        0             0 dconf-service
[  757.944577] [  774]  1000   774   428396     2599   827392        0             0 gsd-xsettings
[  757.944595] [  775]  1000   775    62744     1792   147456        0             0 icon-sender
[  757.944611] [  776]  1000   776   122406      247   188416        0             0 gnome-keyring-d
[  757.944624] [  779]  1000   779   120207      120   184320        0             0 agent
[  757.944636] [  791]  1000   791   438406     2853   897024        0             0 nm-applet
[  757.944647] [  796]  1000   796   128956      340   401408        0             0 pulseaudio
[  757.944661] [  799]   172   799    47723       80   143360        0             0 rtkit-daemon
[  757.944673] [  809]   998   809   657134     1538   421888        0             0 polkitd
[  757.944690] [  814]  1000   814    16528      101   167936        0             0 qrexec-fork-ser
[  757.944708] [  817]  1000   817    52238       16    61440        0             0 sleep
[  757.944723] [  894]  1000   894    87397      165   180224        0             0 at-spi-bus-laun
[  757.944740] [  899]  1000   899    13134      116   147456        0             0 dbus-daemon
[  757.944757] [  911]  1000   911    56364      196   208896        0             0 at-spi2-registr
[  757.944774] [  922]  1000   922   123835      263   208896        0             0 gvfsd
[  757.944788] [  948]  1000   948    89299      141   188416        0             0 gvfsd-fuse
[  757.944804] [  966]  1000   966   211794     6410   602112        0             0 gnome-terminal-
[  757.944822] [  972]  1000   972   169492      363   290816        0             0 xdg-desktop-por
[  757.944838] [  976]  1000   976   157249      268   204800        0             0 xdg-document-po
[  757.944856] [  979]  1000   979   117667      114   167936        0             0 xdg-permission-
[  757.944871] [  990]  1000   990   193292     1123   471040        0             0 xdg-desktop-por
[  757.944887] [  998]  1000   998    54289      252    77824        0             0 bash
[  757.944902] [ 1027]  1000  1027    54289      252    81920        0             0 bash
[  757.944920] [ 1052]     0  1052    80041      245   282624        0             0 sudo
[  757.944936] [ 1053]     0  1053    53876       63    81920        0             0 dmesg
[  757.944950] [ 1061]  1000  1061    54289      715    90112        0             0 bash
[  757.944962] [ 6380]  1000  6380    54289      252    81920        0             0 bash
[  757.944973] [ 6431]  1000  6431    60092      852   122880        0             0 vim
[  757.944985] [ 7004]  1000  7004     2000       20    61440        0             0 stress
[  757.944996] [ 7005]  1000  7005  3406010  3222910 25899008        0             0 stress
[  757.945034] [ 7006]  1000  7006  3406010  1787294 14393344        0             0 stress
[  757.945048] [ 7077]  1000  7077   160641     5233   483328        0             0 mate-notificati
[  757.945061] Out of memory: Kill process 7005 (stress) score 625 or sacrifice child
[  757.945073] Killed process 7005 (stress) total-vm:13624040kB, anon-rss:12891640kB, file-rss:0kB, shmem-rss:0kB
[  758.284384] oom_reaper: reaped process 7005 (stress), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
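For reference, the "score 625" in the kill line above comes from the kernel's oom_badness() heuristic: roughly the task's memory footprint (rss + page tables + swap entries) as a fraction of usable RAM, scaled to 0–1000, plus oom_score_adj. A minimal sketch of that formula, plugging in the numbers from the process table above (the exact kernel accounting differs slightly between versions, which is why this lands near, not exactly at, 625):

```python
# Rough sketch of the kernel's oom_badness() heuristic (mm/oom_kill.c):
# points ~= (rss + page-table pages + swap entries) / usable pages * 1000.
# All numbers below are taken from the dmesg output above for stress pid 7005.
PAGE_SIZE = 4096

rss_pages = 3222910                      # "rss" column
pgtable_pages = 25899008 // PAGE_SIZE    # "pgtables_bytes" column, in pages
swap_pages = 0                           # "swapents" column (no swap enabled)
usable_pages = 6143894 - 844958          # "pages RAM" minus "pages reserved"

score = (rss_pages + pgtable_pages + swap_pages) * 1000 // usable_pages
print(score)  # prints 609 -- same ballpark as the reported score 625
```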
@constantoverride


commented Aug 30, 2018

OK, I ran these (with -m 1) again, but there was no dmesg output or disk thrashing (because there was still about 9G of MemFree):

[user@dev01-w-s-f-fdr28 ~]$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; echo $?
stress: info: [7295] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [7295] successful run completed in 10s

real	0m10.198s
user	0m8.779s
sys	0m1.323s
0
[user@dev01-w-s-f-fdr28 ~]$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; echo $?
stress: info: [7635] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [7635] successful run completed in 10s

real	0m10.512s
user	0m7.053s
sys	0m3.384s
0
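For clarity, the awk one-liner in these stress invocations just reads the MemAvailable line of /proc/meminfo and adds a 4000 kB overshoot, so the worker tries to allocate slightly more than what the kernel considers available. A self-contained illustration against a made-up meminfo snippet (the sample values are hypothetical):

```shell
# Write a fake /proc/meminfo fragment (values are made up for illustration).
cat > /tmp/meminfo.sample <<'EOF'
MemTotal:       24575576 kB
MemFree:         9423812 kB
MemAvailable:    9341380 kB
EOF

# Same expression as in the stress command: MemAvailable (in kB) plus 4000 kB.
awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /tmp/meminfo.sample
# prints 9345380
```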
@constantoverride


commented Aug 30, 2018

EDIT: ignore the following comments and skip to this instead: https://gist.github.com/constantoverride/84eba764f487049ed642eb2111a20830#gistcomment-2694173
(It seems I ran these against a previous kernel build which only had le9b.patch applied. Why? I have to switch to a different kernel in the AppVM settings before I can uninstall the current one, and since I don't always increase the kernel release number on every build, I used the same release number 5 to test the kernel with only the malloc-stall patch, then rebuilt it as 5 again to test with both the malloc-stall and le9b patches; it looks like I forgot to switch back to the rebuilt 5 the last time.)

Here's with both patches ( memalloc_watchdog.patch and le9b.patch):

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?
stress: info: [1669] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [1669] (415) <-- worker 1671 got signal 9
stress: WARN: [1669] (417) now reaping child worker processes
stress: FAIL: [1669] (451) failed run completed in 5s

real	0m4.637s
user	0m1.363s
sys	0m7.218s
1
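The "worker 1671 got signal 9" line above means the child was SIGKILLed (by the OOM killer); stress detects this from the wait status of its worker. A minimal sketch of that detection — not stress's actual code; here the child SIGKILLs itself to stand in for the OOM killer:

```python
import os
import signal

# Fork a worker; the child kills itself with SIGKILL to simulate an OOM kill.
pid = os.fork()
if pid == 0:
    os.kill(os.getpid(), signal.SIGKILL)
    os._exit(0)  # never reached

# Parent: inspect the wait status the same way a parent like stress would.
_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    print(f"worker {pid} got signal {os.WTERMSIG(status)}")  # signal 9
```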
[   50.273667] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[   50.273685] stress cpuset=/ mems_allowed=0
[   50.273695] CPU: 9 PID: 1671 Comm: stress Tainted: G           O      4.18.5-4.pvops.qubes.x86_64 #1
[   50.273707] Call Trace:
[   50.273715]  dump_stack+0x63/0x83
[   50.273722]  dump_header+0x6e/0x285
[   50.273729]  oom_kill_process+0x23c/0x450
[   50.273735]  out_of_memory+0x140/0x590
[   50.273745]  __alloc_pages_slowpath+0x134c/0x1590
[   50.273757]  __alloc_pages_nodemask+0x28b/0x2f0
[   50.273767]  alloc_pages_vma+0xac/0x4f0
[   50.273777]  do_anonymous_page+0x105/0x3f0
[   50.273786]  __handle_mm_fault+0xbc9/0xf10
[   50.273812]  handle_mm_fault+0x102/0x2c0
[   50.273822]  __do_page_fault+0x294/0x540
[   50.273831]  do_page_fault+0x38/0x120
[   50.273853]  ? page_fault+0x8/0x30
[   50.273878]  page_fault+0x1e/0x30
[   50.273886] RIP: 0033:0x61e381b52dd0
[   50.273893] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[   50.273984] RSP: 002b:00007ffe7773f450 EFLAGS: 00010206
[   50.274010] RAX: 00000002ce67d000 RBX: 0000736d8285f010 RCX: 0000736d8285f010
[   50.274024] RDX: 0000000000000001 RSI: 00000004bf82a000 RDI: 0000000000000000
[   50.274067] RBP: 000061e381b53bb4 R08: 00000000ffffffff R09: 0000000000000000
[   50.274080] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   50.274091] R13: 0000000000000002 R14: 0000000000001000 R15: 00000004bf829000
[   50.274132] Mem-Info:
[   50.274143] active_anon:5875323 inactive_anon:4671 isolated_anon:0
                active_file:29088 inactive_file:103 isolated_file:0
                unevictable:3361 dirty:4 writeback:0 unstable:0
                slab_reclaimable:5830 slab_unreclaimable:12198
                mapped:22971 shmem:4805 pagetables:13692 bounce:0
                free:40339 free_pcp:86 free_cma:0
[   50.274223] Node 0 active_anon:23501292kB inactive_anon:18684kB active_file:116352kB inactive_file:412kB unevictable:13444kB isolated(anon):0kB isolated(file):0kB mapped:91884kB dirty:16kB writeback:0kB shmem:19220kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[   50.274270] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   50.274317] lowmem_reserve[]: 0 3956 23499 23499 23499
[   50.274328] Node 0 DMA32 free:89228kB min:11368kB low:15416kB high:19464kB active_anon:3971788kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070076kB mlocked:0kB kernel_stack:16kB pagetables:7556kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   50.274376] lowmem_reserve[]: 0 0 19543 19543 19543
[   50.274387] Node 0 Normal free:56224kB min:56168kB low:76180kB high:96192kB active_anon:19528804kB inactive_anon:18684kB active_file:115708kB inactive_file:672kB unevictable:13444kB writepending:16kB present:20400128kB managed:19995856kB mlocked:13444kB kernel_stack:4912kB pagetables:47212kB bounce:0kB free_pcp:344kB local_pcp:120kB free_cma:0kB
[   50.274440] lowmem_reserve[]: 0 0 0 0 0
[   50.274449] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   50.274479] Node 0 DMA32: 13*4kB (UM) 17*8kB (UM) 16*16kB (UM) 15*32kB (UM) 3*64kB (U) 2*128kB (UM) 1*256kB (U) 0*512kB 2*1024kB (UM) 2*2048kB (UM) 20*4096kB (M) = 89692kB
[   50.274511] Node 0 Normal: 742*4kB (UE) 547*8kB (UME) 411*16kB (UME) 319*32kB (UME) 207*64kB (UE) 113*128kB (UME) 20*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 56960kB
[   50.274550] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   50.274569] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   50.274584] 34169 total pagecache pages
[   50.274592] 6143894 pages RAM
[   50.274599] 0 pages HighMem/MovableOnly
[   50.274606] 123435 pages reserved
[   50.274613] 0 pages cma reserved
[   50.274620] 0 pages hwpoisoned
[   50.274627] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[   50.274651] [  289]     0   289    24936     2148   204800        0             0 systemd-journal
[   50.274668] [  308]     0   308    30805      566   159744        0             0 qubesdb-daemon
[   50.473605] [  315]     0   315    23522     1955   212992        0         -1000 systemd-udevd
[   50.473618] [  497]     0   497     3042     1221    69632        0             0 haveged
[   50.473629] [  499]     0   499    10243       73   118784        0             0 meminfo-writer
[   50.473642] [  500]    81   500    13255     1172   155648        0          -900 dbus-daemon
[   50.473654] [  501]     0   501    19358     1497   184320        0             0 systemd-logind
[   50.473666] [  510]     0   510    34209      657   180224        0             0 xl
[   50.473677] [  516]     0   516    18919      931   196608        0             0 qubes-gui
[   50.473688] [  517]     0   517    16536      828   176128        0             0 qrexec-agent
[   50.473704] [  524]     0   524    73994     1311   241664        0             0 su
[   50.473715] [  556]     0   556    52775      526    69632        0             0 agetty
[   50.473725] [  558]     0   558    52863      369    69632        0             0 agetty
[   50.473736] [  560]  1000   560    21961     2072   217088        0             0 systemd
[   50.473750] [  567]  1000   567    34753      607   294912        0             0 (sd-pam)
[   50.473761] [  679]  1000   679    54160      855    77824        0             0 bash
[   50.473772] [  702]  1000   702     3500      288    77824        0             0 xinit
[   50.473783] [  703]  1000   703   314739    24875   700416        0             0 Xorg
[   50.473795] [  719]  1000   719    53597      771    77824        0             0 qubes-session
[   50.473808] [  729]  1000   729    13194     1111   143360        0             0 dbus-daemon
[   50.473821] [  747]  1000   747     7233      118    98304        0             0 ssh-agent
[   50.473834] [  768]  1000   768    16562      563   176128        0             0 qrexec-client-v
[   50.473847] [  784]  1000   784    48107     1270   147456        0             0 dconf-service
[   50.473859] [  789]  1000   789    62744     2928   139264        0             0 icon-sender
[   50.473871] [  791]  1000   791   428389    12341   831488        0             0 gsd-xsettings
[   50.473883] [  792]  1000   792   122405     1532   192512        0             0 gnome-keyring-d
[   50.473896] [  796]  1000   796   120207     1340   188416        0             0 agent
[   50.473908] [  808]  1000   808   438400    13924   909312        0             0 nm-applet
[   50.473918] [  812]  1000   812   128956     2137   397312        0             0 pulseaudio
[   50.473933] [  814]   172   814    47723      829   143360        0             0 rtkit-daemon
[   50.473945] [  823]   998   823   657134     5418   421888        0             0 polkitd
[   50.473956] [  830]  1000   830    16528      101   167936        0             0 qrexec-fork-ser
[   50.473968] [  833]  1000   833    52238      175    69632        0             0 sleep
[   50.473979] [  942]  1000   942    87397     1521   180224        0             0 at-spi-bus-laun
[   50.473991] [  947]  1000   947    13134      954   147456        0             0 dbus-daemon
[   50.474011] [  951]  1000   951    56364     1479   208896        0             0 at-spi2-registr
[   50.474023] [  959]  1000   959   123835     1782   208896        0             0 gvfsd
[   50.474036] [  969]  1000   969    89299     1552   192512        0             0 gvfsd-fuse
[   50.474050] [  984]  1000   984   208420     9987   565248        0             0 gnome-terminal-
[   50.474062] [  991]  1000   991   169492     2887   278528        0             0 xdg-desktop-por
[   50.474074] [  995]  1000   995   157249     1477   192512        0             0 xdg-document-po
[   50.474086] [  998]  1000   998   117667     1234   155648        0             0 xdg-permission-
[   50.474098] [ 1009]  1000  1009   193297     5010   458752        0             0 xdg-desktop-por
[   50.474111] [ 1018]  1000  1018    54289     1064    77824        0             0 bash
[   50.474121] [ 1046]  1000  1046    53923      701    81920        0             0 watch
[   50.474132] [ 1062]  1000  1062    54289     1023    77824        0             0 bash
[   50.474143] [ 1188]  1000  1188    54289     1055    81920        0             0 bash
[   50.474173] [ 1235]     0  1235    80041     1681   278528        0             0 sudo
[   50.474188] [ 1240]     0  1240    53876      267    77824        0             0 dmesg
[   50.474201] [ 1669]  1000  1669     2000      258    65536        0             0 stress
[   50.474212] [ 1670]  1000  1670  4980730  2897005 23289856        0             0 stress
[   50.474223] [ 1671]  1000  1671  4980730  2942611 23654400        0             0 stress
[   50.474234] Out of memory: Kill process 1671 (stress) score 489 or sacrifice child
[   50.474245] Killed process 1671 (stress) total-vm:19922920kB, anon-rss:11770232kB, file-rss:212kB, shmem-rss:0kB
@constantoverride


commented Aug 30, 2018

The second run didn't hit OOM (nor disk thrashing, obviously):

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?
stress: info: [5078] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: info: [5078] successful run completed in 11s

real	0m10.234s
user	0m9.583s
sys	0m4.400s
0
@constantoverride


commented Aug 30, 2018

With -m 3:

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 3 --timeout 10s; echo $?
stress: info: [6418] dispatching hogs: 0 cpu, 0 io, 3 vm, 0 hdd
stress: FAIL: [6418] (415) <-- worker 6420 got signal 9
stress: WARN: [6418] (417) now reaping child worker processes
stress: FAIL: [6418] (451) failed run completed in 4s

real	0m4.207s
user	0m1.520s
sys	0m9.474s
1
[   52.713389] audit: type=1131 audit(1535641537.009:73): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-sync-time comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[  296.871791] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  296.871809] stress cpuset=/ mems_allowed=0
[  296.871817] CPU: 4 PID: 6419 Comm: stress Tainted: G           O      4.18.5-4.pvops.qubes.x86_64 #1
[  296.871829] Call Trace:
[  296.871838]  dump_stack+0x63/0x83
[  296.871845]  dump_header+0x6e/0x285
[  296.871852]  oom_kill_process+0x23c/0x450
[  296.871858]  out_of_memory+0x140/0x590
[  296.871865]  __alloc_pages_slowpath+0x134c/0x1590
[  296.871873]  __alloc_pages_nodemask+0x28b/0x2f0
[  296.871882]  alloc_pages_vma+0xac/0x4f0
[  296.871889]  do_anonymous_page+0x105/0x3f0
[  296.871895]  __handle_mm_fault+0xbc9/0xf10
[  296.871902]  ? __switch_to_asm+0x34/0x70
[  296.871908]  ? __switch_to_asm+0x34/0x70
[  296.871914]  handle_mm_fault+0x102/0x2c0
[  296.871922]  __do_page_fault+0x294/0x540
[  296.871929]  do_page_fault+0x38/0x120
[  296.871935]  ? page_fault+0x8/0x30
[  296.871942]  page_fault+0x1e/0x30
[  296.871948] RIP: 0033:0x56c3128dfdd0
[  296.871955] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[  296.872013] RSP: 002b:00007ffc2576d210 EFLAGS: 00010206
[  296.872021] RAX: 00000001aab23000 RBX: 00007d2dd423f010 RCX: 00007d2dd423f010
[  296.872032] RDX: 0000000000000001 RSI: 00000004b2c8e000 RDI: 0000000000000000
[  296.872043] RBP: 000056c3128e0bb4 R08: 00000000ffffffff R09: 0000000000000000
[  296.872054] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  296.872064] R13: 0000000000000002 R14: 0000000000001000 R15: 00000004b2c8d000
[  296.872088] Mem-Info:
[  296.872094] active_anon:5861653 inactive_anon:8253 isolated_anon:0
                active_file:30165 inactive_file:0 isolated_file:0
                unevictable:11533 dirty:0 writeback:0 unstable:0
                slab_reclaimable:5857 slab_unreclaimable:12511
                mapped:22873 shmem:8389 pagetables:13700 bounce:0
                free:40289 free_pcp:1010 free_cma:0
[  296.872138] Node 0 active_anon:23446612kB inactive_anon:33012kB active_file:120660kB inactive_file:0kB unevictable:46132kB isolated(anon):0kB isolated(file):0kB mapped:91492kB dirty:0kB writeback:0kB shmem:33556kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  296.872174] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  296.872208] lowmem_reserve[]: 0 3956 23499 23499 23499
[  296.872217] Node 0 DMA32 free:89376kB min:11368kB low:15416kB high:19464kB active_anon:3972160kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070076kB mlocked:0kB kernel_stack:0kB pagetables:7804kB bounce:0kB free_pcp:648kB local_pcp:0kB free_cma:0kB
[  296.872256] lowmem_reserve[]: 0 0 19543 19543 19543
[  296.872264] Node 0 Normal free:55876kB min:56168kB low:76180kB high:96192kB active_anon:19474092kB inactive_anon:33012kB active_file:120660kB inactive_file:132kB unevictable:46132kB writepending:0kB present:20400128kB managed:19995856kB mlocked:46132kB kernel_stack:4960kB pagetables:46996kB bounce:0kB free_pcp:3392kB local_pcp:44kB free_cma:0kB
[  296.872305] lowmem_reserve[]: 0 0 0 0 0
[  296.872312] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  296.872334] Node 0 DMA32: 2*4kB (UM) 0*8kB 14*16kB (UM) 12*32kB (U) 9*64kB (UM) 1*128kB (M) 1*256kB (M) 1*512kB (M) 1*1024kB (U) 2*2048kB (UM) 20*4096kB (M) = 89128kB
[  296.872359] Node 0 Normal: 526*4kB (UME) 467*8kB (UME) 367*16kB (UE) 323*32kB (UE) 223*64kB (UE) 128*128kB (UM) 11*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 55520kB
[  296.872386] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  296.872398] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  296.872410] 38455 total pagecache pages
[  296.872415] 6143894 pages RAM
[  296.872420] 0 pages HighMem/MovableOnly
[  296.872426] 123435 pages reserved
[  296.872431] 0 pages cma reserved
[  296.872436] 0 pages hwpoisoned
[  296.872443] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  296.872461] [  289]     0   289    26985     2041   208896        0             0 systemd-journal
[  296.872474] [  308]     0   308    30805      566   159744        0             0 qubesdb-daemon
[  296.872487] [  315]     0   315    23522     1955   212992        0         -1000 systemd-udevd
[  296.872500] [  497]     0   497     3042     1221    69632        0             0 haveged
[  296.872510] [  499]     0   499    10243       73   118784        0             0 meminfo-writer
[  296.872523] [  500]    81   500    13255     1172   155648        0          -900 dbus-daemon
[  296.872535] [  501]     0   501    19358     1497   184320        0             0 systemd-logind
[  297.071572] [  510]     0   510    34209      657   180224        0             0 xl
[  297.071583] [  516]     0   516    18947     1185   196608        0             0 qubes-gui
[  297.071595] [  517]     0   517    16536      828   176128        0             0 qrexec-agent
[  297.071608] [  524]     0   524    73994     1311   241664        0             0 su
[  297.071619] [  556]     0   556    52775      526    69632        0             0 agetty
[  297.071630] [  558]     0   558    52863      369    69632        0             0 agetty
[  297.071642] [  560]  1000   560    21961     2072   217088        0             0 systemd
[  297.071653] [  567]  1000   567    34753      607   294912        0             0 (sd-pam)
[  297.071679] [  679]  1000   679    54160      855    77824        0             0 bash
[  297.071690] [  702]  1000   702     3500      288    77824        0             0 xinit
[  297.071701] [  703]  1000   703   310734    24441   696320        0             0 Xorg
[  297.071713] [  719]  1000   719    53597      771    77824        0             0 qubes-session
[  297.071725] [  729]  1000   729    13194     1111   143360        0             0 dbus-daemon
[  297.071739] [  747]  1000   747     7233      118    98304        0             0 ssh-agent
[  297.071767] [  768]  1000   768    16562      563   176128        0             0 qrexec-client-v
[  297.071781] [  784]  1000   784    48107     1270   147456        0             0 dconf-service
[  297.071794] [  789]  1000   789    62744     2928   139264        0             0 icon-sender
[  297.071807] [  791]  1000   791   428389    12341   831488        0             0 gsd-xsettings
[  297.071819] [  792]  1000   792   122405     1532   192512        0             0 gnome-keyring-d
[  297.071832] [  796]  1000   796   120207     1340   188416        0             0 agent
[  297.071848] [  808]  1000   808   438400    13924   909312        0             0 nm-applet
[  297.071862] [  812]  1000   812   128956     2137   397312        0             0 pulseaudio
[  297.071875] [  814]   172   814    47723      829   143360        0             0 rtkit-daemon
[  297.071887] [  823]   998   823   657134     5418   421888        0             0 polkitd
[  297.071898] [  830]  1000   830    16528      101   167936        0             0 qrexec-fork-ser
[  297.071910] [  833]  1000   833    52238      175    69632        0             0 sleep
[  297.071921] [  942]  1000   942    87397     1521   180224        0             0 at-spi-bus-laun
[  297.071937] [  947]  1000   947    13134      954   147456        0             0 dbus-daemon
[  297.071950] [  951]  1000   951    56364     1479   208896        0             0 at-spi2-registr
[  297.071962] [  959]  1000   959   123835     1782   208896        0             0 gvfsd
[  297.071973] [  969]  1000   969    89299     1552   192512        0             0 gvfsd-fuse
[  297.071987] [  984]  1000   984   207199    10210   589824        0             0 gnome-terminal-
[  297.071999] [  991]  1000   991   169492     2887   278528        0             0 xdg-desktop-por
[  297.072041] [  995]  1000   995   157249     1477   192512        0             0 xdg-document-po
[  297.072054] [  998]  1000   998   117667     1234   155648        0             0 xdg-permission-
[  297.072067] [ 1009]  1000  1009   193297     5010   458752        0             0 xdg-desktop-por
[  297.072080] [ 1018]  1000  1018    54289     1064    77824        0             0 bash
[  297.072091] [ 1046]  1000  1046    53923      701    81920        0             0 watch
[  297.072102] [ 1062]  1000  1062    54289     1023    77824        0             0 bash
[  297.072113] [ 1188]  1000  1188    54289     1055    81920        0             0 bash
[  297.072124] [ 1235]     0  1235    80041     1681   278528        0             0 sudo
[  297.072135] [ 1240]     0  1240    53876      267    77824        0             0 dmesg
[  297.072146] [ 6418]  1000  6418     2000      287    61440        0             0 stress
[  297.072157] [ 6419]  1000  6419  4928606  1747842 14069760        0             0 stress
[  297.072169] [ 6420]  1000  6420  4928606  2043852 16445440        0             0 stress
[  297.072179] [ 6421]  1000  6421  4928606  2042136 16433152        0             0 stress
[  297.072190] Out of memory: Kill process 6420 (stress) score 340 or sacrifice child
[  297.072201] Killed process 6420 (stress) total-vm:19714424kB, anon-rss:8175072kB, file-rss:336kB, shmem-rss:0kB
[  297.292712] oom_reaper: reaped process 6420 (stress), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
@constantoverride commented Aug 30, 2018

And the next run didn't OOM:

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 3 --timeout 10s; echo $?
stress: info: [8005] dispatching hogs: 0 cpu, 0 io, 3 vm, 0 hdd
stress: info: [8005] successful run completed in 10s

real	0m10.272s
user	0m12.240s
sys	0m11.214s
0

There was a slight 2.5 MB/s of actual disk reads (according to sudo iotop -d 3, for two updates), but nothing showed up in dmesg.
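A rough way to spot that kind of read rate without iotop is to sample /proc/diskstats directly (field 6 of each line is sectors read, in 512-byte units) and convert the delta to KiB/s. This is only a sketch: the device name "sda" and the 3-second interval (matching iotop -d 3 above) are assumptions, not something from the logs.

```shell
# Sample sectors-read twice and report the read rate, iotop-style.
read_rate_kibs() {
    # args: sectors_before sectors_after interval_seconds
    # 512 bytes per sector, 1024 bytes per KiB.
    echo $(( ($2 - $1) * 512 / 1024 / $3 ))
}

dev=sda       # assumed device name; adjust for your system
interval=3    # seconds, same as iotop -d 3
s1=$(awk -v d="$dev" '$3 == d {print $6}' /proc/diskstats)
sleep "$interval"
s2=$(awk -v d="$dev" '$3 == d {print $6}' /proc/diskstats)
echo "$dev: $(read_rate_kibs "${s1:-0}" "${s2:-0}" "$interval") KiB/s read"
```

Running this in a loop while the stress test runs would show whether the reads are a brief blip or the sustained thrashing described at the top of this gist.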

@constantoverride commented Aug 30, 2018

Here's with -m 4:

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 4 --timeout 10s; echo $?
stress: info: [10767] dispatching hogs: 0 cpu, 0 io, 4 vm, 0 hdd
stress: FAIL: [10767] (415) <-- worker 10771 got signal 9
stress: WARN: [10767] (417) now reaping child worker processes
stress: FAIL: [10767] (451) failed run completed in 4s

real	0m4.159s
user	0m1.343s
sys	0m12.634s
1

[  352.504613] audit: type=1131 audit(1535641836.800:75): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-update-check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[  523.241038] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  523.241057] stress cpuset=/ mems_allowed=0
[  523.241066] CPU: 3 PID: 10771 Comm: stress Tainted: G           O      4.18.5-4.pvops.qubes.x86_64 #1
[  523.241077] Call Trace:
[  523.241086]  dump_stack+0x63/0x83
[  523.241093]  dump_header+0x6e/0x285
[  523.241100]  oom_kill_process+0x23c/0x450
[  523.241106]  out_of_memory+0x140/0x590
[  523.241113]  __alloc_pages_slowpath+0x134c/0x1590
[  523.241122]  __alloc_pages_nodemask+0x28b/0x2f0
[  523.241131]  alloc_pages_vma+0xac/0x4f0
[  523.241137]  ? lru_cache_add+0x134/0x1b0
[  523.241143]  do_anonymous_page+0x105/0x3f0
[  523.241150]  __handle_mm_fault+0xbc9/0xf10
[  523.241156]  handle_mm_fault+0x102/0x2c0
[  523.241163]  __do_page_fault+0x294/0x540
[  523.241170]  do_page_fault+0x38/0x120
[  523.241176]  ? page_fault+0x8/0x30
[  523.241183]  page_fault+0x1e/0x30
[  523.241189] RIP: 0033:0x5eda59689dd0
[  523.241195] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[  523.241234] RSP: 002b:00007ffde4ea2080 EFLAGS: 00010206
[  523.241242] RAX: 000000017b9e8000 RBX: 00007fb9c917d010 RCX: 00007fb9c917d010
[  523.241252] RDX: 0000000000000001 RSI: 000000041d834000 RDI: 0000000000000000
[  523.241262] RBP: 00005eda5968abb4 R08: 00000000ffffffff R09: 0000000000000000
[  523.241276] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  523.241287] R13: 0000000000000002 R14: 0000000000001000 R15: 000000041d833000
[  523.241297] Mem-Info:
[  523.241303] active_anon:5861695 inactive_anon:8253 isolated_anon:0
                active_file:30175 inactive_file:0 isolated_file:0
                unevictable:11533 dirty:0 writeback:0 unstable:0
                slab_reclaimable:5948 slab_unreclaimable:12727
                mapped:22787 shmem:8389 pagetables:13735 bounce:0
                free:40384 free_pcp:49 free_cma:0
[  523.241347] Node 0 active_anon:23446780kB inactive_anon:33012kB active_file:120700kB inactive_file:0kB unevictable:46132kB isolated(anon):0kB isolated(file):0kB mapped:91148kB dirty:0kB writeback:0kB shmem:33556kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  523.241381] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  523.241414] lowmem_reserve[]: 0 3956 23499 23499 23499
[  523.241423] Node 0 DMA32 free:89536kB min:11368kB low:15416kB high:19464kB active_anon:3972280kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070076kB mlocked:0kB kernel_stack:0kB pagetables:7708kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  523.241457] lowmem_reserve[]: 0 0 19543 19543 19543
[  523.241465] Node 0 Normal free:56096kB min:56168kB low:76180kB high:96192kB active_anon:19473992kB inactive_anon:33012kB active_file:120700kB inactive_file:152kB unevictable:46132kB writepending:0kB present:20400128kB managed:19995856kB mlocked:46132kB kernel_stack:4864kB pagetables:47232kB bounce:0kB free_pcp:196kB local_pcp:72kB free_cma:0kB
[  523.241502] lowmem_reserve[]: 0 0 0 0 0
[  523.241509] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  523.241531] Node 0 DMA32: 25*4kB (UM) 22*8kB (UM) 17*16kB (UM) 14*32kB (UM) 9*64kB (UM) 1*128kB (M) 1*256kB (U) 2*512kB (U) 1*1024kB (M) 2*2048kB (UM) 20*4096kB (M) = 90020kB
[  523.241557] Node 0 Normal: 259*4kB (UME) 233*8kB (UME) 289*16kB (UME) 298*32kB (UME) 240*64kB (UME) 193*128kB (UM) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 57124kB
[  523.241581] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  523.241593] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  523.241605] 38680 total pagecache pages
[  523.241610] 6143894 pages RAM
[  523.241617] 0 pages HighMem/MovableOnly
[  523.241622] 123435 pages reserved
[  523.241627] 0 pages cma reserved
[  523.241633] 0 pages hwpoisoned
[  523.241638] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  523.241656] [  289]     0   289    26985     2041   208896        0             0 systemd-journal
[  523.241668] [  308]     0   308    30805      566   159744        0             0 qubesdb-daemon
[  523.241681] [  315]     0   315    23522     1955   212992        0         -1000 systemd-udevd
[  523.241693] [  497]     0   497     3042     1221    69632        0             0 haveged
[  523.241704] [  499]     0   499    10243       73   118784        0             0 meminfo-writer
[  523.241717] [  500]    81   500    13255     1172   155648        0          -900 dbus-daemon
[  523.241729] [  501]     0   501    19358     1497   184320        0             0 systemd-logind
[  523.241741] [  510]     0   510    34209      657   180224        0             0 xl
[  523.241752] [  516]     0   516    18947     1185   196608        0             0 qubes-gui
[  523.241762] [  517]     0   517    16536      828   176128        0             0 qrexec-agent
[  523.241775] [  524]     0   524    73994     1311   241664        0             0 su
[  523.241787] [  556]     0   556    52775      526    69632        0             0 agetty
[  523.241798] [  558]     0   558    52863      369    69632        0             0 agetty
[  523.241812] [  560]  1000   560    21961     2072   217088        0             0 systemd
[  523.241822] [  567]  1000   567    34753      607   294912        0             0 (sd-pam)
[  523.241833] [  679]  1000   679    54160      855    77824        0             0 bash
[  523.241844] [  702]  1000   702     3500      288    77824        0             0 xinit
[  523.241854] [  703]  1000   703   310734    24441   696320        0             0 Xorg
[  523.241865] [  719]  1000   719    53597      771    77824        0             0 qubes-session
[  523.241877] [  729]  1000   729    13194     1111   143360        0             0 dbus-daemon
[  523.241889] [  747]  1000   747     7233      118    98304        0             0 ssh-agent
[  523.441952] [  768]  1000   768    16562      563   176128        0             0 qrexec-client-v
[  523.441972] [  784]  1000   784    48107     1270   147456        0             0 dconf-service
[  523.441992] [  789]  1000   789    62744     2928   139264        0             0 icon-sender
[  523.442012] [  791]  1000   791   428389    12341   831488        0             0 gsd-xsettings
[  523.442035] [  792]  1000   792   122405     1532   192512        0             0 gnome-keyring-d
[  523.442064] [  796]  1000   796   120207     1340   188416        0             0 agent
[  523.442081] [  808]  1000   808   438400    13924   909312        0             0 nm-applet
[  523.442098] [  812]  1000   812   128956     2137   397312        0             0 pulseaudio
[  523.442118] [  814]   172   814    47723      829   143360        0             0 rtkit-daemon
[  523.442137] [  823]   998   823   657134     5418   421888        0             0 polkitd
[  523.442154] [  830]  1000   830    16528      101   167936        0             0 qrexec-fork-ser
[  523.442173] [  833]  1000   833    52238      175    69632        0             0 sleep
[  523.442190] [  942]  1000   942    87397     1521   180224        0             0 at-spi-bus-laun
[  523.442209] [  947]  1000   947    13134      954   147456        0             0 dbus-daemon
[  523.442228] [  951]  1000   951    56364     1479   208896        0             0 at-spi2-registr
[  523.442247] [  959]  1000   959   123835     1782   208896        0             0 gvfsd
[  523.442266] [  969]  1000   969    89299     1552   192512        0             0 gvfsd-fuse
[  523.442285] [  984]  1000   984   207199    10223   589824        0             0 gnome-terminal-
[  523.442303] [  991]  1000   991   169492     2887   278528        0             0 xdg-desktop-por
[  523.442322] [  995]  1000   995   157249     1477   192512        0             0 xdg-document-po
[  523.442342] [  998]  1000   998   117667     1234   155648        0             0 xdg-permission-
[  523.442361] [ 1009]  1000  1009   193297     5010   458752        0             0 xdg-desktop-por
[  523.442379] [ 1018]  1000  1018    54289     1064    77824        0             0 bash
[  523.442396] [ 1046]  1000  1046    53923      701    81920        0             0 watch
[  523.442412] [ 1062]  1000  1062    54289     1023    77824        0             0 bash
[  523.442429] [ 1188]  1000  1188    54289     1055    81920        0             0 bash
[  523.442445] [ 1235]     0  1235    80041     1681   278528        0             0 sudo
[  523.442461] [ 1240]     0  1240    53876      267    77824        0             0 dmesg
[  523.442473] [10767]  1000 10767     2000      284    57344        0             0 stress
[  523.442484] [10768]  1000 10768  4317188  1553046 12509184        0             0 stress
[  523.442495] [10769]  1000 10769  4317188  1184766  9555968        0             0 stress
[  523.442506] [10770]  1000 10770  4317188  1541100 12414976        0             0 stress
[  523.442517] [10771]  1000 10771  4317188  1554960 12525568        0             0 stress
[  523.442531] [10836]  1000 10836    53923      105    65536        0             0 watch
[  523.442542] [10837]  1000 10837       43        1    20480        0             0 cat
[  523.442553] Out of memory: Kill process 10771 (stress) score 258 or sacrifice child
[  523.442564] Killed process 10771 (stress) total-vm:17268752kB, anon-rss:6219628kB, file-rss:212kB, shmem-rss:0kB
@constantoverride commented Aug 30, 2018

and the next -m 4 call still OOMs:

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 4 --timeout 10s; echo $?
stress: info: [11890] dispatching hogs: 0 cpu, 0 io, 4 vm, 0 hdd
stress: FAIL: [11890] (415) <-- worker 11891 got signal 9
stress: WARN: [11890] (417) now reaping child worker processes
stress: FAIL: [11890] (451) failed run completed in 3s

real	0m3.605s
user	0m0.430s
sys	0m3.326s
1
[  578.643938] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  578.643977] stress cpuset=/ mems_allowed=0
[  578.643987] CPU: 6 PID: 11893 Comm: stress Tainted: G           O      4.18.5-4.pvops.qubes.x86_64 #1
[  578.644004] Call Trace:
[  578.644015]  dump_stack+0x63/0x83
[  578.644025]  dump_header+0x6e/0x285
[  578.644035]  oom_kill_process+0x23c/0x450
[  578.644044]  out_of_memory+0x140/0x590
[  578.644053]  __alloc_pages_slowpath+0x134c/0x1590
[  578.644065]  __alloc_pages_nodemask+0x28b/0x2f0
[  578.644076]  alloc_pages_vma+0xac/0x4f0
[  578.644086]  do_anonymous_page+0x105/0x3f0
[  578.644096]  __handle_mm_fault+0xbc9/0xf10
[  578.644105]  handle_mm_fault+0x102/0x2c0
[  578.644114]  __do_page_fault+0x294/0x540
[  578.644124]  ? __audit_syscall_exit+0x2bf/0x3e0
[  578.644135]  do_page_fault+0x38/0x120
[  578.644144]  ? page_fault+0x8/0x30
[  578.644154]  page_fault+0x1e/0x30
[  578.644163] RIP: 0033:0x61764453ddd0
[  578.644171] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[  578.644225] RSP: 002b:00007ffd94fec5c0 EFLAGS: 00010206
[  578.644235] RAX: 000000006058e000 RBX: 00007fabf9d8c010 RCX: 00007fabf9d8c010
[  578.644249] RDX: 0000000000000001 RSI: 00000001b1143000 RDI: 0000000000000000
[  578.644263] RBP: 000061764453ebb4 R08: 00000000ffffffff R09: 0000000000000000
[  578.644278] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  578.644292] R13: 0000000000000002 R14: 0000000000001000 R15: 00000001b1142000
[  578.644323] Mem-Info:
[  578.644333] active_anon:1793467 inactive_anon:8253 isolated_anon:0
                active_file:30204 inactive_file:92 isolated_file:0
                unevictable:11533 dirty:0 writeback:0 unstable:0
                slab_reclaimable:5977 slab_unreclaimable:12752
                mapped:22775 shmem:8389 pagetables:5813 bounce:0
                free:40307 free_pcp:252 free_cma:0
[  578.644392] Node 0 active_anon:7173868kB inactive_anon:33012kB active_file:120816kB inactive_file:368kB unevictable:46132kB isolated(anon):0kB isolated(file):0kB mapped:91100kB dirty:0kB writeback:0kB shmem:33556kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  578.644438] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  578.644482] lowmem_reserve[]: 0 3956 23499 23499 23499
[  578.644493] Node 0 DMA32 free:89284kB min:11368kB low:15416kB high:19464kB active_anon:3971508kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070076kB mlocked:0kB kernel_stack:0kB pagetables:7580kB bounce:0kB free_pcp:396kB local_pcp:204kB free_cma:0kB
[  578.644540] lowmem_reserve[]: 0 0 19543 19543 19543
[  578.644552] Node 0 Normal free:56040kB min:56168kB low:76180kB high:96192kB active_anon:3200836kB inactive_anon:33012kB active_file:120816kB inactive_file:852kB unevictable:46132kB writepending:0kB present:20400128kB managed:3691172kB mlocked:46132kB kernel_stack:4816kB pagetables:15672kB bounce:0kB free_pcp:612kB local_pcp:120kB free_cma:0kB
[  578.644601] lowmem_reserve[]: 0 0 0 0 0
[  578.644610] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  578.644641] Node 0 DMA32: 2*4kB (UM) 23*8kB (U) 22*16kB (UM) 18*32kB (UM) 7*64kB (U) 4*128kB (U) 3*256kB (U) 1*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89376kB
[  578.644679] Node 0 Normal: 306*4kB (UME) 109*8kB (UME) 83*16kB (UE) 40*32kB (UE) 9*64kB (UME) 1*128kB (M) 1*256kB (U) 1*512kB (M) 1*1024kB (U) 10*2048kB (UM) 7*4096kB (M) = 56352kB
[  578.644715] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  578.644731] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  578.644747] 38645 total pagecache pages
[  578.644755] 6143894 pages RAM
[  578.644762] 0 pages HighMem/MovableOnly
[  578.644769] 4199606 pages reserved
[  578.644776] 0 pages cma reserved
[  578.844750] 0 pages hwpoisoned
[  578.844760] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  578.844797] [  289]     0   289    26985     2041   208896        0             0 systemd-journal
[  578.844820] [  308]     0   308    30805      566   159744        0             0 qubesdb-daemon
[  578.844843] [  315]     0   315    23522     1955   212992        0         -1000 systemd-udevd
[  578.844865] [  497]     0   497     3042     1221    69632        0             0 haveged
[  578.844884] [  499]     0   499    10243       73   118784        0             0 meminfo-writer
[  578.844905] [  500]    81   500    13255     1172   155648        0          -900 dbus-daemon
[  578.844931] [  501]     0   501    19358     1497   184320        0             0 systemd-logind
[  578.844954] [  510]     0   510    34209      657   180224        0             0 xl
[  578.844974] [  516]     0   516    18947     1185   196608        0             0 qubes-gui
[  578.844997] [  517]     0   517    16536      828   176128        0             0 qrexec-agent
[  578.845030] [  524]     0   524    73994     1311   241664        0             0 su
[  578.845052] [  556]     0   556    52775      526    69632        0             0 agetty
[  578.845072] [  558]     0   558    52863      369    69632        0             0 agetty
[  578.845093] [  560]  1000   560    21961     2072   217088        0             0 systemd
[  578.845112] [  567]  1000   567    34753      607   294912        0             0 (sd-pam)
[  578.845136] [  679]  1000   679    54160      855    77824        0             0 bash
[  578.845161] [  702]  1000   702     3500      288    77824        0             0 xinit
[  578.845186] [  703]  1000   703   310734    24441   696320        0             0 Xorg
[  578.845208] [  719]  1000   719    53597      771    77824        0             0 qubes-session
[  578.845233] [  729]  1000   729    13194     1111   143360        0             0 dbus-daemon
[  578.845258] [  747]  1000   747     7233      118    98304        0             0 ssh-agent
[  578.845278] [  768]  1000   768    16562      563   176128        0             0 qrexec-client-v
[  578.845302] [  784]  1000   784    48107     1270   147456        0             0 dconf-service
[  578.845327] [  789]  1000   789    62744     2928   139264        0             0 icon-sender
[  578.845348] [  791]  1000   791   428389    12341   831488        0             0 gsd-xsettings
[  578.845372] [  792]  1000   792   122405     1532   192512        0             0 gnome-keyring-d
[  578.845397] [  796]  1000   796   120207     1340   188416        0             0 agent
[  578.845417] [  808]  1000   808   438400    13924   909312        0             0 nm-applet
[  578.845439] [  812]  1000   812   128956     2137   397312        0             0 pulseaudio
[  578.845465] [  814]   172   814    47723      829   143360        0             0 rtkit-daemon
[  578.845486] [  823]   998   823   657134     5418   421888        0             0 polkitd
[  578.845505] [  830]  1000   830    16528      101   167936        0             0 qrexec-fork-ser
[  578.845528] [  833]  1000   833    52238      175    69632        0             0 sleep
[  578.845555] [  942]  1000   942    87397     1521   180224        0             0 at-spi-bus-laun
[  578.845581] [  947]  1000   947    13134      954   147456        0             0 dbus-daemon
[  578.845608] [  951]  1000   951    56364     1479   208896        0             0 at-spi2-registr
[  578.845635] [  959]  1000   959   123835     1782   208896        0             0 gvfsd
[  578.845655] [  969]  1000   969    89299     1552   192512        0             0 gvfsd-fuse
[  578.845675] [  984]  1000   984   207199    10232   589824        0             0 gnome-terminal-
[  578.845696] [  991]  1000   991   169492     2887   278528        0             0 xdg-desktop-por
[  578.845718] [  995]  1000   995   157249     1477   192512        0             0 xdg-document-po
[  578.845745] [  998]  1000   998   117667     1234   155648        0             0 xdg-permission-
[  578.845772] [ 1009]  1000  1009   193297     5010   458752        0             0 xdg-desktop-por
[  578.845799] [ 1018]  1000  1018    54289     1064    77824        0             0 bash
[  578.845813] [ 1046]  1000  1046    53923      701    81920        0             0 watch
[  578.845828] [ 1062]  1000  1062    54289     1023    77824        0             0 bash
[  578.845845] [ 1188]  1000  1188    54289     1055    81920        0             0 bash
[  578.845862] [ 1235]     0  1235    80041     1681   278528        0             0 sudo
[  578.845878] [ 1240]     0  1240    53876      267    77824        0             0 dmesg
[  578.845895] [11890]  1000 11890     2000      288    57344        0             0 stress
[  579.044792] [11891]  1000 11891  1775891   548461  4456448        0             0 stress
[  579.044810] [11892]  1000 11892  1775891   550573  4481024        0             0 stress
[  579.044825] [11893]  1000 11893  1775891   394681  3227648        0             0 stress
[  579.044842] [11894]  1000 11894  1775891   396265  3239936        0             0 stress
[  579.044857] Out of memory: Kill process 11891 (stress) score 254 or sacrifice child
[  579.044879] Killed process 11891 (stress) total-vm:7103564kB, anon-rss:2193632kB, file-rss:212kB, shmem-rss:0kB
[  579.125062] oom_reaper: reaped process 11891 (stress), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

@constantoverride commented Aug 31, 2018

Hmm, now I'm having doubts as to whether I tested the right kernel with both patches above! I've noticed I was actually running release 4 of the kernel (which had only le9b.patch) instead of release 5 (which had both patches), which would explain why the output didn't include anything from the malloc watchdog patch.
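A quick sanity check before re-running the test is to confirm which kernel build is actually booted. The helper below is just a sketch: it extracts the build/release number after the first dash of `uname -r` (e.g. "5" from "4.18.5-5.pvops.qubes.x86_64").

```shell
# Extract the release/build number after the first dash of a kernel
# version string, e.g. "4.18.5-5.pvops.qubes.x86_64" -> "5".
kver_release() {
    echo "$1" | cut -d- -f2 | cut -d. -f1
}

echo "booted: $(uname -r) (release $(kver_release "$(uname -r)"))"
```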

So either way, let's retry with both patches applied; output:

$ uname -a
Linux dev01-w-s-f-fdr28 4.18.5-5.pvops.qubes.x86_64 #1 SMP Thu Aug 30 14:06:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 3 --timeout 10s; echo $?
stress: info: [1290] dispatching hogs: 0 cpu, 0 io, 3 vm, 0 hdd
stress: info: [1290] successful run completed in 10s

real	0m10.413s
user	0m11.958s
sys	0m10.992s
0
dmesg
[   21.139438] audit: type=1131 audit(1535713846.006:71): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-sync-time comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   44.063549] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   44.063581] MemAlloc: kswapd0(108) flags=0xa20840 switches=26
[   44.063600] kswapd0         R  running task        0   108      2 0x80000000
[   44.063623] Call Trace:
[   44.063635]  ? shrink_node+0x171/0x4b0
[   44.063650]  ? balance_pgdat+0x238/0x3e0
[   44.063666]  ? kswapd+0x1b5/0x590
[   44.063680]  ? remove_wait_queue+0x70/0x70
[   44.063697]  ? kthread+0x105/0x140
[   44.063708]  ? balance_pgdat+0x3e0/0x3e0
[   44.063719]  ? kthread_stop+0x100/0x100
[   44.063733]  ? ret_from_fork+0x35/0x40
[   44.063763] MemAlloc: stress(1292) flags=0x404040 switches=6 seq=3640 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1063 uninterruptible
[   44.063799] stress          D    0  1292   1290 0x00000080
[   44.063814] Call Trace:
[   44.063825]  ? __schedule+0x3f3/0x8c0
[   44.063838]  ? __switch_to_asm+0x40/0x70
[   44.063850]  ? __switch_to_asm+0x34/0x70
[   44.063863]  schedule+0x36/0x80
[   44.063886]  schedule_timeout+0x29b/0x4d0
[   44.063897]  ? __switch_to+0x13f/0x4d0
[   44.063909]  ? __switch_to_asm+0x40/0x70
[   44.063932]  ? finish_task_switch+0x75/0x2a0
[   44.063948]  wait_for_completion+0x121/0x190
[   44.063963]  ? wake_up_q+0x80/0x80
[   44.063978]  flush_work+0x18f/0x200
[   44.063995]  ? rcu_free_pwq+0x20/0x20
[   44.064019]  __alloc_pages_slowpath+0x766/0x1590
[   44.064039]  __alloc_pages_nodemask+0x302/0x3c0
[   44.064060]  alloc_pages_vma+0xac/0x4f0
[   44.064073]  do_anonymous_page+0x105/0x3f0
[   44.064087]  __handle_mm_fault+0xbc9/0xf10
[   44.064099]  ? __switch_to_asm+0x34/0x70
[   44.064112]  handle_mm_fault+0x102/0x2c0
[   44.064122]  __do_page_fault+0x294/0x540
[   44.064133]  do_page_fault+0x38/0x120
[   44.064154]  ? page_fault+0x8/0x30
[   44.064165]  page_fault+0x1e/0x30
[   44.064184] RIP: 0033:0x565c568c6dd0
[   44.064197] Code: Bad RIP value.
[   44.064215] RSP: 002b:00007fff0029ee00 EFLAGS: 00010206
[   44.064229] RAX: 00000000658ce000 RBX: 00007f79812a6010 RCX: 00007f79812a6010
[   44.064248] RDX: 0000000000000001 RSI: 000000013f9a9000 RDI: 0000000000000000
[   44.064270] RBP: 0000565c568c7bb4 R08: 00000000ffffffff R09: 0000000000000000
[   44.064291] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   44.064311] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013f9a8000
[   44.064332] Mem-Info:
[   44.064343] active_anon:1671990 inactive_anon:8765 isolated_anon:0
                active_file:91268 inactive_file:0 isolated_file:0
                unevictable:1339 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8197 slab_unreclaimable:12316
                mapped:22809 shmem:8901 pagetables:5500 bounce:0
                free:40204 free_pcp:1519 free_cma:0
[   44.064428] Node 0 active_anon:6687960kB inactive_anon:35060kB active_file:365072kB inactive_file:0kB unevictable:5356kB isolated(anon):0kB isolated(file):0kB mapped:91236kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[   44.064491] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   44.064565] lowmem_reserve[]: 0 3956 23499 23499 23499
[   44.064580] Node 0 DMA32 free:89264kB min:11368kB low:15416kB high:19464kB active_anon:3971840kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7612kB bounce:0kB free_pcp:360kB local_pcp:4kB free_cma:0kB
[   44.064648] lowmem_reserve[]: 0 0 19543 19543 19543
[   44.064665] Node 0 Normal free:55648kB min:56168kB low:76180kB high:96192kB active_anon:2715684kB inactive_anon:35060kB active_file:364748kB inactive_file:0kB unevictable:5356kB writepending:0kB present:20400128kB managed:3420644kB mlocked:5356kB kernel_stack:5008kB pagetables:14388kB bounce:0kB free_pcp:5716kB local_pcp:472kB free_cma:0kB
[   44.064736] lowmem_reserve[]: 0 0 0 0 0
[   44.064745] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   44.064779] Node 0 DMA32: 2*4kB (UM) 14*8kB (UM) 13*16kB (U) 11*32kB (U) 5*64kB (U) 1*128kB (M) 2*256kB (UM) 0*512kB 2*1024kB (UM) 2*2048kB (UM) 20*4096kB (M) = 89704kB
[   44.064833] Node 0 Normal: 1000*4kB (UME) 345*8kB (UE) 153*16kB (UE) 216*32kB (UE) 154*64kB (UME) 122*128kB (UME) 47*256kB (UME) 4*512kB (UM) 0*1024kB 1*2048kB (U) 0*4096kB = 57720kB
[   44.263766] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   44.263797] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   44.263816] 100233 total pagecache pages
[   44.263828] 6143894 pages RAM
[   44.263840] 0 pages HighMem/MovableOnly
[   44.263849] 4200151 pages reserved
[   44.263861] 0 pages cma reserved
[   44.263872] 0 pages hwpoisoned
[   44.263883] Showing busy workqueues and worker pools:
[   44.263896] workqueue events: flags=0x0
[   44.263910]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   44.263932]     in-flight: 228:balloon_process
[   44.263960]     pending: balloon_process
[   44.263974]   pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256
[   44.263996]     in-flight: 129:slab_caches_to_rcu_destroy_workfn
[   44.264033] workqueue events_power_efficient: flags=0x80
[   44.264054]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[   44.264072]     pending: gc_worker [nf_conntrack]
[   44.264100] workqueue mm_percpu_wq: flags=0x8
[   44.264115]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   44.264132]     pending: vmstat_update, drain_local_pages_wq BAR(1292)
[   44.264181] pool 6: cpus=3 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 1312 31
[   44.264199] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=1s workers=4 idle: 380 61 277
[   44.264219] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   45.279086] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   45.279105] MemAlloc: kswapd0(108) flags=0xa20840 switches=51
[   45.279117] kswapd0         R  running task        0   108      2 0x80000000
[   45.279137] Call Trace:
[   45.279148]  ? balance_pgdat+0x238/0x3e0
[   45.279159]  ? kswapd+0x1b5/0x590
[   45.279170]  ? remove_wait_queue+0x70/0x70
[   45.279178]  ? kthread+0x105/0x140
[   45.279189]  ? balance_pgdat+0x3e0/0x3e0
[   45.279208]  ? kthread_stop+0x100/0x100
[   45.279217]  ? ret_from_fork+0x35/0x40
[   45.279435] MemAlloc: stress(1292) flags=0x404040 switches=6 seq=3640 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=2279 uninterruptible
[   45.279463] stress          D    0  1292   1290 0x00000080
[   45.279475] Call Trace:
[   45.279485]  ? __schedule+0x3f3/0x8c0
[   45.279498]  ? __switch_to_asm+0x40/0x70
[   45.279508]  ? __switch_to_asm+0x34/0x70
[   45.279518]  schedule+0x36/0x80
[   45.279529]  schedule_timeout+0x29b/0x4d0
[   45.279538]  ? __switch_to+0x13f/0x4d0
[   45.279549]  ? __switch_to_asm+0x40/0x70
[   45.279560]  ? finish_task_switch+0x75/0x2a0
[   45.279572]  wait_for_completion+0x121/0x190
[   45.279584]  ? wake_up_q+0x80/0x80
[   45.279593]  flush_work+0x18f/0x200
[   45.279602]  ? rcu_free_pwq+0x20/0x20
[   45.279611]  __alloc_pages_slowpath+0x766/0x1590
[   45.279625]  __alloc_pages_nodemask+0x302/0x3c0
[   45.279636]  alloc_pages_vma+0xac/0x4f0
[   45.279644]  do_anonymous_page+0x105/0x3f0
[   45.279653]  __handle_mm_fault+0xbc9/0xf10
[   45.279662]  ? __switch_to_asm+0x34/0x70
[   45.279670]  handle_mm_fault+0x102/0x2c0
[   45.279680]  __do_page_fault+0x294/0x540
[   45.279690]  do_page_fault+0x38/0x120
[   45.279698]  ? page_fault+0x8/0x30
[   45.279706]  page_fault+0x1e/0x30
[   45.279716] RIP: 0033:0x565c568c6dd0
[   45.279723] Code: Bad RIP value.
[   45.279732] RSP: 002b:00007fff0029ee00 EFLAGS: 00010206
[   45.279741] RAX: 00000000658ce000 RBX: 00007f79812a6010 RCX: 00007f79812a6010
[   45.279757] RDX: 0000000000000001 RSI: 000000013f9a9000 RDI: 0000000000000000
[   45.279773] RBP: 0000565c568c7bb4 R08: 00000000ffffffff R09: 0000000000000000
[   45.279788] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   45.279802] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013f9a8000
[   45.279817] Mem-Info:
[   45.279823] active_anon:2189802 inactive_anon:8765 isolated_anon:0
                active_file:91247 inactive_file:0 isolated_file:0
                unevictable:1670 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8100 slab_unreclaimable:12345
                mapped:22833 shmem:8901 pagetables:6374 bounce:0
                free:40271 free_pcp:1673 free_cma:0
[   45.279884] Node 0 active_anon:8759208kB inactive_anon:35060kB active_file:364988kB inactive_file:0kB unevictable:6680kB isolated(anon):0kB isolated(file):0kB mapped:91332kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[   45.279920] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   45.279956] lowmem_reserve[]: 0 3956 23499 23499 23499
[   45.279966] Node 0 DMA32 free:89332kB min:11368kB low:15416kB high:19464kB active_anon:3972232kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7716kB bounce:0kB free_pcp:656kB local_pcp:4kB free_cma:0kB
[   45.280021] lowmem_reserve[]: 0 0 19543 19543 19543
[   45.280035] Node 0 Normal free:55848kB min:56168kB low:76180kB high:96192kB active_anon:4786468kB inactive_anon:35060kB active_file:364776kB inactive_file:0kB unevictable:6680kB writepending:0kB present:20400128kB managed:5497316kB mlocked:6680kB kernel_stack:5152kB pagetables:17780kB bounce:0kB free_pcp:6036kB local_pcp:448kB free_cma:0kB
[   45.280079] lowmem_reserve[]: 0 0 0 0 0
[   45.280087] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   45.280111] Node 0 DMA32: 1*4kB (M) 0*8kB 12*16kB (U) 12*32kB (UM) 5*64kB (U) 1*128kB (M) 1*256kB (U) 1*512kB (M) 1*1024kB (U) 2*2048kB (UM) 20*4096kB (M) = 88836kB
[   45.280136] Node 0 Normal: 1029*4kB (UE) 378*8kB (UME) 137*16kB (UME) 185*32kB (UME) 152*64kB (UE) 123*128kB (UE) 46*256kB (UE) 4*512kB (UM) 1*1024kB (M) 0*2048kB 0*4096kB = 55572kB
[   45.479175] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   45.479195] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   45.479213] 100242 total pagecache pages
[   45.479220] 6143894 pages RAM
[   45.479227] 0 pages HighMem/MovableOnly
[   45.479234] 3650107 pages reserved
[   45.479241] 0 pages cma reserved
[   45.479248] 0 pages hwpoisoned
[   45.479256] Showing busy workqueues and worker pools:
[   45.479268] workqueue events: flags=0x0
[   45.479278]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   45.479295]     in-flight: 228:balloon_process
[   45.479323]     pending: balloon_process
[   45.479336] workqueue events_power_efficient: flags=0x80
[   45.479347]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[   45.479363]     pending: gc_worker [nf_conntrack]
[   45.479383] workqueue mm_percpu_wq: flags=0x8
[   45.479394]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   45.479409]     pending: vmstat_update, drain_local_pages_wq BAR(1292)
[   45.479441] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=2s workers=4 idle: 380 61 277
[   45.479464] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   46.495088] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   46.495130] MemAlloc: kswapd0(108) flags=0xa20840 switches=99
[   46.495144] kswapd0         S    0   108      2 0x80000000
[   46.495159] Call Trace:
[   46.495171]  ? __schedule+0x3f3/0x8c0
[   46.495182]  schedule+0x36/0x80
[   46.495194]  schedule_timeout+0x22b/0x4d0
[   46.495207]  ? __bpf_trace_tick_stop+0x10/0x10
[   46.495221]  kswapd+0x2fe/0x590
[   46.495231]  ? remove_wait_queue+0x70/0x70
[   46.495241]  kthread+0x105/0x140
[   46.495251]  ? balance_pgdat+0x3e0/0x3e0
[   46.495260]  ? kthread_stop+0x100/0x100
[   46.495272]  ret_from_fork+0x35/0x40
[   46.495299] MemAlloc: stress(1292) flags=0x404040 switches=6 seq=3640 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=3495 uninterruptible
[   46.495326] stress          D    0  1292   1290 0x00000080
[   46.495337] Call Trace:
[   46.495352]  ? __schedule+0x3f3/0x8c0
[   46.495361]  ? __switch_to_asm+0x40/0x70
[   46.495372]  ? __switch_to_asm+0x34/0x70
[   46.495384]  schedule+0x36/0x80
[   46.495397]  schedule_timeout+0x29b/0x4d0
[   46.495409]  ? __switch_to+0x13f/0x4d0
[   46.495420]  ? __switch_to_asm+0x40/0x70
[   46.495432]  ? finish_task_switch+0x75/0x2a0
[   46.495449]  wait_for_completion+0x121/0x190
[   46.495464]  ? wake_up_q+0x80/0x80
[   46.495477]  flush_work+0x18f/0x200
[   46.495490]  ? rcu_free_pwq+0x20/0x20
[   46.495502]  __alloc_pages_slowpath+0x766/0x1590
[   46.495518]  __alloc_pages_nodemask+0x302/0x3c0
[   46.495534]  alloc_pages_vma+0xac/0x4f0
[   46.495547]  do_anonymous_page+0x105/0x3f0
[   46.495561]  __handle_mm_fault+0xbc9/0xf10
[   46.495574]  ? __switch_to_asm+0x34/0x70
[   46.495588]  handle_mm_fault+0x102/0x2c0
[   46.495601]  __do_page_fault+0x294/0x540
[   46.495614]  do_page_fault+0x38/0x120
[   46.495628]  ? page_fault+0x8/0x30
[   46.495640]  page_fault+0x1e/0x30
[   46.495657] RIP: 0033:0x565c568c6dd0
[   46.495668] Code: Bad RIP value.
[   46.495684] RSP: 002b:00007fff0029ee00 EFLAGS: 00010206
[   46.495696] RAX: 00000000658ce000 RBX: 00007f79812a6010 RCX: 00007f79812a6010
[   46.495713] RDX: 0000000000000001 RSI: 000000013f9a9000 RDI: 0000000000000000
[   46.495729] RBP: 0000565c568c7bb4 R08: 00000000ffffffff R09: 0000000000000000
[   46.495747] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   46.495768] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013f9a8000
[   46.495790] Mem-Info:
[   46.495805] active_anon:2798956 inactive_anon:8765 isolated_anon:0
                active_file:91275 inactive_file:0 isolated_file:0
                unevictable:1670 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8132 slab_unreclaimable:12398
                mapped:22844 shmem:8901 pagetables:7706 bounce:0
                free:40180 free_pcp:1615 free_cma:0
[   46.495881] Node 0 active_anon:11195824kB inactive_anon:35060kB active_file:365100kB inactive_file:0kB unevictable:6680kB isolated(anon):0kB isolated(file):0kB mapped:91376kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[   46.495941] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   46.495983] lowmem_reserve[]: 0 3956 23499 23499 23499
[   46.495997] Node 0 DMA32 free:88836kB min:11368kB low:15416kB high:19464kB active_anon:3972776kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:656kB local_pcp:4kB free_cma:0kB
[   46.496078] lowmem_reserve[]: 0 0 19543 19543 19543
[   46.496093] Node 0 Normal free:57880kB min:56168kB low:76180kB high:96192kB active_anon:7222828kB inactive_anon:35060kB active_file:365364kB inactive_file:0kB unevictable:6680kB writepending:0kB present:20400128kB managed:7940580kB mlocked:6680kB kernel_stack:4992kB pagetables:23108kB bounce:0kB free_pcp:5668kB local_pcp:552kB free_cma:0kB
[   46.696020] lowmem_reserve[]: 0 0 0 0 0
[   46.696037] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   46.696074] Node 0 DMA32: 1*4kB (M) 0*8kB 12*16kB (U) 12*32kB (UM) 5*64kB (U) 1*128kB (M) 1*256kB (U) 1*512kB (M) 1*1024kB (U) 2*2048kB (UM) 20*4096kB (M) = 88836kB
[   46.696111] Node 0 Normal: 1036*4kB (ME) 354*8kB (UME) 134*16kB (ME) 170*32kB (E) 154*64kB (UME) 132*128kB (UE) 51*256kB (UE) 1*512kB (M) 1*1024kB (M) 0*2048kB 0*4096kB = 55904kB
[   46.696149] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   46.696168] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   46.696186] 100320 total pagecache pages
[   46.696195] 6143894 pages RAM
[   46.696206] 0 pages HighMem/MovableOnly
[   46.696215] 3017943 pages reserved
[   46.696224] 0 pages cma reserved
[   46.696233] 0 pages hwpoisoned
[   46.696243] Showing busy workqueues and worker pools:
[   46.696257] workqueue events: flags=0x0
[   46.696268]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   46.696285]     in-flight: 228:balloon_process
[   46.696299]     pending: balloon_process
[   46.696312]   pwq 10: cpus=5 node=0 flags=0x0 nice=0 active=1/256
[   46.696327]     pending: vmpressure_work_fn
[   46.696341] workqueue events_power_efficient: flags=0x80
[   46.696353]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[   46.696369]     pending: gc_worker [nf_conntrack]
[   46.696388] workqueue mm_percpu_wq: flags=0x8
[   46.696399]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   46.696413]     pending: vmstat_update, drain_local_pages_wq BAR(1292)
[   46.696438] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=3s workers=4 idle: 380 61 277
[   46.696462] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   47.711144] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   47.711183] MemAlloc: kswapd0(108) flags=0xa20840 switches=111
[   47.711199] kswapd0         S    0   108      2 0x80000000
[   47.711212] Call Trace:
[   47.711244]  ? __schedule+0x3f3/0x8c0
[   47.711255]  schedule+0x36/0x80
[   47.711268]  kswapd+0x584/0x590
[   47.711278]  ? remove_wait_queue+0x70/0x70
[   47.711289]  kthread+0x105/0x140
[   47.711301]  ? balance_pgdat+0x3e0/0x3e0
[   47.711312]  ? kthread_stop+0x100/0x100
[   47.711322]  ret_from_fork+0x35/0x40
[   47.711360] MemAlloc: stress(1292) flags=0x404040 switches=6 seq=3640 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=4711 uninterruptible
[   47.711390] stress          D    0  1292   1290 0x00000080
[   47.711401] Call Trace:
[   47.711410]  ? __schedule+0x3f3/0x8c0
[   47.711417]  ? __switch_to_asm+0x40/0x70
[   47.711426]  ? __switch_to_asm+0x34/0x70
[   47.711434]  schedule+0x36/0x80
[   47.711442]  schedule_timeout+0x29b/0x4d0
[   47.711466]  ? __switch_to+0x13f/0x4d0
[   47.711475]  ? __switch_to_asm+0x40/0x70
[   47.711487]  ? finish_task_switch+0x75/0x2a0
[   47.711498]  wait_for_completion+0x121/0x190
[   47.711514]  ? wake_up_q+0x80/0x80
[   47.711525]  flush_work+0x18f/0x200
[   47.711535]  ? rcu_free_pwq+0x20/0x20
[   47.711545]  __alloc_pages_slowpath+0x766/0x1590
[   47.711557]  __alloc_pages_nodemask+0x302/0x3c0
[   47.711569]  alloc_pages_vma+0xac/0x4f0
[   47.711580]  do_anonymous_page+0x105/0x3f0
[   47.711589]  __handle_mm_fault+0xbc9/0xf10
[   47.711599]  ? __switch_to_asm+0x34/0x70
[   47.711610]  handle_mm_fault+0x102/0x2c0
[   47.711620]  __do_page_fault+0x294/0x540
[   47.711630]  do_page_fault+0x38/0x120
[   47.711639]  ? page_fault+0x8/0x30
[   47.711649]  page_fault+0x1e/0x30
[   47.711660] RIP: 0033:0x565c568c6dd0
[   47.711668] Code: Bad RIP value.
[   47.711687] RSP: 002b:00007fff0029ee00 EFLAGS: 00010206
[   47.711703] RAX: 00000000658ce000 RBX: 00007f79812a6010 RCX: 00007f79812a6010
[   47.711719] RDX: 0000000000000001 RSI: 000000013f9a9000 RDI: 0000000000000000
[   47.711737] RBP: 0000565c568c7bb4 R08: 00000000ffffffff R09: 0000000000000000
[   47.711756] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   47.711773] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013f9a8000
[   47.711793] Mem-Info:
[   47.711802] active_anon:3067646 inactive_anon:8765 isolated_anon:0
                active_file:91353 inactive_file:76 isolated_file:0
                unevictable:1670 dirty:4 writeback:0 unstable:0
                slab_reclaimable:8137 slab_unreclaimable:12410
                mapped:22862 shmem:8901 pagetables:8273 bounce:0
                free:470004 free_pcp:1681 free_cma:0
[   47.711869] Node 0 active_anon:12270584kB inactive_anon:35060kB active_file:365412kB inactive_file:304kB unevictable:6680kB isolated(anon):0kB isolated(file):0kB mapped:91448kB dirty:16kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[   47.711930] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   47.711981] lowmem_reserve[]: 0 3956 23499 23499 23499
[   47.711994] Node 0 DMA32 free:88836kB min:11368kB low:15416kB high:19464kB active_anon:3972776kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:656kB local_pcp:4kB free_cma:0kB
[   47.712069] lowmem_reserve[]: 0 0 19543 19543 19543
[   47.712083] Node 0 Normal free:1775276kB min:56168kB low:76180kB high:96192kB active_anon:8297484kB inactive_anon:35060kB active_file:365384kB inactive_file:272kB unevictable:6680kB writepending:16kB present:20400128kB managed:10736100kB mlocked:6680kB kernel_stack:5024kB pagetables:25376kB bounce:0kB free_pcp:6068kB local_pcp:276kB free_cma:0kB
[   47.712154] lowmem_reserve[]: 0 0 0 0 0
[   47.712164] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   47.712200] Node 0 DMA32: 1*4kB (M) 0*8kB 12*16kB (U) 12*32kB (UM) 5*64kB (U) 1*128kB (M) 1*256kB (U) 1*512kB (M) 1*1024kB (U) 2*2048kB (UM) 20*4096kB (M) = 88836kB
[   47.911175] Node 0 Normal: 1103*4kB (UME) 407*8kB (UME) 185*16kB (UME) 222*32kB (UE) 194*64kB (UME) 176*128kB (UE) 95*256kB (UME) 44*512kB (UM) 33*1024kB (UM) 32*2048kB (UM) 500*4096kB (U) = 2246852kB
[   47.911217] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   47.911232] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   47.911248] 100366 total pagecache pages
[   47.911269] 6143894 pages RAM
[   47.911277] 0 pages HighMem/MovableOnly
[   47.911284] 2320599 pages reserved
[   47.911292] 0 pages cma reserved
[   47.911300] 0 pages hwpoisoned
[   47.911309] Showing busy workqueues and worker pools:
[   47.911320] workqueue events: flags=0x0
[   47.911328]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   47.911342]     in-flight: 228:balloon_process
[   47.911356]     pending: balloon_process
[   47.911370] workqueue events_power_efficient: flags=0x80
[   47.911381]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[   47.911394]     pending: gc_worker [nf_conntrack]
[   47.911412] workqueue mm_percpu_wq: flags=0x8
[   47.911421]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   47.911434]     pending: vmstat_update, drain_local_pages_wq BAR(1292)
[   47.911463] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=4s workers=4 idle: 380 61 277
[   47.911480] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   48.927136] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   48.927157] MemAlloc: kswapd0(108) flags=0xa20840 switches=111
[   48.927167] kswapd0         S    0   108      2 0x80000000
[   48.927178] Call Trace:
[   48.927188]  ? __schedule+0x3f3/0x8c0
[   48.927195]  schedule+0x36/0x80
[   48.927203]  kswapd+0x584/0x590
[   48.927211]  ? remove_wait_queue+0x70/0x70
[   48.927217]  kthread+0x105/0x140
[   48.927224]  ? balance_pgdat+0x3e0/0x3e0
[   48.927233]  ? kthread_stop+0x100/0x100
[   48.927242]  ret_from_fork+0x35/0x40
[   48.927258] MemAlloc: stress(1292) flags=0x404040 switches=6 seq=3640 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=5927 uninterruptible
[   48.927275] stress          D    0  1292   1290 0x00000080
[   48.927282] Call Trace:
[   48.927287]  ? __schedule+0x3f3/0x8c0
[   48.927293]  ? __switch_to_asm+0x40/0x70
[   48.927299]  ? __switch_to_asm+0x34/0x70
[   48.927306]  schedule+0x36/0x80
[   48.927313]  schedule_timeout+0x29b/0x4d0
[   48.927322]  ? __switch_to+0x13f/0x4d0
[   48.927328]  ? __switch_to_asm+0x40/0x70
[   48.927336]  ? finish_task_switch+0x75/0x2a0
[   48.927346]  wait_for_completion+0x121/0x190
[   48.927355]  ? wake_up_q+0x80/0x80
[   48.927363]  flush_work+0x18f/0x200
[   48.927371]  ? rcu_free_pwq+0x20/0x20
[   48.927378]  __alloc_pages_slowpath+0x766/0x1590
[   48.927388]  __alloc_pages_nodemask+0x302/0x3c0
[   48.927398]  alloc_pages_vma+0xac/0x4f0
[   48.927406]  do_anonymous_page+0x105/0x3f0
[   48.927413]  __handle_mm_fault+0xbc9/0xf10
[   48.927420]  ? __switch_to_asm+0x34/0x70
[   48.927427]  handle_mm_fault+0x102/0x2c0
[   48.927435]  __do_page_fault+0x294/0x540
[   48.927442]  do_page_fault+0x38/0x120
[   48.927449]  ? page_fault+0x8/0x30
[   48.927456]  page_fault+0x1e/0x30
[   48.927469] RIP: 0033:0x565c568c6dd0
[   48.927476] Code: Bad RIP value.
[   48.927486] RSP: 002b:00007fff0029ee00 EFLAGS: 00010206
[   48.927495] RAX: 00000000658ce000 RBX: 00007f79812a6010 RCX: 00007f79812a6010
[   48.927512] RDX: 0000000000000001 RSI: 000000013f9a9000 RDI: 0000000000000000
[   48.927524] RBP: 0000565c568c7bb4 R08: 00000000ffffffff R09: 0000000000000000
[   48.927536] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   48.927547] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013f9a8000
[   48.927559] Mem-Info:
[   48.927566] active_anon:3067632 inactive_anon:8765 isolated_anon:0
                active_file:91387 inactive_file:115 isolated_file:0
                unevictable:1670 dirty:4 writeback:0 unstable:0
                slab_reclaimable:8137 slab_unreclaimable:12426
                mapped:22944 shmem:8901 pagetables:8251 bounce:0
                free:1146929 free_pcp:1584 free_cma:0
[   48.927616] Node 0 active_anon:12270528kB inactive_anon:35060kB active_file:365548kB inactive_file:460kB unevictable:6680kB isolated(anon):0kB isolated(file):0kB mapped:91776kB dirty:16kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[   48.927654] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   48.927689] lowmem_reserve[]: 0 3956 23499 23499 23499
[   48.927698] Node 0 DMA32 free:88836kB min:11368kB low:15416kB high:19464kB active_anon:3972776kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:656kB local_pcp:4kB free_cma:0kB
[   48.927738] lowmem_reserve[]: 0 0 19543 19543 19543
[   48.927746] Node 0 Normal free:4482976kB min:56168kB low:76180kB high:96192kB active_anon:8297752kB inactive_anon:35060kB active_file:365520kB inactive_file:460kB unevictable:6680kB writepending:16kB present:20400128kB managed:13443556kB mlocked:6680kB kernel_stack:4992kB pagetables:25288kB bounce:0kB free_pcp:5680kB local_pcp:276kB free_cma:0kB
[   48.927786] lowmem_reserve[]: 0 0 0 0 0
[   48.927794] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   48.927817] Node 0 DMA32: 1*4kB (M) 0*8kB 12*16kB (U) 12*32kB (UM) 5*64kB (U) 1*128kB (M) 1*256kB (U) 1*512kB (M) 1*1024kB (U) 2*2048kB (UM) 20*4096kB (M) = 88836kB
[   48.927843] Node 0 Normal: 1150*4kB (UME) 446*8kB (UME) 214*16kB (UME) 247*32kB (UE) 225*64kB (UME) 204*128kB (UE) 123*256kB (UME) 74*512kB (UM) 68*1024kB (UM) 66*2048kB (UM) 1013*4096kB (U) = 4483432kB
[   48.927873] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   49.127960] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   49.127985] 100404 total pagecache pages
[   49.127994] 6143894 pages RAM
[   49.128004] 0 pages HighMem/MovableOnly
[   49.128014] 1641687 pages reserved
[   49.128023] 0 pages cma reserved
[   49.128032] 0 pages hwpoisoned
[   49.128043] Showing busy workqueues and worker pools:
[   49.128068] workqueue events: flags=0x0
[   49.128077]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   49.128095]     in-flight: 228:balloon_process
[   49.128115]     pending: balloon_process
[   49.128130] workqueue events_unbound: flags=0x2
[   49.128142]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[   49.128157]     pending: flush_to_ldisc
[   49.128170] workqueue events_power_efficient: flags=0x80
[   49.128185]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[   49.128199]     pending: gc_worker [nf_conntrack]
[   49.128240] workqueue mm_percpu_wq: flags=0x8
[   49.128253]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   49.128269]     pending: vmstat_update, drain_local_pages_wq BAR(1292)
[   49.128314] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=6s workers=4 idle: 380 61 277
[   49.128338] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   50.143173] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[   50.143196] MemAlloc: kswapd0(108) flags=0xa20840 switches=111
[   50.143209] kswapd0         S    0   108      2 0x80000000
[   50.143219] Call Trace:
[   50.143233]  ? __schedule+0x3f3/0x8c0
[   50.143242]  schedule+0x36/0x80
[   50.143251]  kswapd+0x584/0x590
[   50.143260]  ? remove_wait_queue+0x70/0x70
[   50.143268]  kthread+0x105/0x140
[   50.143277]  ? balance_pgdat+0x3e0/0x3e0
[   50.143284]  ? kthread_stop+0x100/0x100
[   50.143292]  ret_from_fork+0x35/0x40
[   50.143312] MemAlloc: stress(1292) flags=0x404040 switches=6 seq=3640 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=7143 uninterruptible
[   50.143338] stress          D    0  1292   1290 0x00000080
[   50.143349] Call Trace:
[   50.143355]  ? __schedule+0x3f3/0x8c0
[   50.143363]  ? __switch_to_asm+0x40/0x70
[   50.143372]  ? __switch_to_asm+0x34/0x70
[   50.143381]  schedule+0x36/0x80
[   50.143388]  schedule_timeout+0x29b/0x4d0
[   50.143398]  ? __switch_to+0x13f/0x4d0
[   50.143405]  ? __switch_to_asm+0x40/0x70
[   50.143414]  ? finish_task_switch+0x75/0x2a0
[   50.143426]  wait_for_completion+0x121/0x190
[   50.143437]  ? wake_up_q+0x80/0x80
[   50.143446]  flush_work+0x18f/0x200
[   50.143455]  ? rcu_free_pwq+0x20/0x20
[   50.143465]  __alloc_pages_slowpath+0x766/0x1590
[   50.143477]  __alloc_pages_nodemask+0x302/0x3c0
[   50.143487]  alloc_pages_vma+0xac/0x4f0
[   50.143496]  do_anonymous_page+0x105/0x3f0
[   50.143505]  __handle_mm_fault+0xbc9/0xf10
[   50.143514]  ? __switch_to_asm+0x34/0x70
[   50.143522]  handle_mm_fault+0x102/0x2c0
[   50.143532]  __do_page_fault+0x294/0x540
[   50.143541]  do_page_fault+0x38/0x120
[   50.143549]  ? page_fault+0x8/0x30
[   50.143557]  page_fault+0x1e/0x30
[   50.143564] RIP: 0033:0x565c568c6dd0
[   50.143571] Code: Bad RIP value.
[   50.143583] RSP: 002b:00007fff0029ee00 EFLAGS: 00010206
[   50.143594] RAX: 00000000658ce000 RBX: 00007f79812a6010 RCX: 00007f79812a6010
[   50.143607] RDX: 0000000000000001 RSI: 000000013f9a9000 RDI: 0000000000000000
[   50.143619] RBP: 0000565c568c7bb4 R08: 00000000ffffffff R09: 0000000000000000
[   50.143634] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   50.143648] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013f9a8000
[   50.143663] Mem-Info:
[   50.143674] active_anon:3067697 inactive_anon:8765 isolated_anon:0
                active_file:91387 inactive_file:118 isolated_file:0
                unevictable:1670 dirty:4 writeback:0 unstable:0
                slab_reclaimable:8137 slab_unreclaimable:12436
                mapped:22968 shmem:8901 pagetables:8265 bounce:0
                free:1883641 free_pcp:1670 free_cma:0
[   50.143749] Node 0 active_anon:12270788kB inactive_anon:35060kB active_file:365548kB inactive_file:472kB unevictable:6680kB isolated(anon):0kB isolated(file):0kB mapped:91872kB dirty:16kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[   50.143809] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   50.143856] lowmem_reserve[]: 0 3956 23499 23499 23499
[   50.143868] Node 0 DMA32 free:88836kB min:11368kB low:15416kB high:19464kB active_anon:3972776kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:656kB local_pcp:4kB free_cma:0kB
[   50.143911] lowmem_reserve[]: 0 0 19543 19543 19543
[   50.143921] Node 0 Normal free:7429824kB min:56168kB low:76180kB high:96192kB active_anon:8298012kB inactive_anon:35060kB active_file:365520kB inactive_file:472kB unevictable:6680kB writepending:16kB present:20400128kB managed:16390628kB mlocked:6680kB kernel_stack:5008kB pagetables:25344kB bounce:0kB free_pcp:6024kB local_pcp:276kB free_cma:0kB
[   50.143967] lowmem_reserve[]: 0 0 0 0 0
[   50.143977] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[   50.144034] Node 0 DMA32: 1*4kB (M) 0*8kB 12*16kB (U) 12*32kB (UM) 5*64kB (U) 1*128kB (M) 1*256kB (U) 1*512kB (M) 1*1024kB (U) 2*2048kB (UM) 20*4096kB (M) = 88836kB
[   50.144059] Node 0 Normal: 2860*4kB (UME) 1682*8kB (UME) 950*16kB (UME) 909*32kB (UME) 743*64kB (UE) 648*128kB (UE) 482*256kB (UME) 351*512kB (UM) 260*1024kB (UM) 159*2048kB (UM) 1547*4096kB (U) = 7431168kB
[   50.144127] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   50.144149] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   50.144170] 100404 total pagecache pages
[   50.144180] 6143894 pages RAM
[   50.144191] 0 pages HighMem/MovableOnly
[   50.144201] 1024215 pages reserved
[   50.144210] 0 pages cma reserved
[   50.144220] 0 pages hwpoisoned
[   50.144229] Showing busy workqueues and worker pools:
[   50.144246] workqueue events: flags=0x0
[   50.144256]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   50.343138]     in-flight: 228:balloon_process
[   50.343155]     pending: balloon_process
[   50.343201] workqueue events_power_efficient: flags=0x80
[   50.343214]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[   50.343228]     pending: gc_worker [nf_conntrack]
[   50.343255] workqueue mm_percpu_wq: flags=0x8
[   50.343266]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[   50.343282]     pending: vmstat_update, drain_local_pages_wq BAR(1292)
[   50.343324] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=7s workers=4 idle: 380 61 277
[   50.343352] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  308.065439] audit: type=1130 audit(1535714132.932:72): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-update-check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 3 --timeout 10s; echo $?
stress: info: [9478] dispatching hogs: 0 cpu, 0 io, 3 vm, 0 hdd
stress: FAIL: [9478] (415) <-- worker 9479 got signal 9
stress: WARN: [9478] (417) now reaping child worker processes
stress: FAIL: [9478] (451) failed run completed in 1s

real	0m0.852s
user	0m0.285s
sys	0m2.058s
1
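For reference, the `--vm-bytes` argument in the command above asks each `stress` worker to allocate slightly more than the memory currently available: the awk expression reads the `MemAvailable` field (in kB) from `/proc/meminfo` and adds a 4000 kB margin, so the workers are guaranteed to exhaust RAM and trigger the OOM killer. A minimal sketch of that computation, shown here with a hard-coded sample line instead of the live file (the `8049148` value is made up):

```shell
# The stress invocation computes its --vm-bytes value like this:
# take MemAvailable (in kB) from /proc/meminfo and add a 4000 kB overshoot,
# so the requested allocation exceeds what is actually available.
echo "MemAvailable:    8049148 kB" | awk '/MemAvailable/{printf "%d\n", $2 + 4000;}'
# prints 8053148
```

On a live system the same expression is fed `/proc/meminfo` directly, as in the transcript above.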
dmesg
[  308.065583] audit: type=1131 audit(1535714132.932:73): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-update-check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[  472.945470] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  472.945489] stress cpuset=/ mems_allowed=0
[  472.945498] CPU: 10 PID: 9481 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[  472.945510] Call Trace:
[  472.945518]  dump_stack+0x63/0x83
[  472.945526]  dump_header+0x6e/0x285
[  472.945532]  oom_kill_process+0x23c/0x450
[  472.945539]  out_of_memory+0x147/0x590
[  472.945545]  __alloc_pages_slowpath+0x134c/0x1590
[  472.945555]  __alloc_pages_nodemask+0x302/0x3c0
[  472.945564]  alloc_pages_vma+0xac/0x4f0
[  472.945571]  do_anonymous_page+0x105/0x3f0
[  472.945578]  __handle_mm_fault+0xbc9/0xf10
[  472.945584]  handle_mm_fault+0x102/0x2c0
[  472.945591]  __do_page_fault+0x294/0x540
[  472.945598]  do_page_fault+0x38/0x120
[  472.945604]  ? page_fault+0x8/0x30
[  472.945611]  page_fault+0x1e/0x30
[  472.945619] RIP: 0033:0x6226aaf35dd0
[  472.945624] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[  472.945665] RSP: 002b:00007ffd2bd999b0 EFLAGS: 00010206
[  472.945673] RAX: 0000000062bdf000 RBX: 000071c056bad010 RCX: 000071c056bad010
[  472.945684] RDX: 0000000000000001 RSI: 000000013b3ff000 RDI: 0000000000000000
[  472.945694] RBP: 00006226aaf36bb4 R08: 00000000ffffffff R09: 0000000000000000
[  472.945705] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  472.945716] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b3fe000
[  472.945726] Mem-Info:
[  472.945735] active_anon:1255815 inactive_anon:8253 isolated_anon:0
                active_file:92372 inactive_file:98 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8238 slab_unreclaimable:12768
                mapped:22903 shmem:8397 pagetables:4556 bounce:0
                free:40380 free_pcp:63 free_cma:0
[  472.945782] Node 0 active_anon:5023260kB inactive_anon:33012kB active_file:369488kB inactive_file:392kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91612kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  472.945818] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  472.945852] lowmem_reserve[]: 0 3956 23499 23499 23499
[  472.945860] Node 0 DMA32 free:89460kB min:11368kB low:15416kB high:19464kB active_anon:3971448kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:48kB pagetables:7736kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  472.945897] lowmem_reserve[]: 0 0 19543 19543 19543
[  472.945906] Node 0 Normal free:56156kB min:56168kB low:76180kB high:96192kB active_anon:1050764kB inactive_anon:33012kB active_file:369008kB inactive_file:988kB unevictable:47380kB writepending:0kB present:20400128kB managed:1795572kB mlocked:47380kB kernel_stack:4816kB pagetables:10488kB bounce:0kB free_pcp:252kB local_pcp:132kB free_cma:0kB
[  472.945943] lowmem_reserve[]: 0 0 0 0 0
[  472.945951] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  472.945973] Node 0 DMA32: 33*4kB (UM) 29*8kB (UM) 22*16kB (UM) 16*32kB (UM) 4*64kB (U) 0*128kB 0*256kB 1*512kB (M) 2*1024kB (UM) 2*2048kB (UM) 20*4096kB (M) = 90060kB
[  472.945996] Node 0 Normal: 1586*4kB (UME) 475*8kB (UME) 162*16kB (UME) 172*32kB (ME) 130*64kB (UME) 113*128kB (E) 34*256kB (ME) 3*512kB (UM) 2*1024kB (U) 2*2048kB (U) 0*4096kB = 57408kB
[  472.946054] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  472.946066] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  472.946077] 100907 total pagecache pages
[  472.946083] 6143894 pages RAM
[  472.946088] 0 pages HighMem/MovableOnly
[  472.946094] 4673491 pages reserved
[  472.946099] 0 pages cma reserved
[  472.946105] 0 pages hwpoisoned
[  472.946110] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  472.946129] [  290]     0   290    31100     2450   221184        0             0 systemd-journal
[  472.946142] [  306]     0   306    30805      592   163840        0             0 qubesdb-daemon
[  472.946155] [  313]     0   313    23524     1975   208896        0         -1000 systemd-udevd
[  472.946168] [  477]     0   477    19358     1510   188416        0             0 systemd-logind
[  472.946180] [  479]    81   479    13254     1167   151552        0          -900 dbus-daemon
[  472.946194] [  483]     0   483     3042     1176    69632        0             0 haveged
[  472.946205] [  492]     0   492    10243       73   118784        0             0 meminfo-writer
[  472.946217] [  516]     0   516    34209      118   180224        0             0 xl
[  472.946228] [  522]     0   522    18947     1187   192512        0             0 qubes-gui
[  472.946242] [  524]     0   524    16536      851   180224        0             0 qrexec-agent
[  472.946255] [  525]     0   525    73994     1310   225280        0             0 su
[  473.145331] [  556]  1000   556    21961     2048   204800        0             0 systemd
[  473.145353] [  559]     0   559    52863      369    65536        0             0 agetty
[  473.145372] [  560]     0   560    52775      527    73728        0             0 agetty
[  473.145394] [  565]  1000   565    34755      601   286720        0             0 (sd-pam)
[  473.145412] [  604]  1000   604    54160      848    77824        0             0 bash
[  473.145431] [  667]  1000   667     3500      276    73728        0             0 xinit
[  473.145451] [  675]  1000   675   310902    24505   692224        0             0 Xorg
[  473.145470] [  703]  1000   703    53597      767    81920        0             0 qubes-session
[  473.145491] [  713]  1000   713    13194     1147   151552        0             0 dbus-daemon
[  473.145511] [  731]  1000   731     7233      117    90112        0             0 ssh-agent
[  473.145530] [  751]  1000   751    16562      573   167936        0             0 qrexec-client-v
[  473.145551] [  766]  1000   766    48107     1271   139264        0             0 dconf-service
[  473.145573] [  772]  1000   772    62744     2971   139264        0             0 icon-sender
[  473.145596] [  774]  1000   774   428389    12268   827392        0             0 gsd-xsettings
[  473.145618] [  775]  1000   775   122405     1522   184320        0             0 gnome-keyring-d
[  473.145639] [  778]  1000   778   120207     1392   184320        0             0 agent
[  473.145657] [  793]  1000   793   128956     2128   401408        0             0 pulseaudio
[  473.145677] [  795]  1000   795   438416    13965   913408        0             0 nm-applet
[  473.145693] [  796]   172   796    47723      787   143360        0             0 rtkit-daemon
[  473.145712] [  801]   998   801   657132     5386   421888        0             0 polkitd
[  473.145730] [  814]  1000   814    16528      101   167936        0             0 qrexec-fork-ser
[  473.145750] [  817]  1000   817    52238      190    69632        0             0 sleep
[  473.145769] [  892]  1000   892    87397     1561   180224        0             0 at-spi-bus-laun
[  473.145791] [  897]  1000   897    13134      945   151552        0             0 dbus-daemon
[  473.145816] [  902]  1000   902    56364     1478   212992        0             0 at-spi2-registr
[  473.145837] [  909]  1000   909   123835     1751   208896        0             0 gvfsd
[  473.145852] [  938]  1000   938    89299     1342   184320        0             0 gvfsd-fuse
[  473.145871] [  969]  1000   969   207857    10351   598016        0             0 gnome-terminal-
[  473.145891] [  974]  1000   974   206359     2820   290816        0             0 xdg-desktop-por
[  473.145909] [  978]  1000   978   157249     1483   196608        0             0 xdg-document-po
[  473.145925] [  981]  1000   981   117667     1267   163840        0             0 xdg-permission-
[  473.145943] [  993]  1000   993   193292     5055   471040        0             0 xdg-desktop-por
[  473.145960] [ 1002]  1000  1002    54290     1061    81920        0             0 bash
[  473.145976] [ 1042]  1000  1042    54290     1063    81920        0             0 bash
[  473.145992] [ 1067]  1000  1067    53989      730    90112        0             0 watch
[  473.146030] [ 1332]  1000  1332    54290     1049    86016        0             0 bash
[  473.146056] [ 1417]  1000  1417    53876      256    77824        0             0 dmesg
[  473.146073] [ 9478]  1000  9478     2000      288    61440        0             0 stress
[  473.146090] [ 9479]  1000  9479  1293263   440118  3596288        0             0 stress
[  473.146106] [ 9480]  1000  9480  1293263   421242  3440640        0             0 stress
[  473.146123] [ 9481]  1000  9481  1293263   404544  3305472        0             0 stress
[  473.146140] Out of memory: Kill process 9479 (stress) score 286 or sacrifice child
[  473.146158] Killed process 9479 (stress) total-vm:5173052kB, anon-rss:1760136kB, file-rss:336kB, shmem-rss:0kB
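The `score 286` in the kill line above is the kernel's per-process OOM badness score, which (together with the `oom_score_adj` column in the task table) determines which process gets sacrificed. As a sketch, assuming a Linux `/proc` filesystem, the live values for any process can be inspected like this (using the shell's own PID, `$$`, as an example):

```shell
# Inspect the OOM badness score and adjustment for a process
# (here the current shell); a higher oom_score makes the process
# a more likely OOM-killer victim. oom_score_adj ranges -1000..1000;
# -1000 (as seen for systemd-udevd above) exempts a process entirely.
cat /proc/$$/oom_score
cat /proc/$$/oom_score_adj
```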

GitHub refused the rest of this dump ("You can't comment at this time — your comment is too long (maximum is 65535 characters)"), so the dmesg output continues in the next comment.

@constantoverride (Owner Author) commented Aug 31, 2018

continuing...

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 3 --timeout 10s; echo $?
stress: info: [10257] dispatching hogs: 0 cpu, 0 io, 3 vm, 0 hdd
stress: FAIL: [10257] (415) <-- worker 10260 got signal 9
stress: WARN: [10257] (417) now reaping child worker processes
stress: FAIL: [10257] (451) failed run completed in 8s

real	0m7.893s
user	0m9.546s
sys	0m6.530s
1

dmesg

(the first line below is repeated from the previous dmesg output, for context)

[  473.146158] Killed process 9479 (stress) total-vm:5173052kB, anon-rss:1760136kB, file-rss:336kB, shmem-rss:0kB
[  513.421725] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  513.421751] stress cpuset=/ mems_allowed=0
[  513.421765] CPU: 9 PID: 10259 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[  513.421781] Call Trace:
[  513.421791]  dump_stack+0x63/0x83
[  513.421802]  dump_header+0x6e/0x285
[  513.421811]  oom_kill_process+0x23c/0x450
[  513.421822]  out_of_memory+0x147/0x590
[  513.421832]  __alloc_pages_slowpath+0x134c/0x1590
[  513.421846]  __alloc_pages_nodemask+0x302/0x3c0
[  513.421858]  alloc_pages_vma+0xac/0x4f0
[  513.421869]  do_anonymous_page+0x105/0x3f0
[  513.421880]  __handle_mm_fault+0xbc9/0xf10
[  513.421890]  handle_mm_fault+0x102/0x2c0
[  513.421901]  __do_page_fault+0x294/0x540
[  513.421912]  do_page_fault+0x38/0x120
[  513.421921]  ? page_fault+0x8/0x30
[  513.421930]  page_fault+0x1e/0x30
[  513.421940] RIP: 0033:0x5c503a41fdd0
[  513.421949] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[  513.422011] RSP: 002b:00007fffaa7493c0 EFLAGS: 00010206
[  513.422023] RAX: 0000000062216000 RBX: 00007b658ae22010 RCX: 00007b658ae22010
[  513.422038] RDX: 0000000000000001 RSI: 000000013b2d8000 RDI: 0000000000000000
[  513.422053] RBP: 00005c503a420bb4 R08: 00000000ffffffff R09: 0000000000000000
[  513.422068] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  513.422083] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b2d7000
[  513.422118] Mem-Info:
[  513.422128] active_anon:1254269 inactive_anon:8253 isolated_anon:0
                active_file:92500 inactive_file:68 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8166 slab_unreclaimable:12805
                mapped:22886 shmem:8397 pagetables:4662 bounce:0
                free:40229 free_pcp:2295 free_cma:0
[  513.422195] Node 0 active_anon:5017076kB inactive_anon:33012kB active_file:370000kB inactive_file:272kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91544kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  513.422243] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  513.422294] lowmem_reserve[]: 0 3956 23499 23499 23499
[  513.422308] Node 0 DMA32 free:89436kB min:11368kB low:15416kB high:19464kB active_anon:3969432kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:32kB pagetables:7472kB bounce:0kB free_pcp:3340kB local_pcp:188kB free_cma:0kB
[  513.422361] lowmem_reserve[]: 0 0 19543 19543 19543
[  513.422374] Node 0 Normal free:55576kB min:56168kB low:76180kB high:96192kB active_anon:1047032kB inactive_anon:33012kB active_file:369972kB inactive_file:408kB unevictable:47380kB writepending:0kB present:20400128kB managed:1795492kB mlocked:47380kB kernel_stack:4864kB pagetables:11176kB bounce:0kB free_pcp:5840kB local_pcp:132kB free_cma:0kB
[  513.422414] lowmem_reserve[]: 0 0 0 0 0
[  513.422421] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  513.422447] Node 0 DMA32: 0*4kB 1*8kB (M) 1*16kB (M) 1*32kB (U) 1*64kB (M) 1*128kB (U) 1*256kB (M) 1*512kB (M) 0*1024kB 1*2048kB (U) 21*4096kB (M) = 89080kB
[  513.422472] Node 0 Normal: 815*4kB (UME) 248*8kB (UE) 120*16kB (UE) 140*32kB (UE) 76*64kB (E) 70*128kB (UME) 14*256kB (UME) 2*512kB (UM) 25*1024kB (M) 0*2048kB 0*4096kB = 55676kB
[  513.422500] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  513.422513] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  513.422525] 100910 total pagecache pages
[  513.422531] 6143894 pages RAM
[  513.422536] 0 pages HighMem/MovableOnly
[  513.422542] 4673511 pages reserved
[  513.422549] 0 pages cma reserved
[  513.422556] 0 pages hwpoisoned
[  513.422562] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  513.422583] [  290]     0   290    31100     2450   221184        0             0 systemd-journal
[  513.422597] [  306]     0   306    30805      592   163840        0             0 qubesdb-daemon
[  513.422609] [  313]     0   313    23524     1975   208896        0         -1000 systemd-udevd
[  513.422622] [  477]     0   477    19358     1510   188416        0             0 systemd-logind
[  513.422636] [  479]    81   479    13254     1167   151552        0          -900 dbus-daemon
[  513.422648] [  483]     0   483     3042     1176    69632        0             0 haveged
[  513.622684] [  492]     0   492    10243       73   118784        0             0 meminfo-writer
[  513.622706] [  516]     0   516    34209      118   180224        0             0 xl
[  513.622724] [  522]     0   522    18947     1187   192512        0             0 qubes-gui
[  513.622744] [  524]     0   524    16536      851   180224        0             0 qrexec-agent
[  513.622763] [  525]     0   525    73994     1310   225280        0             0 su
[  513.622779] [  556]  1000   556    21961     2048   204800        0             0 systemd
[  513.622798] [  559]     0   559    52863      369    65536        0             0 agetty
[  513.622815] [  560]     0   560    52775      527    73728        0             0 agetty
[  513.622832] [  565]  1000   565    34755      601   286720        0             0 (sd-pam)
[  513.622851] [  604]  1000   604    54160      848    77824        0             0 bash
[  513.622871] [  667]  1000   667     3500      276    73728        0             0 xinit
[  513.622888] [  675]  1000   675   310902    24505   692224        0             0 Xorg
[  513.622905] [  703]  1000   703    53597      767    81920        0             0 qubes-session
[  513.622924] [  713]  1000   713    13194     1147   151552        0             0 dbus-daemon
[  513.622946] [  731]  1000   731     7233      117    90112        0             0 ssh-agent
[  513.622972] [  751]  1000   751    16562      573   167936        0             0 qrexec-client-v
[  513.622994] [  766]  1000   766    48107     1271   139264        0             0 dconf-service
[  513.623033] [  772]  1000   772    62744     2971   139264        0             0 icon-sender
[  513.623058] [  774]  1000   774   428389    12268   827392        0             0 gsd-xsettings
[  513.623079] [  775]  1000   775   122405     1522   184320        0             0 gnome-keyring-d
[  513.623100] [  778]  1000   778   120207     1392   184320        0             0 agent
[  513.623118] [  793]  1000   793   128956     2128   401408        0             0 pulseaudio
[  513.623138] [  795]  1000   795   438416    13965   913408        0             0 nm-applet
[  513.623156] [  796]   172   796    47723      787   143360        0             0 rtkit-daemon
[  513.623175] [  801]   998   801   657132     5386   421888        0             0 polkitd
[  513.623197] [  814]  1000   814    16528      101   167936        0             0 qrexec-fork-ser
[  513.623217] [  817]  1000   817    52238      190    69632        0             0 sleep
[  513.623236] [  892]  1000   892    87397     1561   180224        0             0 at-spi-bus-laun
[  513.623256] [  897]  1000   897    13134      945   151552        0             0 dbus-daemon
[  513.623276] [  902]  1000   902    56364     1478   212992        0             0 at-spi2-registr
[  513.623296] [  909]  1000   909   123835     1751   208896        0             0 gvfsd
[  513.623312] [  938]  1000   938    89299     1342   184320        0             0 gvfsd-fuse
[  513.623332] [  969]  1000   969   207857    10404   598016        0             0 gnome-terminal-
[  513.623351] [  974]  1000   974   206359     2820   290816        0             0 xdg-desktop-por
[  513.623374] [  978]  1000   978   157249     1483   196608        0             0 xdg-document-po
[  513.623393] [  981]  1000   981   117667     1267   163840        0             0 xdg-permission-
[  513.623412] [  993]  1000   993   193292     5055   471040        0             0 xdg-desktop-por
[  513.623433] [ 1002]  1000  1002    54290     1061    81920        0             0 bash
[  513.623451] [ 1042]  1000  1042    54290     1063    81920        0             0 bash
[  513.623467] [ 1067]  1000  1067    53989      730    90112        0             0 watch
[  513.623483] [ 1332]  1000  1332    54290     1049    86016        0             0 bash
[  513.623499] [ 1417]  1000  1417    53876      256    77824        0             0 dmesg
[  513.623516] [10257]  1000 10257     2000      260    61440        0             0 stress
[  513.623530] [10258]  1000 10258  1292968   437053  3563520        0             0 stress
[  513.623545] [10259]  1000 10259  1292968   402007  3284992        0             0 stress
[  513.623559] [10260]  1000 10260  1292968   424051  3461120        0             0 stress
[  513.623576] Out of memory: Kill process 10260 (stress) score 288 or sacrifice child
[  513.623594] Killed process 10260 (stress) total-vm:5171872kB, anon-rss:1695992kB, file-rss:212kB, shmem-rss:0kB
[  513.907974] oom_reaper: reaped process 10260 (stress), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[  515.231095] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  515.231123] MemAlloc: kswapd0(108) flags=0xa20840 switches=160
[  515.231138] kswapd0         S    0   108      2 0x80000000
[  515.231152] Call Trace:
[  515.231165]  ? __schedule+0x3f3/0x8c0
[  515.231176]  schedule+0x36/0x80
[  515.231187]  schedule_timeout+0x22b/0x4d0
[  515.231198]  ? __bpf_trace_tick_stop+0x10/0x10
[  515.231212]  kswapd+0x2fe/0x590
[  515.231222]  ? remove_wait_queue+0x70/0x70
[  515.231234]  kthread+0x105/0x140
[  515.231244]  ? balance_pgdat+0x3e0/0x3e0
[  515.231253]  ? kthread_stop+0x100/0x100
[  515.231263]  ret_from_fork+0x35/0x40
[  515.231288] MemAlloc: stress(10260) flags=0x404040 switches=437 seq=3712 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1683 uninterruptible dying victim
[  515.231321] stress          D    0 10260  10257 0x00100084
[  515.231334] Call Trace:
[  515.231343]  ? __schedule+0x3f3/0x8c0
[  515.231353]  ? __switch_to_asm+0x40/0x70
[  515.231363]  ? __switch_to_asm+0x34/0x70
[  515.231374]  schedule+0x36/0x80
[  515.231383]  schedule_timeout+0x29b/0x4d0
[  515.231394]  ? __switch_to+0x13f/0x4d0
[  515.231404]  ? __switch_to_asm+0x40/0x70
[  515.231415]  ? finish_task_switch+0x75/0x2a0
[  515.231427]  wait_for_completion+0x121/0x190
[  515.231438]  ? wake_up_q+0x80/0x80
[  515.231447]  flush_work+0x18f/0x200
[  515.231456]  ? rcu_free_pwq+0x20/0x20
[  515.231466]  __alloc_pages_slowpath+0x766/0x1590
[  515.231480]  __alloc_pages_nodemask+0x302/0x3c0
[  515.231494]  alloc_pages_vma+0xac/0x4f0
[  515.231505]  do_anonymous_page+0x105/0x3f0
[  515.231518]  __handle_mm_fault+0xbc9/0xf10
[  515.231529]  handle_mm_fault+0x102/0x2c0
[  515.231538]  __do_page_fault+0x294/0x540
[  515.231548]  ? page_fault+0x8/0x30
[  515.231558]  do_page_fault+0x38/0x120
[  515.231569]  ? page_fault+0x8/0x30
[  515.231579]  page_fault+0x1e/0x30
[  515.231589] RIP: 0033:0x5c503a41fdd0
[  515.231598] Code: Bad RIP value.
[  515.231610] RSP: 002b:00007fffaa7493c0 EFLAGS: 00010206
[  515.231623] RAX: 0000000067826000 RBX: 00007b658ae22010 RCX: 00007b658ae22010
[  515.231642] RDX: 0000000000000001 RSI: 000000013b2d8000 RDI: 0000000000000000
[  515.231659] RBP: 00005c503a420bb4 R08: 00000000ffffffff R09: 0000000000000000
[  515.231676] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  515.231693] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b2d7000
[  515.231711] Mem-Info:
[  515.231724] active_anon:2073853 inactive_anon:8253 isolated_anon:0
                active_file:92472 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8166 slab_unreclaimable:12822
                mapped:22834 shmem:8397 pagetables:7024 bounce:0
                free:40722 free_pcp:1878 free_cma:0
[  515.231795] Node 0 active_anon:8295704kB inactive_anon:33012kB active_file:369888kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91336kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  515.231852] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  515.231905] lowmem_reserve[]: 0 3956 23499 23499 23499
[  515.231919] Node 0 DMA32 free:89280kB min:11368kB low:15416kB high:19464kB active_anon:3967460kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:10456kB bounce:0kB free_pcp:2756kB local_pcp:76kB free_cma:0kB
[  515.431897] lowmem_reserve[]: 0 0 19543 19543 19543
[  515.431915] Node 0 Normal free:58020kB min:56168kB low:76180kB high:96192kB active_anon:4748764kB inactive_anon:33012kB active_file:369860kB inactive_file:0kB unevictable:47380kB writepending:0kB present:20400128kB managed:5506468kB mlocked:47380kB kernel_stack:4848kB pagetables:19144kB bounce:0kB free_pcp:5056kB local_pcp:604kB free_cma:0kB
[  515.431987] lowmem_reserve[]: 0 0 0 0 0
[  515.432014] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  515.432055] Node 0 DMA32: 0*4kB 4*8kB (UM) 0*16kB 1*32kB (M) 0*64kB 27*128kB (UM) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 1*2048kB (M) 20*4096kB (M) = 89280kB
[  515.432095] Node 0 Normal: 249*4kB (UME) 69*8kB (UME) 143*16kB (UE) 247*32kB (UE) 188*64kB (UE) 250*128kB (U) 1*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 56028kB
[  515.432144] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  515.432167] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  515.432190] 100970 total pagecache pages
[  515.432200] 6143894 pages RAM
[  515.432210] 0 pages HighMem/MovableOnly
[  515.432221] 3745767 pages reserved
[  515.432232] 0 pages cma reserved
[  515.432243] 0 pages hwpoisoned
[  515.432253] Showing busy workqueues and worker pools:
[  515.432268] workqueue events: flags=0x0
[  515.432280]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  515.432299]     in-flight: 203:balloon_process
[  515.432319]     pending: balloon_process
[  515.432335] workqueue events_unbound: flags=0x2
[  515.432348]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[  515.432367]     pending: flush_to_ldisc BAR(969)
[  515.432385] workqueue events_power_efficient: flags=0x80
[  515.432399]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[  515.432416]     pending: gc_worker [nf_conntrack]
[  515.432432] workqueue mm_percpu_wq: flags=0x8
[  515.432440]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  515.432453]     pending: drain_local_pages_wq BAR(10260), vmstat_update
[  515.432482] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=1s workers=2 idle: 541
[  515.432498] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  516.447094] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  516.447118] MemAlloc: kswapd0(108) flags=0xa20840 switches=177
[  516.447134] kswapd0         S    0   108      2 0x80000000
[  516.447145] Call Trace:
[  516.447156]  ? __schedule+0x3f3/0x8c0
[  516.447163]  schedule+0x36/0x80
[  516.447171]  kswapd+0x584/0x590
[  516.447178]  ? remove_wait_queue+0x70/0x70
[  516.447185]  kthread+0x105/0x140
[  516.447192]  ? balance_pgdat+0x3e0/0x3e0
[  516.447198]  ? kthread_stop+0x100/0x100
[  516.447204]  ret_from_fork+0x35/0x40
[  516.447225] MemAlloc: stress(10260) flags=0x404040 switches=437 seq=3712 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=2899 uninterruptible dying victim
[  516.447253] stress          D    0 10260  10257 0x00100084
[  516.447265] Call Trace:
[  516.447272]  ? __schedule+0x3f3/0x8c0
[  516.447279]  ? __switch_to_asm+0x40/0x70
[  516.447287]  ? __switch_to_asm+0x34/0x70
[  516.447295]  schedule+0x36/0x80
[  516.447302]  schedule_timeout+0x29b/0x4d0
[  516.447312]  ? __switch_to+0x13f/0x4d0
[  516.447319]  ? __switch_to_asm+0x40/0x70
[  516.447328]  ? finish_task_switch+0x75/0x2a0
[  516.447339]  wait_for_completion+0x121/0x190
[  516.447352]  ? wake_up_q+0x80/0x80
[  516.447365]  flush_work+0x18f/0x200
[  516.447373]  ? rcu_free_pwq+0x20/0x20
[  516.447382]  __alloc_pages_slowpath+0x766/0x1590
[  516.447393]  __alloc_pages_nodemask+0x302/0x3c0
[  516.447404]  alloc_pages_vma+0xac/0x4f0
[  516.447413]  do_anonymous_page+0x105/0x3f0
[  516.447422]  __handle_mm_fault+0xbc9/0xf10
[  516.447431]  handle_mm_fault+0x102/0x2c0
[  516.447439]  __do_page_fault+0x294/0x540
[  516.447448]  ? page_fault+0x8/0x30
[  516.447455]  do_page_fault+0x38/0x120
[  516.447463]  ? page_fault+0x8/0x30
[  516.447471]  page_fault+0x1e/0x30
[  516.447479] RIP: 0033:0x5c503a41fdd0
[  516.447487] Code: Bad RIP value.
[  516.447498] RSP: 002b:00007fffaa7493c0 EFLAGS: 00010206
[  516.447509] RAX: 0000000067826000 RBX: 00007b658ae22010 RCX: 00007b658ae22010
[  516.447524] RDX: 0000000000000001 RSI: 000000013b2d8000 RDI: 0000000000000000
[  516.447538] RBP: 00005c503a420bb4 R08: 00000000ffffffff R09: 0000000000000000
[  516.447552] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  516.447566] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b2d7000
[  516.447582] Mem-Info:
[  516.447594] active_anon:2610159 inactive_anon:8253 isolated_anon:0
                active_file:92424 inactive_file:11 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8178 slab_unreclaimable:12856
                mapped:22975 shmem:8397 pagetables:8318 bounce:0
                free:166203 free_pcp:2094 free_cma:0
[  516.447652] Node 0 active_anon:10440636kB inactive_anon:33012kB active_file:369696kB inactive_file:44kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91900kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  516.447704] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  516.447753] lowmem_reserve[]: 0 3956 23499 23499 23499
[  516.447769] Node 0 DMA32 free:89280kB min:11368kB low:15416kB high:19464kB active_anon:3967460kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:10456kB bounce:0kB free_pcp:2756kB local_pcp:76kB free_cma:0kB
[  516.447830] lowmem_reserve[]: 0 0 19543 19543 19543
[  516.447845] Node 0 Normal free:559628kB min:56168kB low:76180kB high:96192kB active_anon:6472680kB inactive_anon:33012kB active_file:369952kB inactive_file:272kB unevictable:47380kB writepending:0kB present:20400128kB managed:7736740kB mlocked:47380kB kernel_stack:4964kB pagetables:22816kB bounce:0kB free_pcp:5620kB local_pcp:632kB free_cma:0kB
[  516.447901] lowmem_reserve[]: 0 0 0 0 0
[  516.447913] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  516.447945] Node 0 DMA32: 0*4kB 4*8kB (UM) 0*16kB 1*32kB (M) 0*64kB 27*128kB (UM) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 1*2048kB (M) 20*4096kB (M) = 89280kB
[  516.647981] Node 0 Normal: 293*4kB (UME) 137*8kB (UME) 130*16kB (UME) 275*32kB (UE) 214*64kB (UE) 277*128kB (UM) 22*256kB (UM) 18*512kB (UM) 19*1024kB (UM) 17*2048kB (UM) 220*4096kB (U) = 1032540kB
[  516.648037] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  516.648083] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  516.648104] 100993 total pagecache pages
[  516.648115] 6143894 pages RAM
[  516.648125] 0 pages HighMem/MovableOnly
[  516.648134] 3069927 pages reserved
[  516.648144] 0 pages cma reserved
[  516.648155] 0 pages hwpoisoned
[  516.648164] Showing busy workqueues and worker pools:
[  516.648177] workqueue events: flags=0x0
[  516.648189]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  516.648204]     in-flight: 203:balloon_process
[  516.648227]     pending: balloon_process
[  516.648244] workqueue events_unbound: flags=0x2
[  516.648257] workqueue events_power_efficient: flags=0x80
[  516.648270]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[  516.648295]     pending: gc_worker [nf_conntrack]
[  516.648325] workqueue mm_percpu_wq: flags=0x8
[  516.648341]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  516.648358]     pending: drain_local_pages_wq BAR(10260), vmstat_update
[  516.648397] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=3s workers=2 idle: 541
[  516.648420] pool 24: cpus=0-11 flags=0x4 nice=0 hung=0s workers=3 idle: 293 325
[  516.648441] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  517.663099] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  517.663120] MemAlloc: kswapd0(108) flags=0xa20840 switches=177
[  517.663130] kswapd0         S    0   108      2 0x80000000
[  517.663139] Call Trace:
[  517.663149]  ? __schedule+0x3f3/0x8c0
[  517.663156]  schedule+0x36/0x80
[  517.663174]  kswapd+0x584/0x590
[  517.663181]  ? remove_wait_queue+0x70/0x70
[  517.663188]  kthread+0x105/0x140
[  517.663195]  ? balance_pgdat+0x3e0/0x3e0
[  517.663202]  ? kthread_stop+0x100/0x100
[  517.663208]  ret_from_fork+0x35/0x40
[  517.663221] MemAlloc: stress(10260) flags=0x404040 switches=437 seq=3712 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=4115 uninterruptible dying victim
[  517.663242] stress          D    0 10260  10257 0x00100084
[  517.663250] Call Trace:
[  517.663257]  ? __schedule+0x3f3/0x8c0
[  517.663264]  ? __switch_to_asm+0x40/0x70
[  517.663272]  ? __switch_to_asm+0x34/0x70
[  517.663279]  schedule+0x36/0x80
[  517.663286]  schedule_timeout+0x29b/0x4d0
[  517.663295]  ? __switch_to+0x13f/0x4d0
[  517.663302]  ? __switch_to_asm+0x40/0x70
[  517.663309]  ? finish_task_switch+0x75/0x2a0
[  517.663317]  wait_for_completion+0x121/0x190
[  517.663326]  ? wake_up_q+0x80/0x80
[  517.663334]  flush_work+0x18f/0x200
[  517.663341]  ? rcu_free_pwq+0x20/0x20
[  517.663350]  __alloc_pages_slowpath+0x766/0x1590
[  517.663360]  __alloc_pages_nodemask+0x302/0x3c0
[  517.663369]  alloc_pages_vma+0xac/0x4f0
[  517.663377]  do_anonymous_page+0x105/0x3f0
[  517.663385]  __handle_mm_fault+0xbc9/0xf10
[  517.663393]  handle_mm_fault+0x102/0x2c0
[  517.663405]  __do_page_fault+0x294/0x540
[  517.663412]  ? page_fault+0x8/0x30
[  517.663420]  do_page_fault+0x38/0x120
[  517.663426]  ? page_fault+0x8/0x30
[  517.663432]  page_fault+0x1e/0x30
[  517.663439] RIP: 0033:0x5c503a41fdd0
[  517.663446] Code: Bad RIP value.
[  517.663456] RSP: 002b:00007fffaa7493c0 EFLAGS: 00010206
[  517.663464] RAX: 0000000067826000 RBX: 00007b658ae22010 RCX: 00007b658ae22010
[  517.663476] RDX: 0000000000000001 RSI: 000000013b2d8000 RDI: 0000000000000000
[  517.663487] RBP: 00005c503a420bb4 R08: 00000000ffffffff R09: 0000000000000000
[  517.663499] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  517.663510] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b2d7000
[  517.663522] Mem-Info:
[  517.663531] active_anon:2610144 inactive_anon:8253 isolated_anon:0
                active_file:92493 inactive_file:4 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8201 slab_unreclaimable:12865
                mapped:22915 shmem:8397 pagetables:8206 bounce:0
                free:899796 free_pcp:2162 free_cma:0
[  517.663580] Node 0 active_anon:10440576kB inactive_anon:33012kB active_file:369972kB inactive_file:16kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91660kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  517.663621] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  517.663659] lowmem_reserve[]: 0 3956 23499 23499 23499
[  517.663669] Node 0 DMA32 free:89288kB min:11368kB low:15416kB high:19464kB active_anon:3967460kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:10456kB bounce:0kB free_pcp:2756kB local_pcp:76kB free_cma:0kB
[  517.663708] lowmem_reserve[]: 0 0 19543 19543 19543
[  517.663718] Node 0 Normal free:3493992kB min:56168kB low:76180kB high:96192kB active_anon:6472824kB inactive_anon:33012kB active_file:369944kB inactive_file:16kB unevictable:47380kB writepending:0kB present:20400128kB managed:10671524kB mlocked:47380kB kernel_stack:4832kB pagetables:22368kB bounce:0kB free_pcp:5892kB local_pcp:632kB free_cma:0kB
[  517.663759] lowmem_reserve[]: 0 0 0 0 0
[  517.663766] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  517.663791] Node 0 DMA32: 0*4kB 5*8kB (UM) 0*16kB 1*32kB (M) 0*64kB 27*128kB (UM) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 1*2048kB (M) 20*4096kB (M) = 89288kB
[  517.663815] Node 0 Normal: 451*4kB (UME) 211*8kB (UME) 197*16kB (UME) 321*32kB (UME) 242*64kB (UE) 303*128kB (UM) 59*256kB (UM) 38*512kB (UM) 45*1024kB (UM) 46*2048kB (UM) 793*4096kB (U) = 3494164kB
[  517.663848] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  517.663861] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  517.663874] 100892 total pagecache pages
[  517.663881] 6143894 pages RAM
[  517.663887] 0 pages HighMem/MovableOnly
[  517.663894] 2454503 pages reserved
[  517.663901] 0 pages cma reserved
[  517.663908] 0 pages hwpoisoned
[  517.663914] Showing busy workqueues and worker pools:
[  517.663923] workqueue events: flags=0x0
[  517.663932]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  517.663943]     in-flight: 203:balloon_process
[  517.663954]     pending: balloon_process
[  517.663965] workqueue mm_percpu_wq: flags=0x8
[  517.663973]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  517.663984]     pending: drain_local_pages_wq BAR(10260), vmstat_update
[  517.664022] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=4s workers=2 idle: 541
[  517.664038] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  518.687149] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  518.687183] MemAlloc: kswapd0(108) flags=0xa20840 switches=177
[  518.687194] kswapd0         S    0   108      2 0x80000000
[  518.687203] Call Trace:
[  518.687228]  ? __schedule+0x3f3/0x8c0
[  518.687235]  schedule+0x36/0x80
[  518.687245]  kswapd+0x584/0x590
[  518.687253]  ? remove_wait_queue+0x70/0x70
[  518.687262]  kthread+0x105/0x140
[  518.687270]  ? balance_pgdat+0x3e0/0x3e0
[  518.687277]  ? kthread_stop+0x100/0x100
[  518.687286]  ret_from_fork+0x35/0x40
[  518.687301] MemAlloc: stress(10260) flags=0x404040 switches=437 seq=3712 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=5139 uninterruptible dying victim
[  518.687322] stress          D    0 10260  10257 0x00100084
[  518.687331] Call Trace:
[  518.687338]  ? __schedule+0x3f3/0x8c0
[  518.687346]  ? __switch_to_asm+0x40/0x70
[  518.687354]  ? __switch_to_asm+0x34/0x70
[  518.687365]  schedule+0x36/0x80
[  518.687372]  schedule_timeout+0x29b/0x4d0
[  518.687380]  ? __switch_to+0x13f/0x4d0
[  518.687387]  ? __switch_to_asm+0x40/0x70
[  518.687395]  ? finish_task_switch+0x75/0x2a0
[  518.687403]  wait_for_completion+0x121/0x190
[  518.687413]  ? wake_up_q+0x80/0x80
[  518.687422]  flush_work+0x18f/0x200
[  518.687429]  ? rcu_free_pwq+0x20/0x20
[  518.687437]  __alloc_pages_slowpath+0x766/0x1590
[  518.687448]  __alloc_pages_nodemask+0x302/0x3c0
[  518.687458]  alloc_pages_vma+0xac/0x4f0
[  518.687467]  do_anonymous_page+0x105/0x3f0
[  518.687475]  __handle_mm_fault+0xbc9/0xf10
[  518.687483]  handle_mm_fault+0x102/0x2c0
[  518.687490]  __do_page_fault+0x294/0x540
[  518.687498]  ? page_fault+0x8/0x30
[  518.687506]  do_page_fault+0x38/0x120
[  518.687513]  ? page_fault+0x8/0x30
[  518.687520]  page_fault+0x1e/0x30
[  518.687528] RIP: 0033:0x5c503a41fdd0
[  518.687535] Code: Bad RIP value.
[  518.687545] RSP: 002b:00007fffaa7493c0 EFLAGS: 00010206
[  518.687554] RAX: 0000000067826000 RBX: 00007b658ae22010 RCX: 00007b658ae22010
[  518.687566] RDX: 0000000000000001 RSI: 000000013b2d8000 RDI: 0000000000000000
[  518.687578] RBP: 00005c503a420bb4 R08: 00000000ffffffff R09: 0000000000000000
[  518.687590] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  518.687602] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b2d7000
[  518.687614] Mem-Info:
[  518.687621] active_anon:2610126 inactive_anon:8253 isolated_anon:0
                active_file:92493 inactive_file:4 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8201 slab_unreclaimable:12884
                mapped:22922 shmem:8397 pagetables:8220 bounce:0
                free:1518322 free_pcp:2132 free_cma:0
[  518.687667] Node 0 active_anon:10440504kB inactive_anon:33012kB active_file:369972kB inactive_file:16kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91688kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  518.687700] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  518.687736] lowmem_reserve[]: 0 3956 23499 23499 23499
[  518.687744] Node 0 DMA32 free:89288kB min:11368kB low:15416kB high:19464kB active_anon:3967460kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:10456kB bounce:0kB free_pcp:2756kB local_pcp:76kB free_cma:0kB
[  518.687784] lowmem_reserve[]: 0 0 19543 19543 19543
[  518.687794] Node 0 Normal free:5969920kB min:56168kB low:76180kB high:96192kB active_anon:6473008kB inactive_anon:33012kB active_file:369944kB inactive_file:16kB unevictable:47380kB writepending:0kB present:20400128kB managed:13147556kB mlocked:47380kB kernel_stack:4888kB pagetables:22424kB bounce:0kB free_pcp:5836kB local_pcp:632kB free_cma:0kB
[  518.687833] lowmem_reserve[]: 0 0 0 0 0
[  518.687841] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  518.687866] Node 0 DMA32: 0*4kB 5*8kB (UM) 0*16kB 1*32kB (M) 0*64kB 27*128kB (UM) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 1*2048kB (M) 20*4096kB (M) = 89288kB
[  518.887948] Node 0 Normal: 472*4kB (UME) 263*8kB (UME) 237*16kB (UME) 345*32kB (UME) 269*64kB (UE) 331*128kB (UM) 73*256kB (UM) 65*512kB (UM) 71*1024kB (UM) 68*2048kB (UM) 1489*4096kB (U) = 6441288kB
[  518.887995] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  518.888028] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  518.888068] 100892 total pagecache pages
[  518.888076] 6143894 pages RAM
[  518.888099] 0 pages HighMem/MovableOnly
[  518.888110] 1717505 pages reserved
[  518.888135] 0 pages cma reserved
[  518.888143] 0 pages hwpoisoned
[  518.888152] Showing busy workqueues and worker pools:
[  518.888165] workqueue events: flags=0x0
[  518.888175]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  518.888192]     in-flight: 203:balloon_process
[  518.888208]     pending: balloon_process
[  518.888220] workqueue events_unbound: flags=0x2
[  518.888232]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[  518.888262]     pending: flush_to_ldisc
[  518.888276] workqueue events_power_efficient: flags=0x80
[  518.888289]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[  518.888306]     pending: gc_worker [nf_conntrack]
[  518.888331] workqueue mm_percpu_wq: flags=0x8
[  518.888342]   pwq 20: cpus=10 node=0 flags=0x0 nice=0 active=1/256
[  518.888358]     pending: vmstat_update
[  518.888370]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  518.888385]     pending: drain_local_pages_wq BAR(10260), vmstat_update
[  518.888414] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=5s workers=2 idle: 541
[  518.888435] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  519.903110] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  519.903133] MemAlloc: kswapd0(108) flags=0xa20840 switches=177
[  519.903146] kswapd0         S    0   108      2 0x80000000
[  519.903157] Call Trace:
[  519.903169]  ? __schedule+0x3f3/0x8c0
[  519.903179]  schedule+0x36/0x80
[  519.903205]  kswapd+0x584/0x590
[  519.903215]  ? remove_wait_queue+0x70/0x70
[  519.903229]  kthread+0x105/0x140
[  519.903238]  ? balance_pgdat+0x3e0/0x3e0
[  519.903246]  ? kthread_stop+0x100/0x100
[  519.903255]  ret_from_fork+0x35/0x40
[  519.903272] MemAlloc: stress(10260) flags=0x404040 switches=437 seq=3712 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=6355 uninterruptible dying victim
[  519.903328] stress          D    0 10260  10257 0x00100084
[  519.903339] Call Trace:
[  519.903347]  ? __schedule+0x3f3/0x8c0
[  519.903355]  ? __switch_to_asm+0x40/0x70
[  519.903364]  ? __switch_to_asm+0x34/0x70
[  519.903386]  schedule+0x36/0x80
[  519.903408]  schedule_timeout+0x29b/0x4d0
[  519.903419]  ? __switch_to+0x13f/0x4d0
[  519.903427]  ? __switch_to_asm+0x40/0x70
[  519.903437]  ? finish_task_switch+0x75/0x2a0
[  519.903449]  wait_for_completion+0x121/0x190
[  519.903463]  ? wake_up_q+0x80/0x80
[  519.903471]  flush_work+0x18f/0x200
[  519.903480]  ? rcu_free_pwq+0x20/0x20
[  519.903489]  __alloc_pages_slowpath+0x766/0x1590
[  519.903500]  __alloc_pages_nodemask+0x302/0x3c0
[  519.903511]  alloc_pages_vma+0xac/0x4f0
[  519.903521]  do_anonymous_page+0x105/0x3f0
[  519.903532]  __handle_mm_fault+0xbc9/0xf10
[  519.903542]  handle_mm_fault+0x102/0x2c0
[  519.903552]  __do_page_fault+0x294/0x540
[  519.903560]  ? page_fault+0x8/0x30
[  519.903570]  do_page_fault+0x38/0x120
[  519.903578]  ? page_fault+0x8/0x30
[  519.903587]  page_fault+0x1e/0x30
[  519.903597] RIP: 0033:0x5c503a41fdd0
[  519.903604] Code: Bad RIP value.
[  519.903615] RSP: 002b:00007fffaa7493c0 EFLAGS: 00010206
[  519.903626] RAX: 0000000067826000 RBX: 00007b658ae22010 RCX: 00007b658ae22010
[  519.903641] RDX: 0000000000000001 RSI: 000000013b2d8000 RDI: 0000000000000000
[  519.903657] RBP: 00005c503a420bb4 R08: 00000000ffffffff R09: 0000000000000000
[  519.903673] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  519.903688] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013b2d7000
[  519.903704] Mem-Info:
[  519.903713] active_anon:2610103 inactive_anon:8253 isolated_anon:0
                active_file:92493 inactive_file:4 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8201 slab_unreclaimable:12868
                mapped:22929 shmem:8397 pagetables:8224 bounce:0
                free:2254490 free_pcp:2189 free_cma:0
[  519.903776] Node 0 active_anon:10440412kB inactive_anon:33012kB active_file:369972kB inactive_file:16kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91716kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  519.903825] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  519.903870] lowmem_reserve[]: 0 3956 23499 23499 23499
[  519.903882] Node 0 DMA32 free:89288kB min:11368kB low:15416kB high:19464kB active_anon:3967460kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:10456kB bounce:0kB free_pcp:2756kB local_pcp:76kB free_cma:0kB
[  519.903932] lowmem_reserve[]: 0 0 19543 19543 19543
[  519.903944] Node 0 Normal free:8912768kB min:56168kB low:76180kB high:96192kB active_anon:6472952kB inactive_anon:33012kB active_file:369944kB inactive_file:16kB unevictable:47380kB writepending:0kB present:20400128kB managed:16090532kB mlocked:47380kB kernel_stack:4864kB pagetables:22440kB bounce:0kB free_pcp:6000kB local_pcp:632kB free_cma:0kB
[  519.903995] lowmem_reserve[]: 0 0 0 0 0
[  520.104026] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  520.104077] Node 0 DMA32: 0*4kB 5*8kB (UM) 0*16kB 1*32kB (M) 0*64kB 27*128kB (UM) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 1*2048kB (M) 20*4096kB (M) = 89288kB
[  520.104145] Node 0 Normal: 579*4kB (UME) 269*8kB (UME) 244*16kB (UME) 360*32kB (UME) 290*64kB (UE) 348*128kB (UM) 85*256kB (UM) 80*512kB (UM) 82*1024kB (UM) 79*2048kB (UM) 2195*4096kB (U) = 9382196kB
[  520.104189] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  520.104207] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  520.104226] 100892 total pagecache pages
[  520.104235] 6143894 pages RAM
[  520.104244] 0 pages HighMem/MovableOnly
[  520.104253] 981991 pages reserved
[  520.104262] 0 pages cma reserved
[  520.104270] 0 pages hwpoisoned
[  520.104279] Showing busy workqueues and worker pools:
[  520.104294] workqueue events: flags=0x0
[  520.104305]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  520.104323]     in-flight: 203:balloon_process
[  520.104339]     pending: balloon_process
[  520.104353] workqueue events_unbound: flags=0x2
[  520.104364]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[  520.104380]     pending: flush_to_ldisc
[  520.104393] workqueue events_power_efficient: flags=0x80
[  520.104404]   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1/256
[  520.104420]     pending: gc_worker [nf_conntrack]
[  520.104457] workqueue mm_percpu_wq: flags=0x8
[  520.104469]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  520.104485]     pending: drain_local_pages_wq BAR(10260), vmstat_update
[  520.104517] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=6s workers=2 idle: 541
[  520.104539] MemAlloc-Info: stalling=1 dying=1 exiting=0 victim=1 oom_count=2
[  520.560078] audit: type=1701 audit(1535714345.426:74): auid=1000 uid=1000 gid=1000 ses=1 pid=10260 comm="stress" exe="/usr/bin/stress" sig=7 res=1

@constantoverride commented Aug 31, 2018

Stalling was detected, but no disk thrashing occurred (as far as I could tell), this time with -m 2, as follows:
(which makes me wonder whether triggering the OOM-killer as soon as stalling is detected would have been a good idea in this case)

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?
stress: info: [29259] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: info: [29259] successful run completed in 10s

real	0m10.390s
user	0m11.861s
sys	0m3.526s
0
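That "trigger the OOM-killer when stalling is detected" idea could be approximated from userspace by watching MemAvailable and firing SysRq 'f' (which invokes the OOM killer once) when it drops below a floor — essentially what earlyoom does. A minimal, hypothetical sketch (the 100 MiB threshold is arbitrary; requires root and kernel.sysrq permitting 'f'):

```python
#!/usr/bin/env python3
# Hypothetical sketch: invoke the kernel OOM killer (SysRq 'f') when
# MemAvailable falls below a threshold, before kswapd0 starts thrashing.
# THRESHOLD_KB is an arbitrary guess, not a tested value.
import time

THRESHOLD_KB = 100 * 1024  # 100 MiB

def mem_available_kb(meminfo_text):
    """Parse the MemAvailable line of /proc/meminfo (value is in kB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    raise ValueError("MemAvailable not found")

def watch(interval=0.5):
    while True:
        with open("/proc/meminfo") as f:
            avail = mem_available_kb(f.read())
        if avail < THRESHOLD_KB:
            # SysRq 'f' runs the OOM killer once, killing the task
            # with the highest badness score.
            with open("/proc/sysrq-trigger", "w") as t:
                t.write("f")
        time.sleep(interval)
```

This is only a rough userspace approximation: it reacts to low MemAvailable, not to the stalling condition itself, so it may fire earlier or later than the in-kernel stall detector would.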

dmesg
[  908.134722] audit: type=1131 audit(1535714733.001:76): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1518.495098] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1518.495123] MemAlloc: kswapd0(108) flags=0xa20840 switches=224
[ 1518.495136] kswapd0         S    0   108      2 0x80000000
[ 1518.495149] Call Trace:
[ 1518.495169]  ? __schedule+0x3f3/0x8c0
[ 1518.495183]  schedule+0x36/0x80
[ 1518.495199]  schedule_timeout+0x22b/0x4d0
[ 1518.495216]  ? __bpf_trace_tick_stop+0x10/0x10
[ 1518.495241]  kswapd+0x2fe/0x590
[ 1518.495256]  ? remove_wait_queue+0x70/0x70
[ 1518.495271]  kthread+0x105/0x140
[ 1518.495285]  ? balance_pgdat+0x3e0/0x3e0
[ 1518.495296]  ? kthread_stop+0x100/0x100
[ 1518.495310]  ret_from_fork+0x35/0x40
[ 1518.495341] MemAlloc: stress(29261) flags=0x404040 switches=3 seq=4002 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1064 uninterruptible
[ 1518.495375] stress          D    0 29261  29259 0x00000080
[ 1518.495391] Call Trace:
[ 1518.495400]  ? __schedule+0x3f3/0x8c0
[ 1518.495417]  ? __switch_to_asm+0x40/0x70
[ 1518.495431]  ? __switch_to_asm+0x34/0x70
[ 1518.495448]  schedule+0x36/0x80
[ 1518.495463]  schedule_timeout+0x29b/0x4d0
[ 1518.495479]  ? __switch_to+0x13f/0x4d0
[ 1518.495494]  ? __switch_to_asm+0x40/0x70
[ 1518.495511]  ? finish_task_switch+0x75/0x2a0
[ 1518.495525]  wait_for_completion+0x121/0x190
[ 1518.495543]  ? wake_up_q+0x80/0x80
[ 1518.495559]  flush_work+0x18f/0x200
[ 1518.495573]  ? rcu_free_pwq+0x20/0x20
[ 1518.495585]  __alloc_pages_slowpath+0x766/0x1590
[ 1518.495612]  __alloc_pages_nodemask+0x302/0x3c0
[ 1518.495631]  alloc_pages_vma+0xac/0x4f0
[ 1518.495647]  do_anonymous_page+0x105/0x3f0
[ 1518.495664]  __handle_mm_fault+0xbc9/0xf10
[ 1518.495681]  handle_mm_fault+0x102/0x2c0
[ 1518.495699]  __do_page_fault+0x294/0x540
[ 1518.495716]  do_page_fault+0x38/0x120
[ 1518.495726]  ? page_fault+0x8/0x30
[ 1518.495736]  page_fault+0x1e/0x30
[ 1518.495746] RIP: 0033:0x5b58185b9dd0
[ 1518.495755] Code: Bad RIP value.
[ 1518.495767] RSP: 002b:00007ffd62b2b5e0 EFLAGS: 00010206
[ 1518.495779] RAX: 00000000c484b000 RBX: 00007d7cbda68010 RCX: 00007d7cbda68010
[ 1518.495795] RDX: 0000000000000001 RSI: 000000013dab4000 RDI: 0000000000000000
[ 1518.495812] RBP: 00005b58185babb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1518.495828] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1518.495845] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013dab3000
[ 1518.495863] Mem-Info:
[ 1518.495872] active_anon:2130048 inactive_anon:8253 isolated_anon:0
                active_file:92663 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8262 slab_unreclaimable:13017
                mapped:23016 shmem:8397 pagetables:6435 bounce:0
                free:40279 free_pcp:1417 free_cma:0
[ 1518.495947] Node 0 active_anon:8520192kB inactive_anon:33012kB active_file:370652kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92064kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1518.496026] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1518.496077] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1518.496091] Node 0 DMA32 free:89212kB min:11368kB low:15416kB high:19464kB active_anon:3972040kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7712kB bounce:0kB free_pcp:956kB local_pcp:0kB free_cma:0kB
[ 1518.695121] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1518.695173] Node 0 Normal free:460652kB min:56168kB low:76180kB high:96192kB active_anon:4566336kB inactive_anon:33012kB active_file:371128kB inactive_file:632kB unevictable:47380kB writepending:0kB present:20400128kB managed:5727980kB mlocked:47380kB kernel_stack:4880kB pagetables:18020kB bounce:0kB free_pcp:4844kB local_pcp:424kB free_cma:0kB
[ 1518.695275] lowmem_reserve[]: 0 0 0 0 0
[ 1518.695295] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1518.695364] Node 0 DMA32: 21*4kB (U) 59*8kB (UM) 37*16kB (UM) 14*32kB (U) 13*64kB (U) 4*128kB (UM) 3*256kB (UM) 3*512kB (UM) 0*1024kB 1*2048kB (M) 20*4096kB (M) = 89212kB
[ 1518.695422] Node 0 Normal: 337*4kB (UE) 121*8kB (UME) 103*16kB (UME) 124*32kB (UME) 73*64kB (UME) 67*128kB (UE) 59*256kB (UME) 53*512kB (U) 10*1024kB (U) 5*2048kB (U) 92*4096kB (U) = 460732kB
[ 1518.695482] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1518.695510] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1518.695538] 101163 total pagecache pages
[ 1518.695554] 6143894 pages RAM
[ 1518.695576] 0 pages HighMem/MovableOnly
[ 1518.695591] 3690389 pages reserved
[ 1518.695608] 0 pages cma reserved
[ 1518.695623] 0 pages hwpoisoned
[ 1518.695640] Showing busy workqueues and worker pools:
[ 1518.695680] workqueue events: flags=0x0
[ 1518.695697]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1518.695709]     in-flight: 25:balloon_process
[ 1518.695722]     pending: balloon_process
[ 1518.695735] workqueue mm_percpu_wq: flags=0x8
[ 1518.695742]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1518.695752]     pending: drain_local_pages_wq BAR(29261), vmstat_update
[ 1518.695787] pool 4: cpus=2 node=0 flags=0x0 nice=0 hung=1s workers=2 idle: 98
[ 1518.695802] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1519.711116] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1519.711137] MemAlloc: kswapd0(108) flags=0xa20840 switches=225
[ 1519.711147] kswapd0         S    0   108      2 0x80000000
[ 1519.711157] Call Trace:
[ 1519.711167]  ? __schedule+0x3f3/0x8c0
[ 1519.711174]  schedule+0x36/0x80
[ 1519.711182]  kswapd+0x584/0x590
[ 1519.711198]  ? remove_wait_queue+0x70/0x70
[ 1519.711206]  kthread+0x105/0x140
[ 1519.711213]  ? balance_pgdat+0x3e0/0x3e0
[ 1519.711220]  ? kthread_stop+0x100/0x100
[ 1519.711228]  ret_from_fork+0x35/0x40
[ 1519.711260] MemAlloc: stress(29261) flags=0x404040 switches=3 seq=4002 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=2280 uninterruptible
[ 1519.711285] stress          D    0 29261  29259 0x00000080
[ 1519.711293] Call Trace:
[ 1519.711304]  ? __schedule+0x3f3/0x8c0
[ 1519.711310]  ? __switch_to_asm+0x40/0x70
[ 1519.711321]  ? __switch_to_asm+0x34/0x70
[ 1519.711329]  schedule+0x36/0x80
[ 1519.711340]  schedule_timeout+0x29b/0x4d0
[ 1519.711348]  ? __switch_to+0x13f/0x4d0
[ 1519.711355]  ? __switch_to_asm+0x40/0x70
[ 1519.711368]  ? finish_task_switch+0x75/0x2a0
[ 1519.711379]  wait_for_completion+0x121/0x190
[ 1519.711392]  ? wake_up_q+0x80/0x80
[ 1519.711404]  flush_work+0x18f/0x200
[ 1519.711412]  ? rcu_free_pwq+0x20/0x20
[ 1519.711420]  __alloc_pages_slowpath+0x766/0x1590
[ 1519.711434]  __alloc_pages_nodemask+0x302/0x3c0
[ 1519.711450]  alloc_pages_vma+0xac/0x4f0
[ 1519.711458]  do_anonymous_page+0x105/0x3f0
[ 1519.711470]  __handle_mm_fault+0xbc9/0xf10
[ 1519.711481]  handle_mm_fault+0x102/0x2c0
[ 1519.711489]  __do_page_fault+0x294/0x540
[ 1519.711497]  do_page_fault+0x38/0x120
[ 1519.711509]  ? page_fault+0x8/0x30
[ 1519.711515]  page_fault+0x1e/0x30
[ 1519.711523] RIP: 0033:0x5b58185b9dd0
[ 1519.711534] Code: Bad RIP value.
[ 1519.711543] RSP: 002b:00007ffd62b2b5e0 EFLAGS: 00010206
[ 1519.711557] RAX: 00000000c484b000 RBX: 00007d7cbda68010 RCX: 00007d7cbda68010
[ 1519.711577] RDX: 0000000000000001 RSI: 000000013dab4000 RDI: 0000000000000000
[ 1519.711593] RBP: 00005b58185babb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1519.711607] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1519.711625] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013dab3000
[ 1519.711643] Mem-Info:
[ 1519.711654] active_anon:2134587 inactive_anon:8253 isolated_anon:0
                active_file:92706 inactive_file:8 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8266 slab_unreclaimable:13013
                mapped:22954 shmem:8397 pagetables:6461 bounce:0
                free:700938 free_pcp:1601 free_cma:0
[ 1519.711711] Node 0 active_anon:8538348kB inactive_anon:33012kB active_file:370824kB inactive_file:32kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91816kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1519.711766] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1519.711808] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1519.711823] Node 0 DMA32 free:89212kB min:11368kB low:15416kB high:19464kB active_anon:3972104kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:972kB local_pcp:0kB free_cma:0kB
[ 1519.711871] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1519.711882] Node 0 Normal free:2698636kB min:56168kB low:76180kB high:96192kB active_anon:4566244kB inactive_anon:33012kB active_file:370796kB inactive_file:32kB unevictable:47380kB writepending:0kB present:20400128kB managed:7966444kB mlocked:47380kB kernel_stack:4832kB pagetables:18128kB bounce:0kB free_pcp:5432kB local_pcp:424kB free_cma:0kB
[ 1519.711925] lowmem_reserve[]: 0 0 0 0 0
[ 1519.711933] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1519.711958] Node 0 DMA32: 21*4kB (U) 59*8kB (UM) 37*16kB (UM) 14*32kB (U) 13*64kB (U) 4*128kB (UM) 3*256kB (UM) 3*512kB (UM) 0*1024kB 1*2048kB (M) 20*4096kB (M) = 89212kB
[ 1519.711982] Node 0 Normal: 445*4kB (UME) 241*8kB (UME) 186*16kB (UME) 173*32kB (UME) 115*64kB (UE) 102*128kB (UE) 103*256kB (UE) 85*512kB (U) 43*1024kB (U) 33*2048kB (U) 607*4096kB (U) = 2700412kB
[ 1519.712027] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1519.712044] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1519.712060] 101109 total pagecache pages
[ 1519.712067] 6143894 pages RAM
[ 1519.712073] 0 pages HighMem/MovableOnly
[ 1519.712079] 3130261 pages reserved
[ 1519.712085] 0 pages cma reserved
[ 1519.712092] 0 pages hwpoisoned
[ 1519.712098] Showing busy workqueues and worker pools:
[ 1519.712108] workqueue events: flags=0x0
[ 1519.712116]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1519.712130]     in-flight: 25:balloon_process
[ 1519.712141]     pending: balloon_process
[ 1519.712152] workqueue mm_percpu_wq: flags=0x8
[ 1519.712165]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1519.712178]     pending: drain_local_pages_wq BAR(29261), vmstat_update
[ 1519.712202] pool 4: cpus=2 node=0 flags=0x0 nice=0 hung=2s workers=2 idle: 98
[ 1519.911178] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1520.927121] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1520.927142] MemAlloc: kswapd0(108) flags=0xa20840 switches=225
[ 1520.927155] kswapd0         S    0   108      2 0x80000000
[ 1520.927168] Call Trace:
[ 1520.927179]  ? __schedule+0x3f3/0x8c0
[ 1520.927189]  schedule+0x36/0x80
[ 1520.927201]  kswapd+0x584/0x590
[ 1520.927218]  ? remove_wait_queue+0x70/0x70
[ 1520.927232]  kthread+0x105/0x140
[ 1520.927247]  ? balance_pgdat+0x3e0/0x3e0
[ 1520.927261]  ? kthread_stop+0x100/0x100
[ 1520.927281]  ret_from_fork+0x35/0x40
[ 1520.927317] MemAlloc: stress(29261) flags=0x404040 switches=3 seq=4002 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=3496 uninterruptible
[ 1520.927349] stress          D    0 29261  29259 0x00000080
[ 1520.927365] Call Trace:
[ 1520.927377]  ? __schedule+0x3f3/0x8c0
[ 1520.927392]  ? __switch_to_asm+0x40/0x70
[ 1520.927410]  ? __switch_to_asm+0x34/0x70
[ 1520.927424]  schedule+0x36/0x80
[ 1520.927438]  schedule_timeout+0x29b/0x4d0
[ 1520.927459]  ? __switch_to+0x13f/0x4d0
[ 1520.927476]  ? __switch_to_asm+0x40/0x70
[ 1520.927490]  ? finish_task_switch+0x75/0x2a0
[ 1520.927507]  wait_for_completion+0x121/0x190
[ 1520.927522]  ? wake_up_q+0x80/0x80
[ 1520.927543]  flush_work+0x18f/0x200
[ 1520.927557]  ? rcu_free_pwq+0x20/0x20
[ 1520.927576]  __alloc_pages_slowpath+0x766/0x1590
[ 1520.927595]  __alloc_pages_nodemask+0x302/0x3c0
[ 1520.927606]  alloc_pages_vma+0xac/0x4f0
[ 1520.927615]  do_anonymous_page+0x105/0x3f0
[ 1520.927625]  __handle_mm_fault+0xbc9/0xf10
[ 1520.927636]  handle_mm_fault+0x102/0x2c0
[ 1520.927646]  __do_page_fault+0x294/0x540
[ 1520.927655]  do_page_fault+0x38/0x120
[ 1520.927664]  ? page_fault+0x8/0x30
[ 1520.927672]  page_fault+0x1e/0x30
[ 1520.927681] RIP: 0033:0x5b58185b9dd0
[ 1520.927688] Code: Bad RIP value.
[ 1520.927699] RSP: 002b:00007ffd62b2b5e0 EFLAGS: 00010206
[ 1520.927709] RAX: 00000000c484b000 RBX: 00007d7cbda68010 RCX: 00007d7cbda68010
[ 1520.927724] RDX: 0000000000000001 RSI: 000000013dab4000 RDI: 0000000000000000
[ 1520.927738] RBP: 00005b58185babb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1520.927753] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1520.927768] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013dab3000
[ 1520.927783] Mem-Info:
[ 1520.927792] active_anon:2134599 inactive_anon:8253 isolated_anon:0
                active_file:92706 inactive_file:8 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8266 slab_unreclaimable:13021
                mapped:22960 shmem:8397 pagetables:6461 bounce:0
                free:1440278 free_pcp:1600 free_cma:0
[ 1520.927851] Node 0 active_anon:8538396kB inactive_anon:33012kB active_file:370824kB inactive_file:32kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91840kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1520.927897] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1520.927941] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1521.126910] Node 0 DMA32 free:89212kB min:11368kB low:15416kB high:19464kB active_anon:3972104kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:972kB local_pcp:0kB free_cma:0kB
[ 1521.126990] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1521.127022] Node 0 Normal free:6143168kB min:56168kB low:76180kB high:96192kB active_anon:4566292kB inactive_anon:33012kB active_file:370796kB inactive_file:32kB unevictable:47380kB writepending:0kB present:20400128kB managed:11411180kB mlocked:47380kB kernel_stack:4832kB pagetables:18128kB bounce:0kB free_pcp:5524kB local_pcp:424kB free_cma:0kB
[ 1521.127135] lowmem_reserve[]: 0 0 0 0 0
[ 1521.127153] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1521.127204] Node 0 DMA32: 21*4kB (U) 59*8kB (UM) 37*16kB (UM) 14*32kB (U) 13*64kB (U) 4*128kB (UM) 3*256kB (UM) 3*512kB (UM) 0*1024kB 1*2048kB (M) 20*4096kB (M) = 89212kB
[ 1521.127259] Node 0 Normal: 505*4kB (UME) 280*8kB (UME) 222*16kB (UME) 199*32kB (UE) 139*64kB (UME) 135*128kB (UE) 130*256kB (UE) 103*512kB (U) 66*1024kB (U) 57*2048kB (U) 1424*4096kB (U) = 6143396kB
[ 1521.127323] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1521.127355] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1521.127388] 101109 total pagecache pages
[ 1521.127404] 6143894 pages RAM
[ 1521.127413] 0 pages HighMem/MovableOnly
[ 1521.127426] 2269589 pages reserved
[ 1521.127434] 0 pages cma reserved
[ 1521.127441] 0 pages hwpoisoned
[ 1521.127450] Showing busy workqueues and worker pools:
[ 1521.127462] workqueue events: flags=0x0
[ 1521.127469]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1521.127479]     in-flight: 25:balloon_process
[ 1521.127489]     pending: balloon_process
[ 1521.127500] workqueue mm_percpu_wq: flags=0x8
[ 1521.127506]   pwq 20: cpus=10 node=0 flags=0x0 nice=0 active=1/256
[ 1521.127517]     pending: vmstat_update
[ 1521.127526]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1521.127541]     pending: drain_local_pages_wq BAR(29261), vmstat_update
[ 1521.127566] pool 4: cpus=2 node=0 flags=0x0 nice=0 hung=3s workers=2 idle: 98
[ 1521.127583] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1522.143189] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1522.143209] MemAlloc: kswapd0(108) flags=0xa20840 switches=225
[ 1522.143219] kswapd0         S    0   108      2 0x80000000
[ 1522.143243] Call Trace:
[ 1522.143258]  ? __schedule+0x3f3/0x8c0
[ 1522.143266]  schedule+0x36/0x80
[ 1522.143274]  kswapd+0x584/0x590
[ 1522.143288]  ? remove_wait_queue+0x70/0x70
[ 1522.143300]  kthread+0x105/0x140
[ 1522.143310]  ? balance_pgdat+0x3e0/0x3e0
[ 1522.143321]  ? kthread_stop+0x100/0x100
[ 1522.143333]  ret_from_fork+0x35/0x40
[ 1522.143368] MemAlloc: stress(29261) flags=0x404040 switches=3 seq=4002 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=4712 uninterruptible
[ 1522.143398] stress          D    0 29261  29259 0x00000080
[ 1522.143411] Call Trace:
[ 1522.143418]  ? __schedule+0x3f3/0x8c0
[ 1522.143426]  ? __switch_to_asm+0x40/0x70
[ 1522.143437]  ? __switch_to_asm+0x34/0x70
[ 1522.143447]  schedule+0x36/0x80
[ 1522.143460]  schedule_timeout+0x29b/0x4d0
[ 1522.143473]  ? __switch_to+0x13f/0x4d0
[ 1522.143483]  ? __switch_to_asm+0x40/0x70
[ 1522.143495]  ? finish_task_switch+0x75/0x2a0
[ 1522.143508]  wait_for_completion+0x121/0x190
[ 1522.143523]  ? wake_up_q+0x80/0x80
[ 1522.143535]  flush_work+0x18f/0x200
[ 1522.143546]  ? rcu_free_pwq+0x20/0x20
[ 1522.143558]  __alloc_pages_slowpath+0x766/0x1590
[ 1522.143573]  __alloc_pages_nodemask+0x302/0x3c0
[ 1522.143592]  alloc_pages_vma+0xac/0x4f0
[ 1522.143605]  do_anonymous_page+0x105/0x3f0
[ 1522.143618]  __handle_mm_fault+0xbc9/0xf10
[ 1522.143631]  handle_mm_fault+0x102/0x2c0
[ 1522.143644]  __do_page_fault+0x294/0x540
[ 1522.143658]  do_page_fault+0x38/0x120
[ 1522.143666]  ? page_fault+0x8/0x30
[ 1522.143673]  page_fault+0x1e/0x30
[ 1522.143681] RIP: 0033:0x5b58185b9dd0
[ 1522.143689] Code: Bad RIP value.
[ 1522.143700] RSP: 002b:00007ffd62b2b5e0 EFLAGS: 00010206
[ 1522.143710] RAX: 00000000c484b000 RBX: 00007d7cbda68010 RCX: 00007d7cbda68010
[ 1522.143724] RDX: 0000000000000001 RSI: 000000013dab4000 RDI: 0000000000000000
[ 1522.143737] RBP: 00005b58185babb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1522.143750] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1522.143760] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013dab3000
[ 1522.143770] Mem-Info:
[ 1522.143777] active_anon:2134664 inactive_anon:8253 isolated_anon:0
                active_file:92706 inactive_file:8 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8266 slab_unreclaimable:13032
                mapped:22969 shmem:8397 pagetables:6425 bounce:0
                free:2200882 free_pcp:1744 free_cma:0
[ 1522.143824] Node 0 active_anon:8538656kB inactive_anon:33012kB active_file:370824kB inactive_file:32kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91876kB dirty:0kB writeback:0kB shmem:33588kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1522.143862] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1522.143898] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1522.143907] Node 0 DMA32 free:89212kB min:11368kB low:15416kB high:19464kB active_anon:3972104kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7716kB bounce:0kB free_pcp:972kB local_pcp:0kB free_cma:0kB
[ 1522.342977] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1522.342994] Node 0 Normal free:9183788kB min:56168kB low:76180kB high:96192kB active_anon:4566260kB inactive_anon:33012kB active_file:370796kB inactive_file:32kB unevictable:47380kB writepending:0kB present:20400128kB managed:14452460kB mlocked:47380kB kernel_stack:4828kB pagetables:17984kB bounce:0kB free_pcp:6024kB local_pcp:424kB free_cma:0kB
[ 1522.343053] lowmem_reserve[]: 0 0 0 0 0
[ 1522.343078] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1522.343112] Node 0 DMA32: 21*4kB (U) 59*8kB (UM) 37*16kB (UM) 14*32kB (U) 13*64kB (U) 4*128kB (UM) 3*256kB (UM) 3*512kB (UM) 0*1024kB 1*2048kB (M) 20*4096kB (M) = 89212kB
[ 1522.343157] Node 0 Normal: 777*4kB (UME) 553*8kB (UME) 536*16kB (UME) 485*32kB (UME) 390*64kB (UME) 277*128kB (UME) 222*256kB (UME) 173*512kB (U) 125*1024kB (U) 106*2048kB (U) 2100*4096kB (U) = 9184140kB
[ 1522.343204] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1522.343225] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1522.343245] 101109 total pagecache pages
[ 1522.343258] 6143894 pages RAM
[ 1522.343269] 0 pages HighMem/MovableOnly
[ 1522.343276] 1509269 pages reserved
[ 1522.343292] 0 pages cma reserved
[ 1522.343304] 0 pages hwpoisoned
[ 1522.343311] Showing busy workqueues and worker pools:
[ 1522.343329] workqueue events: flags=0x0
[ 1522.343337]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1522.343352]     in-flight: 25:balloon_process
[ 1522.343365]     pending: balloon_process
[ 1522.343379] workqueue mm_percpu_wq: flags=0x8
[ 1522.343390]   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 1522.343404]     pending: drain_local_pages_wq BAR(29261), vmstat_update
[ 1522.343427] pool 4: cpus=2 node=0 flags=0x0 nice=0 hung=5s workers=2 idle: 98
[ 1522.343446] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2

OK, ran that again. Watching with sudo iotop -d 5, there were three updates (each 5 seconds apart) showing approx. 900 K/s, then 800 K/s, then 24 K/s of disk reading during the run. That is what I'd call nonexistent disk thrashing, although if it were truly nonexistent it should presumably have been 0 B/s:

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?
stress: info: [808] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: info: [808] successful run completed in 10s

real	0m10.232s
user	0m11.112s
sys	0m3.358s
0
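Side note on the stress invocation above: the awk expression sizes the allocation to MemAvailable plus a 4000 kB overshoot, so the two vm workers together slightly exceed available RAM and force reclaim. A minimal sketch of that computation, using a made-up sample MemAvailable line instead of reading the live /proc/meminfo (the 8000000 kB value is hypothetical):

```shell
#!/bin/sh
# Hypothetical /proc/meminfo line; on a real system you would read
# /proc/meminfo directly, as in the stress command above.
sample='MemAvailable:    8000000 kB'

# Same awk as the stress invocation: field 2 is kB, plus a 4000 kB overshoot.
vm_kb=$(printf '%s\n' "$sample" | awk '/MemAvailable/{printf "%d\n", $2 + 4000;}')

# This is the value passed to --vm-bytes (note the trailing k for kilobytes).
echo "stress would be asked for ${vm_kb}k"
```

Because the total requested exceeds what is available, the allocation is guaranteed to push the system into the near-OOM region being tested here.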
dmesg

(the first line below is left over from the previous dmesg output)

[ 1522.343446] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1727.135109] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1727.135134] MemAlloc: kswapd0(108) flags=0xa20840 switches=248
[ 1727.135150] kswapd0         S    0   108      2 0x80000000
[ 1727.135161] Call Trace:
[ 1727.135174]  ? __schedule+0x3f3/0x8c0
[ 1727.135182]  schedule+0x36/0x80
[ 1727.135191]  kswapd+0x584/0x590
[ 1727.135200]  ? remove_wait_queue+0x70/0x70
[ 1727.135216]  kthread+0x105/0x140
[ 1727.135224]  ? balance_pgdat+0x3e0/0x3e0
[ 1727.135230]  ? kthread_stop+0x100/0x100
[ 1727.135238]  ret_from_fork+0x35/0x40
[ 1727.135261] MemAlloc: stress(810) flags=0x404040 switches=5 seq=4375 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1440 uninterruptible
[ 1727.135286] stress          D    0   810    808 0x00000080
[ 1727.135300] Call Trace:
[ 1727.135312]  ? __schedule+0x3f3/0x8c0
[ 1727.135320]  ? __switch_to_asm+0x40/0x70
[ 1727.135332]  ? __switch_to_asm+0x34/0x70
[ 1727.135339]  schedule+0x36/0x80
[ 1727.135352]  schedule_timeout+0x29b/0x4d0
[ 1727.135376]  ? __switch_to+0x13f/0x4d0
[ 1727.135389]  ? __switch_to_asm+0x40/0x70
[ 1727.135401]  ? finish_task_switch+0x75/0x2a0
[ 1727.135410]  wait_for_completion+0x121/0x190
[ 1727.135425]  ? wake_up_q+0x80/0x80
[ 1727.135436]  flush_work+0x18f/0x200
[ 1727.135443]  ? rcu_free_pwq+0x20/0x20
[ 1727.135455]  __alloc_pages_slowpath+0x766/0x1590
[ 1727.135470]  __alloc_pages_nodemask+0x302/0x3c0
[ 1727.135480]  alloc_pages_vma+0xac/0x4f0
[ 1727.135487]  do_anonymous_page+0x105/0x3f0
[ 1727.135502]  __handle_mm_fault+0xbc9/0xf10
[ 1727.135510]  handle_mm_fault+0x102/0x2c0
[ 1727.135523]  __do_page_fault+0x294/0x540
[ 1727.135536]  ? page_fault+0x8/0x30
[ 1727.135548]  do_page_fault+0x38/0x120
[ 1727.135559]  ? page_fault+0x8/0x30
[ 1727.135570]  page_fault+0x1e/0x30
[ 1727.135580] RIP: 0033:0x5cddefbcadd0
[ 1727.135592] Code: Bad RIP value.
[ 1727.135622] RSP: 002b:00007ffdd191b430 EFLAGS: 00010206
[ 1727.135632] RAX: 00000000b94b1000 RBX: 0000764561a5c010 RCX: 0000764561a5c010
[ 1727.135649] RDX: 0000000000000001 RSI: 000000013bd2b000 RDI: 0000000000000000
[ 1727.135671] RBP: 00005cddefbcbbb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1727.135696] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1727.135728] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013bd2a000
[ 1727.135748] Mem-Info:
[ 1727.135759] active_anon:2080928 inactive_anon:8253 isolated_anon:0
                active_file:92675 inactive_file:69 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8251 slab_unreclaimable:13103
                mapped:22972 shmem:8401 pagetables:6294 bounce:0
                free:107704 free_pcp:1590 free_cma:0
[ 1727.135809] Node 0 active_anon:8323712kB inactive_anon:33012kB active_file:370700kB inactive_file:276kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91888kB dirty:0kB writeback:0kB shmem:33604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1727.135859] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1727.135904] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1727.135914] Node 0 DMA32 free:89108kB min:11368kB low:15416kB high:19464kB active_anon:3972048kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7740kB bounce:0kB free_pcp:1104kB local_pcp:16kB free_cma:0kB
[ 1727.135963] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1727.135974] Node 0 Normal free:325804kB min:56168kB low:76180kB high:96192kB active_anon:4351872kB inactive_anon:33012kB active_file:370672kB inactive_file:12kB unevictable:47380kB writepending:0kB present:20400128kB managed:5379588kB mlocked:47380kB kernel_stack:4832kB pagetables:17436kB bounce:0kB free_pcp:5256kB local_pcp:612kB free_cma:0kB
[ 1727.136042] lowmem_reserve[]: 0 0 0 0 0
[ 1727.136050] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1727.136074] Node 0 DMA32: 1*4kB (U) 8*8kB (UM) 17*16kB (UM) 10*32kB (U) 8*64kB (UM) 5*128kB (UM) 1*256kB (U) 2*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89108kB
[ 1727.136098] Node 0 Normal: 351*4kB (UME) 110*8kB (UME) 102*16kB (UE) 161*32kB (UE) 80*64kB (UE) 59*128kB (UME) 54*256kB (UME) 44*512kB (U) 4*1024kB (U) 4*2048kB (U) 63*4096kB (U) = 328428kB
[ 1727.136130] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1727.136144] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1727.136158] 101231 total pagecache pages
[ 1727.136166] 6143894 pages RAM
[ 1727.136172] 0 pages HighMem/MovableOnly
[ 1727.136179] 3776975 pages reserved
[ 1727.136186] 0 pages cma reserved
[ 1727.136195] 0 pages hwpoisoned
[ 1727.136202] Showing busy workqueues and worker pools:
[ 1727.136212] workqueue events: flags=0x0
[ 1727.136218]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1727.136232]     in-flight: 230:balloon_process
[ 1727.136242]     pending: balloon_process
[ 1727.136259] workqueue mm_percpu_wq: flags=0x8
[ 1727.136269]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1727.335208]     pending: drain_local_pages_wq BAR(810), vmstat_update
[ 1727.335280] pool 18: cpus=9 node=0 flags=0x0 nice=0 hung=1s workers=2 idle: 763
[ 1727.335302] pool 24: cpus=0-11 flags=0x4 nice=0 hung=0s workers=3 idle: 30900 293
[ 1727.335344] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1728.351089] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1728.351109] MemAlloc: kswapd0(108) flags=0xa20840 switches=248
[ 1728.351120] kswapd0         S    0   108      2 0x80000000
[ 1728.351145] Call Trace:
[ 1728.351160]  ? __schedule+0x3f3/0x8c0
[ 1728.351167]  schedule+0x36/0x80
[ 1728.351179]  kswapd+0x584/0x590
[ 1728.351192]  ? remove_wait_queue+0x70/0x70
[ 1728.351204]  kthread+0x105/0x140
[ 1728.351215]  ? balance_pgdat+0x3e0/0x3e0
[ 1728.351226]  ? kthread_stop+0x100/0x100
[ 1728.351254]  ret_from_fork+0x35/0x40
[ 1728.351289] MemAlloc: stress(810) flags=0x404040 switches=5 seq=4375 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=2656 uninterruptible
[ 1728.351314] stress          D    0   810    808 0x00000080
[ 1728.351326] Call Trace:
[ 1728.351337]  ? __schedule+0x3f3/0x8c0
[ 1728.351343]  ? __switch_to_asm+0x40/0x70
[ 1728.351354]  ? __switch_to_asm+0x34/0x70
[ 1728.351362]  schedule+0x36/0x80
[ 1728.351372]  schedule_timeout+0x29b/0x4d0
[ 1728.351384]  ? __switch_to+0x13f/0x4d0
[ 1728.351395]  ? __switch_to_asm+0x40/0x70
[ 1728.351413]  ? finish_task_switch+0x75/0x2a0
[ 1728.351427]  wait_for_completion+0x121/0x190
[ 1728.351440]  ? wake_up_q+0x80/0x80
[ 1728.351452]  flush_work+0x18f/0x200
[ 1728.351464]  ? rcu_free_pwq+0x20/0x20
[ 1728.351476]  __alloc_pages_slowpath+0x766/0x1590
[ 1728.351485]  __alloc_pages_nodemask+0x302/0x3c0
[ 1728.351499]  alloc_pages_vma+0xac/0x4f0
[ 1728.351511]  do_anonymous_page+0x105/0x3f0
[ 1728.351524]  __handle_mm_fault+0xbc9/0xf10
[ 1728.351537]  handle_mm_fault+0x102/0x2c0
[ 1728.351560]  __do_page_fault+0x294/0x540
[ 1728.351586]  ? page_fault+0x8/0x30
[ 1728.351602]  do_page_fault+0x38/0x120
[ 1728.351609]  ? page_fault+0x8/0x30
[ 1728.351615]  page_fault+0x1e/0x30
[ 1728.351628] RIP: 0033:0x5cddefbcadd0
[ 1728.351634] Code: Bad RIP value.
[ 1728.351644] RSP: 002b:00007ffdd191b430 EFLAGS: 00010206
[ 1728.351652] RAX: 00000000b94b1000 RBX: 0000764561a5c010 RCX: 0000764561a5c010
[ 1728.351665] RDX: 0000000000000001 RSI: 000000013bd2b000 RDI: 0000000000000000
[ 1728.351677] RBP: 00005cddefbcbbb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1728.351688] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1728.351699] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013bd2a000
[ 1728.351712] Mem-Info:
[ 1728.351718] active_anon:2081047 inactive_anon:8253 isolated_anon:0
                active_file:92706 inactive_file:4 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8255 slab_unreclaimable:13087
                mapped:22986 shmem:8401 pagetables:6355 bounce:0
                free:832596 free_pcp:1743 free_cma:0
[ 1728.351765] Node 0 active_anon:8324188kB inactive_anon:33012kB active_file:370824kB inactive_file:16kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91944kB dirty:0kB writeback:0kB shmem:33604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1728.351801] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1728.351832] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1728.351842] Node 0 DMA32 free:89108kB min:11368kB low:15416kB high:19464kB active_anon:3972048kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7740kB bounce:0kB free_pcp:1104kB local_pcp:16kB free_cma:0kB
[ 1728.550909] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1728.550927] Node 0 Normal free:3715080kB min:56168kB low:76180kB high:96192kB active_anon:4352140kB inactive_anon:33012kB active_file:370796kB inactive_file:16kB unevictable:47380kB writepending:0kB present:20400128kB managed:8769028kB mlocked:47380kB kernel_stack:4832kB pagetables:17680kB bounce:0kB free_pcp:5780kB local_pcp:612kB free_cma:0kB
[ 1728.550985] lowmem_reserve[]: 0 0 0 0 0
[ 1728.550999] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1728.551088] Node 0 DMA32: 1*4kB (U) 8*8kB (UM) 17*16kB (UM) 10*32kB (U) 8*64kB (UM) 5*128kB (UM) 1*256kB (U) 2*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89108kB
[ 1728.551128] Node 0 Normal: 535*4kB (UME) 235*8kB (UME) 183*16kB (UME) 208*32kB (UME) 126*64kB (UE) 101*128kB (UME) 92*256kB (UE) 87*512kB (U) 46*1024kB (U) 39*2048kB (U) 851*4096kB (U) = 3715364kB
[ 1728.551167] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1728.551189] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1728.551204] 101109 total pagecache pages
[ 1728.551214] 6143894 pages RAM
[ 1728.551222] 0 pages HighMem/MovableOnly
[ 1728.551232] 2930127 pages reserved
[ 1728.551237] 0 pages cma reserved
[ 1728.551246] 0 pages hwpoisoned
[ 1728.551251] Showing busy workqueues and worker pools:
[ 1728.551262] workqueue events: flags=0x0
[ 1728.551268]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1728.551279]     in-flight: 230:balloon_process
[ 1728.551296]     pending: balloon_process
[ 1728.551314] workqueue mm_percpu_wq: flags=0x8
[ 1728.551334]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1728.551350]     pending: drain_local_pages_wq BAR(810), vmstat_update
[ 1728.551393] pool 18: cpus=9 node=0 flags=0x0 nice=0 hung=2s workers=2 idle: 763
[ 1728.551410] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1729.567119] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1729.567136] MemAlloc: kswapd0(108) flags=0xa20840 switches=248
[ 1729.567146] kswapd0         S    0   108      2 0x80000000
[ 1729.567155] Call Trace:
[ 1729.567165]  ? __schedule+0x3f3/0x8c0
[ 1729.567172]  schedule+0x36/0x80
[ 1729.567180]  kswapd+0x584/0x590
[ 1729.567188]  ? remove_wait_queue+0x70/0x70
[ 1729.567202]  kthread+0x105/0x140
[ 1729.567209]  ? balance_pgdat+0x3e0/0x3e0
[ 1729.567216]  ? kthread_stop+0x100/0x100
[ 1729.567230]  ret_from_fork+0x35/0x40
[ 1729.567253] MemAlloc: stress(810) flags=0x404040 switches=5 seq=4375 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=3872 uninterruptible
[ 1729.567278] stress          D    0   810    808 0x00000080
[ 1729.567291] Call Trace:
[ 1729.567302]  ? __schedule+0x3f3/0x8c0
[ 1729.567314]  ? __switch_to_asm+0x40/0x70
[ 1729.567324]  ? __switch_to_asm+0x34/0x70
[ 1729.567336]  schedule+0x36/0x80
[ 1729.567346]  schedule_timeout+0x29b/0x4d0
[ 1729.567354]  ? __switch_to+0x13f/0x4d0
[ 1729.567363]  ? __switch_to_asm+0x40/0x70
[ 1729.567373]  ? finish_task_switch+0x75/0x2a0
[ 1729.567388]  wait_for_completion+0x121/0x190
[ 1729.567406]  ? wake_up_q+0x80/0x80
[ 1729.567419]  flush_work+0x18f/0x200
[ 1729.567430]  ? rcu_free_pwq+0x20/0x20
[ 1729.567443]  __alloc_pages_slowpath+0x766/0x1590
[ 1729.567463]  __alloc_pages_nodemask+0x302/0x3c0
[ 1729.567477]  alloc_pages_vma+0xac/0x4f0
[ 1729.567485]  do_anonymous_page+0x105/0x3f0
[ 1729.567497]  __handle_mm_fault+0xbc9/0xf10
[ 1729.567510]  handle_mm_fault+0x102/0x2c0
[ 1729.567523]  __do_page_fault+0x294/0x540
[ 1729.567537]  ? page_fault+0x8/0x30
[ 1729.567545]  do_page_fault+0x38/0x120
[ 1729.567560]  ? page_fault+0x8/0x30
[ 1729.567567]  page_fault+0x1e/0x30
[ 1729.567579] RIP: 0033:0x5cddefbcadd0
[ 1729.567586] Code: Bad RIP value.
[ 1729.567595] RSP: 002b:00007ffdd191b430 EFLAGS: 00010206
[ 1729.567603] RAX: 00000000b94b1000 RBX: 0000764561a5c010 RCX: 0000764561a5c010
[ 1729.567615] RDX: 0000000000000001 RSI: 000000013bd2b000 RDI: 0000000000000000
[ 1729.567626] RBP: 00005cddefbcbbb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1729.567637] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1729.567650] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013bd2a000
[ 1729.567662] Mem-Info:
[ 1729.567668] active_anon:2081048 inactive_anon:8253 isolated_anon:0
                active_file:92706 inactive_file:4 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8255 slab_unreclaimable:13097
                mapped:22996 shmem:8401 pagetables:6355 bounce:0
                free:1590349 free_pcp:1789 free_cma:0
[ 1729.567715] Node 0 active_anon:8324192kB inactive_anon:33012kB active_file:370824kB inactive_file:16kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:91984kB dirty:0kB writeback:0kB shmem:33604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1729.567752] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1729.567786] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1729.567796] Node 0 DMA32 free:89108kB min:11368kB low:15416kB high:19464kB active_anon:3972048kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7740kB bounce:0kB free_pcp:1104kB local_pcp:16kB free_cma:0kB
[ 1729.567834] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1729.766873] Node 0 Normal free:6745912kB min:56168kB low:76180kB high:96192kB active_anon:4352112kB inactive_anon:33012kB active_file:370796kB inactive_file:16kB unevictable:47380kB writepending:0kB present:20400128kB managed:11800068kB mlocked:47380kB kernel_stack:4832kB pagetables:17680kB bounce:0kB free_pcp:5960kB local_pcp:612kB free_cma:0kB
[ 1729.766927] lowmem_reserve[]: 0 0 0 0 0
[ 1729.766954] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1729.767006] Node 0 DMA32: 1*4kB (U) 8*8kB (UM) 17*16kB (UM) 10*32kB (U) 8*64kB (UM) 5*128kB (UM) 1*256kB (U) 2*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89108kB
[ 1729.767060] Node 0 Normal: 565*4kB (UME) 280*8kB (UME) 249*16kB (UME) 249*32kB (UME) 150*64kB (UE) 129*128kB (UME) 106*256kB (UE) 110*512kB (U) 71*1024kB (U) 63*2048kB (U) 1567*4096kB (U) = 6746180kB
[ 1729.767105] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1729.767128] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1729.767147] 101109 total pagecache pages
[ 1729.767158] 6143894 pages RAM
[ 1729.767170] 0 pages HighMem/MovableOnly
[ 1729.767181] 2172367 pages reserved
[ 1729.767192] 0 pages cma reserved
[ 1729.767203] 0 pages hwpoisoned
[ 1729.767214] Showing busy workqueues and worker pools:
[ 1729.767225] workqueue events: flags=0x0
[ 1729.767239]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1729.767254]     in-flight: 230:balloon_process
[ 1729.767265]     pending: balloon_process
[ 1729.767279] workqueue mm_percpu_wq: flags=0x8
[ 1729.767287]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1729.767299]     pending: drain_local_pages_wq BAR(810), vmstat_update
[ 1729.767322] pool 18: cpus=9 node=0 flags=0x0 nice=0 hung=4s workers=2 idle: 763
[ 1729.767336] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1730.783155] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1730.783191] MemAlloc: kswapd0(108) flags=0xa20840 switches=248
[ 1730.783208] kswapd0         S    0   108      2 0x80000000
[ 1730.783233] Call Trace:
[ 1730.783250]  ? __schedule+0x3f3/0x8c0
[ 1730.783270]  schedule+0x36/0x80
[ 1730.783290]  kswapd+0x584/0x590
[ 1730.783321]  ? remove_wait_queue+0x70/0x70
[ 1730.783338]  kthread+0x105/0x140
[ 1730.783354]  ? balance_pgdat+0x3e0/0x3e0
[ 1730.783370]  ? kthread_stop+0x100/0x100
[ 1730.783386]  ret_from_fork+0x35/0x40
[ 1730.783435] MemAlloc: stress(810) flags=0x404040 switches=5 seq=4375 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=5088 uninterruptible
[ 1730.783472] stress          D    0   810    808 0x00000080
[ 1730.783491] Call Trace:
[ 1730.783507]  ? __schedule+0x3f3/0x8c0
[ 1730.783528]  ? __switch_to_asm+0x40/0x70
[ 1730.783552]  ? __switch_to_asm+0x34/0x70
[ 1730.783563]  schedule+0x36/0x80
[ 1730.783580]  schedule_timeout+0x29b/0x4d0
[ 1730.783597]  ? __switch_to+0x13f/0x4d0
[ 1730.783612]  ? __switch_to_asm+0x40/0x70
[ 1730.783627]  ? finish_task_switch+0x75/0x2a0
[ 1730.783644]  wait_for_completion+0x121/0x190
[ 1730.783671]  ? wake_up_q+0x80/0x80
[ 1730.783689]  flush_work+0x18f/0x200
[ 1730.783706]  ? rcu_free_pwq+0x20/0x20
[ 1730.783716]  __alloc_pages_slowpath+0x766/0x1590
[ 1730.783739]  __alloc_pages_nodemask+0x302/0x3c0
[ 1730.783769]  alloc_pages_vma+0xac/0x4f0
[ 1730.783786]  do_anonymous_page+0x105/0x3f0
[ 1730.783802]  __handle_mm_fault+0xbc9/0xf10
[ 1730.783813]  handle_mm_fault+0x102/0x2c0
[ 1730.783828]  __do_page_fault+0x294/0x540
[ 1730.783839]  ? page_fault+0x8/0x30
[ 1730.783849]  do_page_fault+0x38/0x120
[ 1730.783860]  ? page_fault+0x8/0x30
[ 1730.783870]  page_fault+0x1e/0x30
[ 1730.783881] RIP: 0033:0x5cddefbcadd0
[ 1730.783890] Code: Bad RIP value.
[ 1730.783903] RSP: 002b:00007ffdd191b430 EFLAGS: 00010206
[ 1730.783916] RAX: 00000000b94b1000 RBX: 0000764561a5c010 RCX: 0000764561a5c010
[ 1730.783932] RDX: 0000000000000001 RSI: 000000013bd2b000 RDI: 0000000000000000
[ 1730.783949] RBP: 00005cddefbcbbb4 R08: 00000000ffffffff R09: 0000000000000000
[ 1730.783966] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 1730.783983] R13: 0000000000000002 R14: 0000000000001000 R15: 000000013bd2a000
[ 1730.784025] Mem-Info:
[ 1730.784035] active_anon:2081045 inactive_anon:8253 isolated_anon:0
                active_file:92706 inactive_file:4 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8255 slab_unreclaimable:13095
                mapped:23002 shmem:8401 pagetables:6355 bounce:0
                free:2349678 free_pcp:1750 free_cma:0
[ 1730.784106] Node 0 active_anon:8324180kB inactive_anon:33012kB active_file:370824kB inactive_file:16kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92008kB dirty:0kB writeback:0kB shmem:33604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1730.784161] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1730.784217] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1730.784232] Node 0 DMA32 free:89108kB min:11368kB low:15416kB high:19464kB active_anon:3972048kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:0kB pagetables:7740kB bounce:0kB free_pcp:1104kB local_pcp:16kB free_cma:0kB
[ 1730.983139] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1730.983156] Node 0 Normal free:9780888kB min:56168kB low:76180kB high:96192kB active_anon:4352336kB inactive_anon:33012kB active_file:370796kB inactive_file:16kB unevictable:47380kB writepending:0kB present:20400128kB managed:14835204kB mlocked:47380kB kernel_stack:4832kB pagetables:17692kB bounce:0kB free_pcp:5988kB local_pcp:612kB free_cma:0kB
[ 1730.983245] lowmem_reserve[]: 0 0 0 0 0
[ 1730.983257] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1730.983294] Node 0 DMA32: 1*4kB (U) 8*8kB (UM) 17*16kB (UM) 10*32kB (U) 8*64kB (UM) 5*128kB (UM) 1*256kB (U) 2*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89108kB
[ 1730.983332] Node 0 Normal: 594*4kB (UME) 324*8kB (UME) 278*16kB (UE) 280*32kB (UE) 172*64kB (UE) 143*128kB (UME) 134*256kB (UE) 134*512kB (U) 89*1024kB (U) 76*2048kB (U) 2291*4096kB (U) = 9781320kB
[ 1730.983383] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1730.983411] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1730.983429] 101112 total pagecache pages
[ 1730.983440] 6143894 pages RAM
[ 1730.983452] 0 pages HighMem/MovableOnly
[ 1730.983463] 1413583 pages reserved
[ 1730.983473] 0 pages cma reserved
[ 1730.983479] 0 pages hwpoisoned
[ 1730.983491] Showing busy workqueues and worker pools:
[ 1730.983504] workqueue events: flags=0x0
[ 1730.983518]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1730.983533]     in-flight: 230:balloon_process
[ 1730.983545]     pending: balloon_process
[ 1730.983558] workqueue mm_percpu_wq: flags=0x8
[ 1730.983567]   pwq 18: cpus=9 node=0 flags=0x0 nice=0 active=2/256
[ 1730.983578]     pending: drain_local_pages_wq BAR(810), vmstat_update
[ 1730.983603] pool 18: cpus=9 node=0 flags=0x0 nice=0 hung=5s workers=2 idle: 763
[ 1730.983618] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2


@constantoverride

commented Aug 31, 2018

On the next run I watched the terminal refreshing this watch -n0.1 -d cat /proc/meminfo and it froze for about 3 seconds during this:
(So I think there is merit to what I was wondering above: trigger the OOM-killer as soon as stalling is detected. I'm just hoping that stalling only happens during disk thrashing caused by high memory pressure, and not at other times — for example when high CPU load causes allocation stalls even with PLENTY of RAM free! Invoking the OOM-killer then would probably be bad: it would kill the process using the most RAM (highest oom_score?) while plenty of RAM is still available, all because a memory-allocation stall was detected.):

$ time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?
stress: info: [5076] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: info: [5076] successful run completed in 10s

real	0m10.261s
user	0m14.043s
sys	0m6.254s
0
dmesg
[ 1730.983618] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1947.039485] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1947.039502] MemAlloc: kswapd0(108) flags=0xa20840 switches=252
[ 1947.039512] kswapd0         S    0   108      2 0x80000000
[ 1947.039521] Call Trace:
[ 1947.039530]  ? __schedule+0x3f3/0x8c0
[ 1947.039543]  schedule+0x36/0x80
[ 1947.039550]  kswapd+0x584/0x590
[ 1947.039558]  ? remove_wait_queue+0x70/0x70
[ 1947.039565]  kthread+0x105/0x140
[ 1947.039572]  ? balance_pgdat+0x3e0/0x3e0
[ 1947.039582]  ? kthread_stop+0x100/0x100
[ 1947.039590]  ret_from_fork+0x35/0x40
[ 1947.039604] MemAlloc: sh(5096) flags=0x404000 switches=2 seq=50 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=1766 uninterruptible
[ 1947.039626] sh              D    0  5096   5095 0x00000080
[ 1947.039634] Call Trace:
[ 1947.039643]  ? __schedule+0x3f3/0x8c0
[ 1947.039651]  ? vmpressure+0x2d/0x180
[ 1947.039661]  schedule+0x36/0x80
[ 1947.039668]  schedule_timeout+0x29b/0x4d0
[ 1947.039676]  wait_for_completion+0x121/0x190
[ 1947.039688]  ? wake_up_q+0x80/0x80
[ 1947.039699]  flush_work+0x18f/0x200
[ 1947.039706]  ? rcu_free_pwq+0x20/0x20
[ 1947.039714]  __alloc_pages_slowpath+0x766/0x1590
[ 1947.039726]  ? find_get_entry+0x1e/0x190
[ 1947.039733]  __alloc_pages_nodemask+0x302/0x3c0
[ 1947.039746]  alloc_pages_vma+0xac/0x4f0
[ 1947.039757]  do_anonymous_page+0x105/0x3f0
[ 1947.039764]  __handle_mm_fault+0xbc9/0xf10
[ 1947.039775]  ? do_mmap+0x463/0x5b0
[ 1947.039781]  handle_mm_fault+0x102/0x2c0
[ 1947.039788]  __do_page_fault+0x294/0x540
[ 1947.039799]  ? __audit_syscall_exit+0x2bf/0x3e0
[ 1947.039808]  do_page_fault+0x38/0x120
[ 1947.039818]  ? page_fault+0x8/0x30
[ 1947.039824]  page_fault+0x1e/0x30
[ 1947.039836] RIP: 0033:0x78f0e79ed3f4
[ 1947.039842] Code: Bad RIP value.
[ 1947.039854] RSP: 002b:00007ffdda11ae20 EFLAGS: 00010206
[ 1947.039862] RAX: 000078f0e7bee000 RBX: 000078f0e7bee000 RCX: 0000000000000000
[ 1947.039877] RDX: 000078f0e7bee000 RSI: 0000000000002000 RDI: 0000000000000000
[ 1947.039892] RBP: 000078f0e7c0a130 R08: 00000000ffffffff R09: 0000000000000000
[ 1947.039903] R10: 000064bacc4f6b09 R11: 0000000000000246 R12: 0000000000000000
[ 1947.039917] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000d
[ 1947.039932] Mem-Info:
[ 1947.039939] active_anon:2279014 inactive_anon:8253 isolated_anon:0
                active_file:92674 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8259 slab_unreclaimable:13178
                mapped:23020 shmem:8404 pagetables:6739 bounce:0
                free:40329 free_pcp:387 free_cma:0
[ 1947.039989] Node 0 active_anon:9116056kB inactive_anon:33012kB active_file:370696kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92080kB dirty:0kB writeback:0kB shmem:33616kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1947.040044] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1947.040087] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1947.040096] Node 0 DMA32 free:89352kB min:11368kB low:15416kB high:19464kB active_anon:3972428kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7724kB bounce:0kB free_pcp:408kB local_pcp:16kB free_cma:0kB
[ 1947.040136] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1947.040148] Node 0 Normal free:56060kB min:56168kB low:76180kB high:96192kB active_anon:5143812kB inactive_anon:33012kB active_file:370668kB inactive_file:0kB unevictable:47380kB writepending:0kB present:20400128kB managed:5899452kB mlocked:47380kB kernel_stack:4848kB pagetables:19232kB bounce:0kB free_pcp:1140kB local_pcp:692kB free_cma:0kB
[ 1947.040185] lowmem_reserve[]: 0 0 0 0 0
[ 1947.040192] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1947.040214] Node 0 DMA32: 1*4kB (U) 12*8kB (UM) 16*16kB (UM) 15*32kB (UM) 8*64kB (UM) 4*128kB (UM) 0*256kB 3*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89412kB
[ 1947.040238] Node 0 Normal: 309*4kB (E) 91*8kB (UME) 168*16kB (UE) 153*32kB (UME) 77*64kB (UE) 84*128kB (UME) 73*256kB (UME) 25*512kB (UM) 0*1024kB 0*2048kB 0*4096kB = 56716kB
[ 1947.040264] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1947.040276] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1947.040288] 101076 total pagecache pages
[ 1947.040293] 6143894 pages RAM
[ 1947.040298] 0 pages HighMem/MovableOnly
[ 1947.040304] 3647521 pages reserved
[ 1947.040309] 0 pages cma reserved
[ 1947.040315] 0 pages hwpoisoned
[ 1947.040320] Showing busy workqueues and worker pools:
[ 1947.040328] workqueue events: flags=0x0
[ 1947.040334]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=3/256
[ 1947.040345]     in-flight: 87:balloon_process
[ 1947.040355]     pending: balloon_process, vmstat_shepherd
[ 1947.040368] workqueue mm_percpu_wq: flags=0x8
[ 1947.040377]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 1947.040387]     pending: drain_local_pages_wq BAR(5096)
[ 1947.040404] pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=1s workers=2 idle: 88
[ 1947.040418] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1948.064883] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1948.064915] MemAlloc: kswapd0(108) flags=0xa20840 switches=252
[ 1948.064924] kswapd0         S    0   108      2 0x80000000
[ 1948.064945] Call Trace:
[ 1948.064954]  ? __schedule+0x3f3/0x8c0
[ 1948.064960]  schedule+0x36/0x80
[ 1948.064967]  kswapd+0x584/0x590
[ 1948.064974]  ? remove_wait_queue+0x70/0x70
[ 1948.064986]  kthread+0x105/0x140
[ 1948.064992]  ? balance_pgdat+0x3e0/0x3e0
[ 1948.064998]  ? kthread_stop+0x100/0x100
[ 1948.065046]  ret_from_fork+0x35/0x40
[ 1948.065063] MemAlloc: sh(5096) flags=0x404000 switches=2 seq=50 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=2791 uninterruptible
[ 1948.065084] sh              D    0  5096   5095 0x00000080
[ 1948.065091] Call Trace:
[ 1948.065097]  ? __schedule+0x3f3/0x8c0
[ 1948.065108]  ? vmpressure+0x2d/0x180
[ 1948.065115]  schedule+0x36/0x80
[ 1948.065125]  schedule_timeout+0x29b/0x4d0
[ 1948.065132]  wait_for_completion+0x121/0x190
[ 1948.065144]  ? wake_up_q+0x80/0x80
[ 1948.065171]  flush_work+0x18f/0x200
[ 1948.065200]  ? rcu_free_pwq+0x20/0x20
[ 1948.065207]  __alloc_pages_slowpath+0x766/0x1590
[ 1948.065220]  ? find_get_entry+0x1e/0x190
[ 1948.065227]  __alloc_pages_nodemask+0x302/0x3c0
[ 1948.065234]  alloc_pages_vma+0xac/0x4f0
[ 1948.065251]  do_anonymous_page+0x105/0x3f0
[ 1948.065258]  __handle_mm_fault+0xbc9/0xf10
[ 1948.065265]  ? do_mmap+0x463/0x5b0
[ 1948.065276]  handle_mm_fault+0x102/0x2c0
[ 1948.065285]  __do_page_fault+0x294/0x540
[ 1948.065296]  ? __audit_syscall_exit+0x2bf/0x3e0
[ 1948.065304]  do_page_fault+0x38/0x120
[ 1948.065310]  ? page_fault+0x8/0x30
[ 1948.065319]  page_fault+0x1e/0x30
[ 1948.065326] RIP: 0033:0x78f0e79ed3f4
[ 1948.065331] Code: Bad RIP value.
[ 1948.065344] RSP: 002b:00007ffdda11ae20 EFLAGS: 00010206
[ 1948.065353] RAX: 000078f0e7bee000 RBX: 000078f0e7bee000 RCX: 0000000000000000
[ 1948.065362] RDX: 000078f0e7bee000 RSI: 0000000000002000 RDI: 0000000000000000
[ 1948.065376] RBP: 000078f0e7c0a130 R08: 00000000ffffffff R09: 0000000000000000
[ 1948.065387] R10: 000064bacc4f6b09 R11: 0000000000000246 R12: 0000000000000000
[ 1948.065418] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000d
[ 1948.065442] Mem-Info:
[ 1948.065449] active_anon:2610367 inactive_anon:8253 isolated_anon:0
                active_file:92674 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8259 slab_unreclaimable:13178
                mapped:23020 shmem:8404 pagetables:7406 bounce:0
                free:295710 free_pcp:361 free_cma:0
[ 1948.065495] Node 0 active_anon:10441468kB inactive_anon:33012kB active_file:370696kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92080kB dirty:0kB writeback:0kB shmem:33616kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1948.065532] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1948.065569] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1948.065583] Node 0 DMA32 free:89352kB min:11368kB low:15416kB high:19464kB active_anon:3972428kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7724kB bounce:0kB free_pcp:408kB local_pcp:16kB free_cma:0kB
[ 1948.065621] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1948.065630] Node 0 Normal free:1077584kB min:56168kB low:76180kB high:96192kB active_anon:6469040kB inactive_anon:33012kB active_file:370668kB inactive_file:0kB unevictable:47380kB writepending:0kB present:20400128kB managed:8248508kB mlocked:47380kB kernel_stack:4848kB pagetables:21900kB bounce:0kB free_pcp:1036kB local_pcp:692kB free_cma:0kB
[ 1948.065670] lowmem_reserve[]: 0 0 0 0 0
[ 1948.065677] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1948.065702] Node 0 DMA32: 1*4kB (U) 12*8kB (UM) 16*16kB (UM) 15*32kB (UM) 8*64kB (UM) 4*128kB (UM) 0*256kB 3*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89412kB
[ 1948.065733] Node 0 Normal: 465*4kB (UME) 152*8kB (UME) 183*16kB (UME) 181*32kB (UME) 119*64kB (UME) 116*128kB (UME) 92*256kB (UME) 25*512kB (U) 6*1024kB (U) 9*2048kB (U) 240*4096kB (U) = 1078228kB
[ 1948.065805] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1948.065816] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1948.065832] 101076 total pagecache pages
[ 1948.065837] 6143894 pages RAM
[ 1948.065846] 0 pages HighMem/MovableOnly
[ 1948.065853] 3060257 pages reserved
[ 1948.065858] 0 pages cma reserved
[ 1948.065863] 0 pages hwpoisoned
[ 1948.065872] Showing busy workqueues and worker pools:
[ 1948.065880] workqueue events: flags=0x0
[ 1948.065890]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=3/256
[ 1948.065900]     in-flight: 87:balloon_process
[ 1948.065909]     pending: balloon_process, vmstat_shepherd
[ 1948.065936] workqueue mm_percpu_wq: flags=0x8
[ 1948.065944]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 1948.065953]     pending: drain_local_pages_wq BAR(5096)
[ 1948.065972] pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=2s workers=2 idle: 88
[ 1948.065988] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1949.088730] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1949.088748] MemAlloc: kswapd0(108) flags=0xa20840 switches=252
[ 1949.088757] kswapd0         S    0   108      2 0x80000000
[ 1949.088765] Call Trace:
[ 1949.088774]  ? __schedule+0x3f3/0x8c0
[ 1949.088780]  schedule+0x36/0x80
[ 1949.088787]  kswapd+0x584/0x590
[ 1949.088793]  ? remove_wait_queue+0x70/0x70
[ 1949.088802]  kthread+0x105/0x140
[ 1949.088811]  ? balance_pgdat+0x3e0/0x3e0
[ 1949.088816]  ? kthread_stop+0x100/0x100
[ 1949.088828]  ret_from_fork+0x35/0x40
[ 1949.088842] MemAlloc: sh(5096) flags=0x404000 switches=2 seq=50 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=3815 uninterruptible
[ 1949.088858] sh              D    0  5096   5095 0x00000080
[ 1949.088865] Call Trace:
[ 1949.088870]  ? __schedule+0x3f3/0x8c0
[ 1949.088876]  ? vmpressure+0x2d/0x180
[ 1949.088882]  schedule+0x36/0x80
[ 1949.088888]  schedule_timeout+0x29b/0x4d0
[ 1949.088898]  wait_for_completion+0x121/0x190
[ 1949.088906]  ? wake_up_q+0x80/0x80
[ 1949.088912]  flush_work+0x18f/0x200
[ 1949.088922]  ? rcu_free_pwq+0x20/0x20
[ 1949.088929]  __alloc_pages_slowpath+0x766/0x1590
[ 1949.088936]  ? find_get_entry+0x1e/0x190
[ 1949.088942]  __alloc_pages_nodemask+0x302/0x3c0
[ 1949.088954]  alloc_pages_vma+0xac/0x4f0
[ 1949.088961]  do_anonymous_page+0x105/0x3f0
[ 1949.088967]  __handle_mm_fault+0xbc9/0xf10
[ 1949.088973]  ? do_mmap+0x463/0x5b0
[ 1949.088980]  handle_mm_fault+0x102/0x2c0
[ 1949.088989]  __do_page_fault+0x294/0x540
[ 1949.088995]  ? __audit_syscall_exit+0x2bf/0x3e0
[ 1949.089044]  do_page_fault+0x38/0x120
[ 1949.089058]  ? page_fault+0x8/0x30
[ 1949.089066]  page_fault+0x1e/0x30
[ 1949.089079] RIP: 0033:0x78f0e79ed3f4
[ 1949.089086] Code: Bad RIP value.
[ 1949.089115] RSP: 002b:00007ffdda11ae20 EFLAGS: 00010206
[ 1949.089125] RAX: 000078f0e7bee000 RBX: 000078f0e7bee000 RCX: 0000000000000000
[ 1949.089143] RDX: 000078f0e7bee000 RSI: 0000000000002000 RDI: 0000000000000000
[ 1949.089161] RBP: 000078f0e7c0a130 R08: 00000000ffffffff R09: 0000000000000000
[ 1949.089175] R10: 000064bacc4f6b09 R11: 0000000000000246 R12: 0000000000000000
[ 1949.089192] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000d
[ 1949.089211] Mem-Info:
[ 1949.089219] active_anon:2610367 inactive_anon:8253 isolated_anon:0
                active_file:92674 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8259 slab_unreclaimable:13178
                mapped:23020 shmem:8404 pagetables:7406 bounce:0
                free:935701 free_pcp:366 free_cma:0
[ 1949.089284] Node 0 active_anon:10441468kB inactive_anon:33012kB active_file:370696kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92080kB dirty:0kB writeback:0kB shmem:33616kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1949.089335] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1949.089384] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1949.089395] Node 0 DMA32 free:89352kB min:11368kB low:15416kB high:19464kB active_anon:3972428kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7724kB bounce:0kB free_pcp:408kB local_pcp:16kB free_cma:0kB
[ 1949.089444] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1949.089455] Node 0 Normal free:3637548kB min:56168kB low:76180kB high:96192kB active_anon:6469040kB inactive_anon:33012kB active_file:370668kB inactive_file:0kB unevictable:47380kB writepending:0kB present:20400128kB managed:10808508kB mlocked:47380kB kernel_stack:4848kB pagetables:21900kB bounce:0kB free_pcp:1056kB local_pcp:712kB free_cma:0kB
[ 1949.089550] lowmem_reserve[]: 0 0 0 0 0
[ 1949.089567] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1949.089598] Node 0 DMA32: 1*4kB (U) 12*8kB (UM) 16*16kB (UM) 15*32kB (UM) 8*64kB (UM) 4*128kB (UM) 0*256kB 3*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89412kB
[ 1949.089651] Node 0 Normal: 508*4kB (UME) 162*8kB (UME) 198*16kB (UME) 195*32kB (UME) 130*64kB (UME) 125*128kB (UME) 103*256kB (UME) 34*512kB (U) 18*1024kB (U) 20*2048kB (U) 854*4096kB (U) = 3638208kB
[ 1949.089692] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1949.089715] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1949.089729] 101076 total pagecache pages
[ 1949.089741] 6143894 pages RAM
[ 1949.089748] 0 pages HighMem/MovableOnly
[ 1949.089754] 2420257 pages reserved
[ 1949.089766] 0 pages cma reserved
[ 1949.089772] 0 pages hwpoisoned
[ 1949.089779] Showing busy workqueues and worker pools:
[ 1949.089789] workqueue events: flags=0x0
[ 1949.089797]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=3/256
[ 1949.089810]     in-flight: 87:balloon_process
[ 1949.089822]     pending: balloon_process, vmstat_shepherd
[ 1949.089835] workqueue events_unbound: flags=0x2
[ 1949.089844]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[ 1949.089856]     pending: flush_to_ldisc BAR(969)
[ 1949.089870] workqueue mm_percpu_wq: flags=0x8
[ 1949.089880]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 1949.089892]     pending: drain_local_pages_wq BAR(5096)
[ 1949.089916] pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=3s workers=2 idle: 88
[ 1949.089934] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1950.112660] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1950.112678] MemAlloc: kswapd0(108) flags=0xa20840 switches=252
[ 1950.112688] kswapd0         S    0   108      2 0x80000000
[ 1950.112696] Call Trace:
[ 1950.112705]  ? __schedule+0x3f3/0x8c0
[ 1950.112712]  schedule+0x36/0x80
[ 1950.112719]  kswapd+0x584/0x590
[ 1950.112725]  ? remove_wait_queue+0x70/0x70
[ 1950.112731]  kthread+0x105/0x140
[ 1950.112738]  ? balance_pgdat+0x3e0/0x3e0
[ 1950.112750]  ? kthread_stop+0x100/0x100
[ 1950.112756]  ret_from_fork+0x35/0x40
[ 1950.112769] MemAlloc: sh(5096) flags=0x404000 switches=2 seq=50 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=4839 uninterruptible
[ 1950.112794] sh              D    0  5096   5095 0x00000080
[ 1950.112806] Call Trace:
[ 1950.112815]  ? __schedule+0x3f3/0x8c0
[ 1950.112822]  ? vmpressure+0x2d/0x180
[ 1950.112832]  schedule+0x36/0x80
[ 1950.112844]  schedule_timeout+0x29b/0x4d0
[ 1950.112854]  wait_for_completion+0x121/0x190
[ 1950.112865]  ? wake_up_q+0x80/0x80
[ 1950.112872]  flush_work+0x18f/0x200
[ 1950.112882]  ? rcu_free_pwq+0x20/0x20
[ 1950.112892]  __alloc_pages_slowpath+0x766/0x1590
[ 1950.112904]  ? find_get_entry+0x1e/0x190
[ 1950.112918]  __alloc_pages_nodemask+0x302/0x3c0
[ 1950.112968]  alloc_pages_vma+0xac/0x4f0
[ 1950.112993]  do_anonymous_page+0x105/0x3f0
[ 1950.113034]  __handle_mm_fault+0xbc9/0xf10
[ 1950.113060]  ? do_mmap+0x463/0x5b0
[ 1950.113078]  handle_mm_fault+0x102/0x2c0
[ 1950.113090]  __do_page_fault+0x294/0x540
[ 1950.113101]  ? __audit_syscall_exit+0x2bf/0x3e0
[ 1950.113113]  do_page_fault+0x38/0x120
[ 1950.113136]  ? page_fault+0x8/0x30
[ 1950.113142]  page_fault+0x1e/0x30
[ 1950.113154] RIP: 0033:0x78f0e79ed3f4
[ 1950.113163] Code: Bad RIP value.
[ 1950.113175] RSP: 002b:00007ffdda11ae20 EFLAGS: 00010206
[ 1950.113186] RAX: 000078f0e7bee000 RBX: 000078f0e7bee000 RCX: 0000000000000000
[ 1950.113200] RDX: 000078f0e7bee000 RSI: 0000000000002000 RDI: 0000000000000000
[ 1950.113214] RBP: 000078f0e7c0a130 R08: 00000000ffffffff R09: 0000000000000000
[ 1950.113228] R10: 000064bacc4f6b09 R11: 0000000000000246 R12: 0000000000000000
[ 1950.113242] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000d
[ 1950.113255] Mem-Info:
[ 1950.113277] active_anon:2610367 inactive_anon:8253 isolated_anon:0
                active_file:92674 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8259 slab_unreclaimable:13178
                mapped:23020 shmem:8404 pagetables:7406 bounce:0
                free:1575200 free_pcp:355 free_cma:0
[ 1950.113320] Node 0 active_anon:10441468kB inactive_anon:33012kB active_file:370696kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92080kB dirty:0kB writeback:0kB shmem:33616kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1950.113354] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1950.113386] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1950.113394] Node 0 DMA32 free:89352kB min:11368kB low:15416kB high:19464kB active_anon:3972428kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7724kB bounce:0kB free_pcp:408kB local_pcp:16kB free_cma:0kB
[ 1950.113442] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1950.113450] Node 0 Normal free:6195544kB min:56168kB low:76180kB high:96192kB active_anon:6469040kB inactive_anon:33012kB active_file:370668kB inactive_file:0kB unevictable:47380kB writepending:0kB present:20400128kB managed:13366460kB mlocked:47380kB kernel_stack:4848kB pagetables:21900kB bounce:0kB free_pcp:1012kB local_pcp:668kB free_cma:0kB
[ 1950.113486] lowmem_reserve[]: 0 0 0 0 0
[ 1950.113492] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1950.313606] Node 0 DMA32: 1*4kB (U) 12*8kB (UM) 16*16kB (UM) 15*32kB (UM) 8*64kB (UM) 4*128kB (UM) 0*256kB 3*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89412kB
[ 1950.313641] Node 0 Normal: 513*4kB (UME) 165*8kB (UME) 200*16kB (UME) 200*32kB (UME) 135*64kB (UME) 129*128kB (UME) 107*256kB (UME) 40*512kB (U) 19*1024kB (U) 24*2048kB (U) 1475*4096kB (U) = 6196204kB
[ 1950.313681] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1950.313697] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1950.313712] 101076 total pagecache pages
[ 1950.313719] 6143894 pages RAM
[ 1950.313729] 0 pages HighMem/MovableOnly
[ 1950.313735] 1780769 pages reserved
[ 1950.313743] 0 pages cma reserved
[ 1950.313752] 0 pages hwpoisoned
[ 1950.313762] Showing busy workqueues and worker pools:
[ 1950.313778] workqueue events: flags=0x0
[ 1950.313798]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=3/256
[ 1950.313811]     in-flight: 87:balloon_process
[ 1950.313826]     pending: balloon_process, vmstat_shepherd
[ 1950.313846] workqueue events_unbound: flags=0x2
[ 1950.313857]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[ 1950.313868]     pending: flush_to_ldisc
[ 1950.313884] workqueue mm_percpu_wq: flags=0x8
[ 1950.313897]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 1950.313907]     pending: drain_local_pages_wq BAR(5096)
[ 1950.313940] pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=5s workers=2 idle: 88
[ 1950.313960] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1951.319161] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2
[ 1951.319179] MemAlloc: kswapd0(108) flags=0xa20840 switches=252
[ 1951.319188] kswapd0         S    0   108      2 0x80000000
[ 1951.319196] Call Trace:
[ 1951.319205]  ? __schedule+0x3f3/0x8c0
[ 1951.319211]  schedule+0x36/0x80
[ 1951.319218]  kswapd+0x584/0x590
[ 1951.319224]  ? remove_wait_queue+0x70/0x70
[ 1951.319237]  kthread+0x105/0x140
[ 1951.319243]  ? balance_pgdat+0x3e0/0x3e0
[ 1951.319249]  ? kthread_stop+0x100/0x100
[ 1951.319258]  ret_from_fork+0x35/0x40
[ 1951.319271] MemAlloc: sh(5096) flags=0x404000 switches=2 seq=50 gfp=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO) order=0 delay=6046 uninterruptible
[ 1951.319292] sh              D    0  5096   5095 0x00000080
[ 1951.319299] Call Trace:
[ 1951.319306]  ? __schedule+0x3f3/0x8c0
[ 1951.319314]  ? vmpressure+0x2d/0x180
[ 1951.319320]  schedule+0x36/0x80
[ 1951.319330]  schedule_timeout+0x29b/0x4d0
[ 1951.319336]  wait_for_completion+0x121/0x190
[ 1951.319344]  ? wake_up_q+0x80/0x80
[ 1951.319354]  flush_work+0x18f/0x200
[ 1951.319361]  ? rcu_free_pwq+0x20/0x20
[ 1951.319367]  __alloc_pages_slowpath+0x766/0x1590
[ 1951.319375]  ? find_get_entry+0x1e/0x190
[ 1951.319381]  __alloc_pages_nodemask+0x302/0x3c0
[ 1951.319392]  alloc_pages_vma+0xac/0x4f0
[ 1951.319399]  do_anonymous_page+0x105/0x3f0
[ 1951.319406]  __handle_mm_fault+0xbc9/0xf10
[ 1951.319416]  ? do_mmap+0x463/0x5b0
[ 1951.319422]  handle_mm_fault+0x102/0x2c0
[ 1951.319428]  __do_page_fault+0x294/0x540
[ 1951.319438]  ? __audit_syscall_exit+0x2bf/0x3e0
[ 1951.319446]  do_page_fault+0x38/0x120
[ 1951.319452]  ? page_fault+0x8/0x30
[ 1951.319458]  page_fault+0x1e/0x30
[ 1951.319465] RIP: 0033:0x78f0e79ed3f4
[ 1951.319474] Code: Bad RIP value.
[ 1951.319482] RSP: 002b:00007ffdda11ae20 EFLAGS: 00010206
[ 1951.319490] RAX: 000078f0e7bee000 RBX: 000078f0e7bee000 RCX: 0000000000000000
[ 1951.319500] RDX: 000078f0e7bee000 RSI: 0000000000002000 RDI: 0000000000000000
[ 1951.319513] RBP: 000078f0e7c0a130 R08: 00000000ffffffff R09: 0000000000000000
[ 1951.319523] R10: 000064bacc4f6b09 R11: 0000000000000246 R12: 0000000000000000
[ 1951.319533] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000d
[ 1951.319547] Mem-Info:
[ 1951.319554] active_anon:2610367 inactive_anon:8253 isolated_anon:0
                active_file:92674 inactive_file:0 isolated_file:0
                unevictable:11845 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8259 slab_unreclaimable:13178
                mapped:23020 shmem:8404 pagetables:7406 bounce:0
                free:2205487 free_pcp:366 free_cma:0
[ 1951.319599] Node 0 active_anon:10441468kB inactive_anon:33012kB active_file:370696kB inactive_file:0kB unevictable:47380kB isolated(anon):0kB isolated(file):0kB mapped:92080kB dirty:0kB writeback:0kB shmem:33616kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 1951.319636] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 1951.319671] lowmem_reserve[]: 0 3956 23499 23499 23499
[ 1951.319683] Node 0 DMA32 free:89352kB min:11368kB low:15416kB high:19464kB active_anon:3972428kB inactive_anon:0kB active_file:28kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7724kB bounce:0kB free_pcp:408kB local_pcp:16kB free_cma:0kB
[ 1951.319721] lowmem_reserve[]: 0 0 19543 19543 19543
[ 1951.319732] Node 0 Normal free:8716692kB min:56168kB low:76180kB high:96192kB active_anon:6469040kB inactive_anon:33012kB active_file:370668kB inactive_file:0kB unevictable:47380kB writepending:0kB present:20400128kB managed:15887548kB mlocked:47380kB kernel_stack:4848kB pagetables:21900kB bounce:0kB free_pcp:1056kB local_pcp:712kB free_cma:0kB
[ 1951.319773] lowmem_reserve[]: 0 0 0 0 0
[ 1951.319779] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[ 1951.319804] Node 0 DMA32: 1*4kB (U) 12*8kB (UM) 16*16kB (UM) 15*32kB (UM) 8*64kB (UM) 4*128kB (UM) 0*256kB 3*512kB (U) 2*1024kB (UM) 1*2048kB (M) 20*4096kB (M) = 89412kB
[ 1951.319831] Node 0 Normal: 514*4kB (UME) 165*8kB (UME) 203*16kB (UME) 199*32kB (UME) 136*64kB (UME) 130*128kB (UME) 108*256kB (UME) 39*512kB (U) 23*1024kB (U) 25*2048kB (U) 2089*4096kB (U) = 8717248kB
[ 1951.319867] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1951.319879] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1951.319894] 101076 total pagecache pages
[ 1951.319899] 6143894 pages RAM
[ 1951.319904] 0 pages HighMem/MovableOnly
[ 1951.319913] 1150497 pages reserved
[ 1951.319919] 0 pages cma reserved
[ 1951.319924] 0 pages hwpoisoned
[ 1951.319929] Showing busy workqueues and worker pools:
[ 1951.319941] workqueue events: flags=0x0
[ 1951.319948]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=3/256
[ 1951.319958]     in-flight: 87:balloon_process
[ 1951.319967]     pending: balloon_process, vmstat_shepherd
[ 1951.319998] workqueue mm_percpu_wq: flags=0x8
[ 1951.320023]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 1951.320033]     pending: drain_local_pages_wq BAR(5096)
[ 1951.320059] pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=6s workers=2 idle: 88
[ 1951.320078] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=2


Worth mentioning: due to how qubes (AppVMs) work in Qubes OS R4.0, the reported MemTotal grows as more memory is requested — it sits at MemTotal: 5899624 kB when idle, but climbs to its maximum (24 GiB) while stress runs. The stalls can also happen while MemTotal is still growing: if MemFree drops low enough relative to the current MemTotal before the balloon expands, the same stall/freeze occurs, and the system unfreezes temporarily once MemTotal increases (though this doesn't always happen, because MemTotal can grow before MemFree reaches that critical low value). The sustained freeze, however, only sets in once MemTotal is already at its maximum (24 GiB) and MemFree is somewhere below roughly 60 MB.
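The freeze condition described above can be checked from /proc/meminfo. A minimal sketch (run against a hardcoded sample here so it's reproducible; the ~60 MB threshold is the value I observed, not a kernel constant):

```shell
# Sample /proc/meminfo fields at the moment of a freeze (hypothetical values);
# on a live system you would read /proc/meminfo instead.
meminfo='MemTotal:       24000000 kB
MemFree:           58000 kB'
printf '%s\n' "$meminfo" | awk '
  /^MemTotal:/ {total=$2}
  /^MemFree:/  {free=$2}
  # 61440 kB = 60 MiB, the observed danger zone once MemTotal is maxed out
  END { if (free < 61440) print "danger: MemFree " free " kB below ~60 MB with MemTotal " total " kB" }'
```

On a live qube you'd run the awk part against /proc/meminfo in a watch loop to catch the window before the thrashing starts.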

@constantoverride

Owner Author

commented Aug 31, 2018

Here's an idea of which files get accessed during the ~3-second freeze that happens when I run the following (after a cold start of the Fedora 28 AppVM, in Qubes OS R4.0), even though the OOM-killer never triggers to kill it:

$ sudo sysctl -w vm.block_dump=1 && time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?; sudo sysctl -w vm.block_dump=0
vm.block_dump = 1
stress: info: [4529] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: info: [4529] successful run completed in 10s

real	0m10.290s
user	0m13.739s
sys	0m6.638s
0
vm.block_dump = 0
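For reference, the allocation size in the command above is derived from MemAvailable plus a 4000 kB overshoot, which is what forces reclaim without quite tripping the OOM-killer. A sketch of just that sizing step, on a hardcoded sample line (a live run would read /proc/meminfo):

```shell
# Hypothetical MemAvailable line; the real command reads /proc/meminfo.
sample='MemAvailable:    1234567 kB'
# Overshoot MemAvailable by 4000 kB, same as the awk in the stress invocation.
bytes_k=$(printf '%s\n' "$sample" | awk '/MemAvailable/{printf "%d", $2 + 4000}')
echo "${bytes_k}k"   # value passed to stress --vm-bytes, e.g. 1238567k
```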
dmesg
[  668.976049] audit: type=1104 audit(1535719448.629:133): pid=4525 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[  670.052737] sh(4553): READ block 7353800 on xvda3 (256 sectors)
[  670.054659] sh(4553): READ block 4471744 on xvda3 (56 sectors)
[  670.055494] sh(4553): READ block 4478856 on xvda3 (72 sectors)
[  670.055959] sh(4553): READ block 4479176 on xvda3 (8 sectors)
[  670.056245] sh(4553): READ block 6647224 on xvda3 (224 sectors)
[  671.775093] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  671.775119] MemAlloc: kswapd0(108) flags=0xa20840 switches=16
[  671.775134] kswapd0         S    0   108      2 0x80000000
[  671.775149] Call Trace:
[  671.775161]  ? __schedule+0x3f3/0x8c0
[  671.775170]  schedule+0x36/0x80
[  671.775185]  kswapd+0x584/0x590
[  671.775197]  ? remove_wait_queue+0x70/0x70
[  671.775211]  kthread+0x105/0x140
[  671.775224]  ? balance_pgdat+0x3e0/0x3e0
[  671.775234]  ? kthread_stop+0x100/0x100
[  671.775245]  ret_from_fork+0x35/0x40
[  671.775266] MemAlloc: sh(4553) flags=0x404000 switches=8 seq=115 gfp=0x6200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=1718 uninterruptible
[  671.775290] sh              D    0  4553   4552 0x00000080
[  671.775304] Call Trace:
[  671.775311]  ? __schedule+0x3f3/0x8c0
[  671.775320]  ? __switch_to_asm+0x40/0x70
[  671.775330]  ? __switch_to_asm+0x34/0x70
[  671.775339]  schedule+0x36/0x80
[  671.775349]  schedule_timeout+0x29b/0x4d0
[  671.775358]  ? __switch_to+0xb2/0x4d0
[  671.775367]  ? __switch_to_asm+0x40/0x70
[  671.775376]  ? finish_task_switch+0x75/0x2a0
[  671.775388]  wait_for_completion+0x121/0x190
[  671.775400]  ? wake_up_q+0x80/0x80
[  671.775409]  flush_work+0x18f/0x200
[  671.775420]  ? rcu_free_pwq+0x20/0x20
[  671.775433]  __alloc_pages_slowpath+0x766/0x1590
[  671.775444]  ? filemap_fault+0x23c/0xa20
[  671.775454]  __alloc_pages_nodemask+0x302/0x3c0
[  671.775466]  alloc_pages_vma+0xac/0x4f0
[  671.775477]  __handle_mm_fault+0x71d/0xf10
[  671.775489]  ? do_mmap+0x463/0x5b0
[  671.775499]  handle_mm_fault+0x102/0x2c0
[  671.775511]  __do_page_fault+0x294/0x540
[  671.775523]  ? __audit_syscall_exit+0x2bf/0x3e0
[  671.775537]  do_page_fault+0x38/0x120
[  671.775548]  ? page_fault+0x8/0x30
[  671.775558]  page_fault+0x1e/0x30
[  671.775568] RIP: 0033:0x727e0af72f7f
[  671.775578] Code: Bad RIP value.
[  671.775589] RSP: 002b:00007fff560ef8c8 EFLAGS: 00010206
[  671.775601] RAX: 0000727e0ab1e860 RBX: 0000727e0b1609c0 RCX: 0000727e0ab1e8a0
[  671.775618] RDX: 00000000000007a0 RSI: 0000000000000000 RDI: 0000727e0ab1e860
[  671.775638] RBP: 00007fff560efbf0 R08: 0000727e0ab22ae0 R09: 00000000001b5000
[  671.775655] R10: 0000727e0ab1f000 R11: 0000000000000206 R12: 00007fff560ef900
[  671.775671] R13: 00007fff560efcd8 R14: 000000000000ca03 R15: 0000000000000002
[  671.775690] Mem-Info:
[  671.775701] active_anon:2683145 inactive_anon:8764 isolated_anon:0
                active_file:96964 inactive_file:27 isolated_file:0
                unevictable:6301 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8373 slab_unreclaimable:13123
                mapped:22339 shmem:8901 pagetables:7509 bounce:0
                free:40612 free_pcp:1191 free_cma:0
[  671.775771] Node 0 active_anon:10733164kB inactive_anon:35056kB active_file:387856kB inactive_file:108kB unevictable:25204kB isolated(anon):0kB isolated(file):0kB mapped:89356kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  671.775824] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  671.775877] lowmem_reserve[]: 0 3956 23499 23499 23499
[  671.775892] Node 0 DMA32 free:89156kB min:11368kB low:15416kB high:19464kB active_anon:3971524kB inactive_anon:0kB active_file:112kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7968kB bounce:0kB free_pcp:1180kB local_pcp:0kB free_cma:0kB
[  671.975869] lowmem_reserve[]: 0 0 19543 19543 19543
[  671.975888] Node 0 Normal free:56100kB min:56168kB low:76180kB high:96192kB active_anon:7216280kB inactive_anon:35056kB active_file:387452kB inactive_file:268kB unevictable:25204kB writepending:0kB present:20400128kB managed:7975024kB mlocked:25204kB kernel_stack:4912kB pagetables:23428kB bounce:0kB free_pcp:3740kB local_pcp:0kB free_cma:0kB
[  671.975970] lowmem_reserve[]: 0 0 0 0 0
[  671.975992] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  671.976045] Node 0 DMA32: 1*4kB (U) 1*8kB (U) 5*16kB (U) 13*32kB (UM) 5*64kB (U) 1*128kB (U) 1*256kB (M) 2*512kB (UM) 1*1024kB (M) 2*2048kB (UM) 20*4096kB (M) = 89276kB
[  671.976112] Node 0 Normal: 522*4kB (UE) 188*8kB (UME) 154*16kB (UME) 214*32kB (UE) 132*64kB (UE) 107*128kB (UE) 77*256kB (UME) 3*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 56296kB
[  671.976180] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  671.976201] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  671.976229] 105921 total pagecache pages
[  671.976255] 6143894 pages RAM
[  671.976262] 0 pages HighMem/MovableOnly
[  671.976269] 3128628 pages reserved
[  671.976277] 0 pages cma reserved
[  671.976285] 0 pages hwpoisoned
[  671.976290] Showing busy workqueues and worker pools:
[  671.976312] workqueue events: flags=0x0
[  671.976318]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  671.976345]     in-flight: 230:balloon_process
[  671.976360]     pending: balloon_process
[  671.976388] workqueue events_power_efficient: flags=0x80
[  671.976395]   pwq 12: cpus=6 node=0 flags=0x0 nice=0 active=1/256
[  671.976419]     pending: gc_worker [nf_conntrack]
[  671.976434] workqueue mm_percpu_wq: flags=0x8
[  671.976445]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  671.976457]     pending: drain_local_pages_wq BAR(4553), vmstat_update
[  671.976495] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=1s workers=2 idle: 61
[  671.976514] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  672.006935] gnome-terminal-(974): READ block 13465112 on xvda3 (240 sectors)
[  672.008370] gnome-terminal-(974): READ block 13464752 on xvda3 (8 sectors)
[  672.009236] gnome-terminal-(974): READ block 13464736 on xvda3 (8 sectors)
[  672.010070] gnome-terminal-(974): READ block 13464744 on xvda3 (8 sectors)
[  672.010918] gnome-terminal-(974): READ block 13464728 on xvda3 (8 sectors)
[  672.991164] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  672.991211] MemAlloc: kswapd0(108) flags=0xa20840 switches=19
[  672.991221] kswapd0         S    0   108      2 0x80000000
[  672.991230] Call Trace:
[  672.991241]  ? __schedule+0x3f3/0x8c0
[  672.991248]  schedule+0x36/0x80
[  672.991257]  kswapd+0x584/0x590
[  672.991265]  ? remove_wait_queue+0x70/0x70
[  672.991272]  kthread+0x105/0x140
[  672.991280]  ? balance_pgdat+0x3e0/0x3e0
[  672.991287]  ? kthread_stop+0x100/0x100
[  672.991293]  ret_from_fork+0x35/0x40
[  672.991308] MemAlloc: sh(4553) flags=0x404000 switches=8 seq=115 gfp=0x6200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=2934 uninterruptible
[  672.991324] sh              D    0  4553   4552 0x00000080
[  672.991332] Call Trace:
[  672.991337]  ? __schedule+0x3f3/0x8c0
[  672.991343]  ? __switch_to_asm+0x40/0x70
[  672.991348]  ? __switch_to_asm+0x34/0x70
[  672.991355]  schedule+0x36/0x80
[  672.991362]  schedule_timeout+0x29b/0x4d0
[  672.991370]  ? __switch_to+0xb2/0x4d0
[  672.991378]  ? __switch_to_asm+0x40/0x70
[  672.991386]  ? finish_task_switch+0x75/0x2a0
[  672.991394]  wait_for_completion+0x121/0x190
[  672.991405]  ? wake_up_q+0x80/0x80
[  672.991414]  flush_work+0x18f/0x200
[  672.991422]  ? rcu_free_pwq+0x20/0x20
[  672.991430]  __alloc_pages_slowpath+0x766/0x1590
[  672.991440]  ? filemap_fault+0x23c/0xa20
[  672.991449]  __alloc_pages_nodemask+0x302/0x3c0
[  672.991458]  alloc_pages_vma+0xac/0x4f0
[  672.991467]  __handle_mm_fault+0x71d/0xf10
[  672.991475]  ? do_mmap+0x463/0x5b0
[  672.991482]  handle_mm_fault+0x102/0x2c0
[  672.991490]  __do_page_fault+0x294/0x540
[  672.991501]  ? __audit_syscall_exit+0x2bf/0x3e0
[  672.991514]  do_page_fault+0x38/0x120
[  672.991522]  ? page_fault+0x8/0x30
[  672.991528]  page_fault+0x1e/0x30
[  672.991537] RIP: 0033:0x727e0af72f7f
[  672.991544] Code: Bad RIP value.
[  672.991555] RSP: 002b:00007fff560ef8c8 EFLAGS: 00010206
[  672.991564] RAX: 0000727e0ab1e860 RBX: 0000727e0b1609c0 RCX: 0000727e0ab1e8a0
[  672.991580] RDX: 00000000000007a0 RSI: 0000000000000000 RDI: 0000727e0ab1e860
[  672.991592] RBP: 00007fff560efbf0 R08: 0000727e0ab22ae0 R09: 00000000001b5000
[  672.991605] R10: 0000727e0ab1f000 R11: 0000000000000206 R12: 00007fff560ef900
[  672.991617] R13: 00007fff560efcd8 R14: 000000000000ca03 R15: 0000000000000002
[  672.991630] Mem-Info:
[  672.991645] active_anon:2995275 inactive_anon:8764 isolated_anon:0
                active_file:97014 inactive_file:0 isolated_file:0
                unevictable:6301 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8298 slab_unreclaimable:13127
                mapped:22368 shmem:8901 pagetables:8196 bounce:0
                free:458172 free_pcp:1149 free_cma:0
[  672.991695] Node 0 active_anon:11981100kB inactive_anon:35056kB active_file:388056kB inactive_file:0kB unevictable:25204kB isolated(anon):0kB isolated(file):0kB mapped:89472kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  672.991736] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  672.991773] lowmem_reserve[]: 0 3956 23499 23499 23499
[  672.991783] Node 0 DMA32 free:89276kB min:11368kB low:15416kB high:19464kB active_anon:3971524kB inactive_anon:0kB active_file:112kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7968kB bounce:0kB free_pcp:1180kB local_pcp:0kB free_cma:0kB
[  672.991824] lowmem_reserve[]: 0 0 19543 19543 19543
[  672.991833] Node 0 Normal free:1727508kB min:56168kB low:76180kB high:96192kB active_anon:8009576kB inactive_anon:35056kB active_file:387944kB inactive_file:0kB unevictable:25204kB writepending:0kB present:20400128kB managed:10440816kB mlocked:25204kB kernel_stack:4912kB pagetables:24816kB bounce:0kB free_pcp:3416kB local_pcp:0kB free_cma:0kB
[  672.991874] lowmem_reserve[]: 0 0 0 0 0
[  672.991882] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  672.991907] Node 0 DMA32: 1*4kB (U) 1*8kB (U) 5*16kB (U) 13*32kB (UM) 5*64kB (U) 1*128kB (U) 1*256kB (M) 2*512kB (UM) 1*1024kB (M) 2*2048kB (UM) 20*4096kB (M) = 89276kB
[  672.991933] Node 0 Normal: 625*4kB (UME) 250*8kB (UE) 181*16kB (UME) 245*32kB (UME) 151*64kB (UME) 137*128kB (UME) 99*256kB (UME) 22*512kB (U) 20*1024kB (U) 17*2048kB (U) 389*4096kB (U) = 1727684kB
[  672.991964] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  672.991978] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  672.991991] 105926 total pagecache pages
[  672.991997] 6143894 pages RAM
[  672.992019] 0 pages HighMem/MovableOnly
[  672.992025] 2512180 pages reserved
[  672.992032] 0 pages cma reserved
[  672.992039] 0 pages hwpoisoned
[  672.992046] Showing busy workqueues and worker pools:
[  672.992054] workqueue events: flags=0x0
[  672.992060]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  672.992072]     in-flight: 230:balloon_process
[  672.992085]     pending: balloon_process
[  672.992104] workqueue events_unbound: flags=0x2
[  672.992112]   pwq 24: cpus=0-11 flags=0x4 nice=0 active=1/512
[  672.992124]     pending: flush_to_ldisc
[  672.992135] workqueue mm_percpu_wq: flags=0x8
[  672.992143]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  672.992154]     pending: drain_local_pages_wq BAR(4553), vmstat_update
[  672.992181] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=2s workers=2 idle: 61
[  673.192148] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  674.207120] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  674.207149] MemAlloc: kswapd0(108) flags=0xa20840 switches=19
[  674.207170] kswapd0         S    0   108      2 0x80000000
[  674.207185] Call Trace:
[  674.207198]  ? __schedule+0x3f3/0x8c0
[  674.207211]  schedule+0x36/0x80
[  674.207223]  kswapd+0x584/0x590
[  674.207233]  ? remove_wait_queue+0x70/0x70
[  674.207242]  kthread+0x105/0x140
[  674.207252]  ? balance_pgdat+0x3e0/0x3e0
[  674.207260]  ? kthread_stop+0x100/0x100
[  674.207271]  ret_from_fork+0x35/0x40
[  674.207296] MemAlloc: sh(4553) flags=0x404000 switches=8 seq=115 gfp=0x6200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=4150 uninterruptible
[  674.207319] sh              D    0  4553   4552 0x00000080
[  674.207329] Call Trace:
[  674.207338]  ? __schedule+0x3f3/0x8c0
[  674.207346]  ? __switch_to_asm+0x40/0x70
[  674.207356]  ? __switch_to_asm+0x34/0x70
[  674.207365]  schedule+0x36/0x80
[  674.207374]  schedule_timeout+0x29b/0x4d0
[  674.207384]  ? __switch_to+0xb2/0x4d0
[  674.207403]  ? __switch_to_asm+0x40/0x70
[  674.207413]  ? finish_task_switch+0x75/0x2a0
[  674.207425]  wait_for_completion+0x121/0x190
[  674.207438]  ? wake_up_q+0x80/0x80
[  674.207447]  flush_work+0x18f/0x200
[  674.207456]  ? rcu_free_pwq+0x20/0x20
[  674.207466]  __alloc_pages_slowpath+0x766/0x1590
[  674.207479]  ? filemap_fault+0x23c/0xa20
[  674.207490]  __alloc_pages_nodemask+0x302/0x3c0
[  674.207504]  alloc_pages_vma+0xac/0x4f0
[  674.207514]  __handle_mm_fault+0x71d/0xf10
[  674.207525]  ? do_mmap+0x463/0x5b0
[  674.207534]  handle_mm_fault+0x102/0x2c0
[  674.207544]  __do_page_fault+0x294/0x540
[  674.207554]  ? __audit_syscall_exit+0x2bf/0x3e0
[  674.207568]  do_page_fault+0x38/0x120
[  674.207577]  ? page_fault+0x8/0x30
[  674.207585]  page_fault+0x1e/0x30
[  674.207595] RIP: 0033:0x727e0af72f7f
[  674.207605] Code: Bad RIP value.
[  674.207624] RSP: 002b:00007fff560ef8c8 EFLAGS: 00010206
[  674.207640] RAX: 0000727e0ab1e860 RBX: 0000727e0b1609c0 RCX: 0000727e0ab1e8a0
[  674.207655] RDX: 00000000000007a0 RSI: 0000000000000000 RDI: 0000727e0ab1e860
[  674.207670] RBP: 00007fff560efbf0 R08: 0000727e0ab22ae0 R09: 00000000001b5000
[  674.207684] R10: 0000727e0ab1f000 R11: 0000000000000206 R12: 00007fff560ef900
[  674.207697] R13: 00007fff560efcd8 R14: 000000000000ca03 R15: 0000000000000002
[  674.207712] Mem-Info:
[  674.207721] active_anon:2995292 inactive_anon:8764 isolated_anon:0
                active_file:97027 inactive_file:0 isolated_file:0
                unevictable:6301 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8298 slab_unreclaimable:13127
                mapped:22368 shmem:8901 pagetables:8196 bounce:0
                free:1153936 free_pcp:1175 free_cma:0
[  674.207784] Node 0 active_anon:11981168kB inactive_anon:35056kB active_file:388108kB inactive_file:0kB unevictable:25204kB isolated(anon):0kB isolated(file):0kB mapped:89472kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  674.207837] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  674.207894] lowmem_reserve[]: 0 3956 23499 23499 23499
[  674.207907] Node 0 DMA32 free:89276kB min:11368kB low:15416kB high:19464kB active_anon:3971524kB inactive_anon:0kB active_file:112kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7968kB bounce:0kB free_pcp:1180kB local_pcp:0kB free_cma:0kB
[  674.207962] lowmem_reserve[]: 0 0 19543 19543 19543
[  674.207974] Node 0 Normal free:4510564kB min:56168kB low:76180kB high:96192kB active_anon:8009644kB inactive_anon:35056kB active_file:387996kB inactive_file:0kB unevictable:25204kB writepending:0kB present:20400128kB managed:13224048kB mlocked:25204kB kernel_stack:4912kB pagetables:24816kB bounce:0kB free_pcp:3520kB local_pcp:0kB free_cma:0kB
[  674.208049] lowmem_reserve[]: 0 0 0 0 0
[  674.208059] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  674.208100] Node 0 DMA32: 1*4kB (U) 1*8kB (U) 5*16kB (U) 13*32kB (UM) 5*64kB (U) 1*128kB (U) 1*256kB (M) 2*512kB (UM) 1*1024kB (M) 2*2048kB (UM) 20*4096kB (M) = 89276kB
[  674.208142] Node 0 Normal: 689*4kB (UME) 278*8kB (UE) 206*16kB (UME) 270*32kB (UME) 172*64kB (UME) 154*128kB (UME) 118*256kB (UME) 38*512kB (U) 36*1024kB (U) 33*2048kB (U) 1052*4096kB (U) = 4510740kB
[  674.208184] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  674.208203] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  674.208221] 105926 total pagecache pages
[  674.208231] 6143894 pages RAM
[  674.408259] 0 pages HighMem/MovableOnly
[  674.408274] 1816372 pages reserved
[  674.408286] 0 pages cma reserved
[  674.408299] 0 pages hwpoisoned
[  674.408310] Showing busy workqueues and worker pools:
[  674.408327] workqueue events: flags=0x0
[  674.408339]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  674.408374]     in-flight: 230:balloon_process
[  674.408399]     pending: balloon_process
[  674.408420] workqueue events_power_efficient: flags=0x80
[  674.408440]   pwq 12: cpus=6 node=0 flags=0x0 nice=0 active=1/256
[  674.408459]     pending: gc_worker [nf_conntrack]
[  674.408499] workqueue mm_percpu_wq: flags=0x8
[  674.408517]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  674.408537]     pending: drain_local_pages_wq BAR(4553), vmstat_update
[  674.408615] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=4s workers=2 idle: 61
[  674.408643] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  675.423224] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  675.423259] MemAlloc: kswapd0(108) flags=0xa20840 switches=19
[  675.423270] kswapd0         S    0   108      2 0x80000000
[  675.423279] Call Trace:
[  675.423291]  ? __schedule+0x3f3/0x8c0
[  675.423298]  schedule+0x36/0x80
[  675.423325]  kswapd+0x584/0x590
[  675.423335]  ? remove_wait_queue+0x70/0x70
[  675.423342]  kthread+0x105/0x140
[  675.423350]  ? balance_pgdat+0x3e0/0x3e0
[  675.423357]  ? kthread_stop+0x100/0x100
[  675.423377]  ret_from_fork+0x35/0x40
[  675.423410] MemAlloc: sh(4553) flags=0x404000 switches=8 seq=115 gfp=0x6200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=5366 uninterruptible
[  675.423429] sh              D    0  4553   4552 0x00000080
[  675.423437] Call Trace:
[  675.423443]  ? __schedule+0x3f3/0x8c0
[  675.423463]  ? __switch_to_asm+0x40/0x70
[  675.423483]  ? __switch_to_asm+0x34/0x70
[  675.423491]  schedule+0x36/0x80
[  675.423501]  schedule_timeout+0x29b/0x4d0
[  675.423513]  ? __switch_to+0xb2/0x4d0
[  675.423523]  ? __switch_to_asm+0x40/0x70
[  675.423534]  ? finish_task_switch+0x75/0x2a0
[  675.423547]  wait_for_completion+0x121/0x190
[  675.423561]  ? wake_up_q+0x80/0x80
[  675.423572]  flush_work+0x18f/0x200
[  675.423584]  ? rcu_free_pwq+0x20/0x20
[  675.423595]  __alloc_pages_slowpath+0x766/0x1590
[  675.423608]  ? filemap_fault+0x23c/0xa20
[  675.423618]  __alloc_pages_nodemask+0x302/0x3c0
[  675.423630]  alloc_pages_vma+0xac/0x4f0
[  675.423640]  __handle_mm_fault+0x71d/0xf10
[  675.423648]  ? do_mmap+0x463/0x5b0
[  675.423656]  handle_mm_fault+0x102/0x2c0
[  675.423663]  __do_page_fault+0x294/0x540
[  675.423671]  ? __audit_syscall_exit+0x2bf/0x3e0
[  675.423681]  do_page_fault+0x38/0x120
[  675.423687]  ? page_fault+0x8/0x30
[  675.423694]  page_fault+0x1e/0x30
[  675.423701] RIP: 0033:0x727e0af72f7f
[  675.423708] Code: Bad RIP value.
[  675.423718] RSP: 002b:00007fff560ef8c8 EFLAGS: 00010206
[  675.423728] RAX: 0000727e0ab1e860 RBX: 0000727e0b1609c0 RCX: 0000727e0ab1e8a0
[  675.423739] RDX: 00000000000007a0 RSI: 0000000000000000 RDI: 0000727e0ab1e860
[  675.423752] RBP: 00007fff560efbf0 R08: 0000727e0ab22ae0 R09: 00000000001b5000
[  675.423766] R10: 0000727e0ab1f000 R11: 0000000000000206 R12: 00007fff560ef900
[  675.423779] R13: 00007fff560efcd8 R14: 000000000000ca03 R15: 0000000000000002
[  675.423793] Mem-Info:
[  675.423803] active_anon:2995292 inactive_anon:8764 isolated_anon:0
                active_file:97027 inactive_file:0 isolated_file:0
                unevictable:6301 dirty:0 writeback:0 unstable:0
                slab_reclaimable:8438 slab_unreclaimable:13125
                mapped:22388 shmem:8901 pagetables:8196 bounce:0
                free:1766211 free_pcp:1138 free_cma:0
[  675.423855] Node 0 active_anon:11981168kB inactive_anon:35056kB active_file:388108kB inactive_file:0kB unevictable:25204kB isolated(anon):0kB isolated(file):0kB mapped:89552kB dirty:0kB writeback:0kB shmem:35604kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  675.423893] Node 0 DMA free:15904kB min:44kB low:56kB high:68kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  675.423928] lowmem_reserve[]: 0 3956 23499 23499 23499
[  675.423938] Node 0 DMA32 free:89276kB min:11368kB low:15416kB high:19464kB active_anon:3971524kB inactive_anon:0kB active_file:112kB inactive_file:0kB unevictable:0kB writepending:0kB present:4159452kB managed:4070136kB mlocked:0kB kernel_stack:16kB pagetables:7968kB bounce:0kB free_pcp:1180kB local_pcp:0kB free_cma:0kB
[  675.423975] lowmem_reserve[]: 0 0 19543 19543 19543
[  675.423985] Node 0 Normal free:6959664kB min:56168kB low:76180kB high:96192kB active_anon:8009644kB inactive_anon:35056kB active_file:387996kB inactive_file:0kB unevictable:25204kB writepending:0kB present:20400128kB managed:15673456kB mlocked:25204kB kernel_stack:4912kB pagetables:24816kB bounce:0kB free_pcp:3372kB local_pcp:0kB free_cma:0kB
[  675.424043] lowmem_reserve[]: 0 0 0 0 0
[  675.424052] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
[  675.424079] Node 0 DMA32: 1*4kB (U) 1*8kB (U) 5*16kB (U) 13*32kB (UM) 5*64kB (U) 1*128kB (U) 1*256kB (M) 2*512kB (UM) 1*1024kB (M) 2*2048kB (UM) 20*4096kB (M) = 89276kB
[  675.424107] Node 0 Normal: 690*4kB (UME) 298*8kB (UME) 198*16kB (UME) 281*32kB (UME) 189*64kB (UME) 165*128kB (UME) 135*256kB (UME) 251*512kB (UM) 294*1024kB (U) 177*2048kB (U) 1485*4096kB (U) = 6959704kB
[  675.424142] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  675.624183] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  675.624215] 105926 total pagecache pages
[  675.624241] 6143894 pages RAM
[  675.624248] 0 pages HighMem/MovableOnly
[  675.624256] 1081652 pages reserved
[  675.624265] 0 pages cma reserved
[  675.624272] 0 pages hwpoisoned
[  675.624293] Showing busy workqueues and worker pools:
[  675.624318] workqueue events: flags=0x0
[  675.624324]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  675.624335]     in-flight: 230:balloon_process
[  675.624346]     pending: balloon_process
[  675.624356] workqueue events_power_efficient: flags=0x80
[  675.624364]   pwq 12: cpus=6 node=0 flags=0x0 nice=0 active=1/256
[  675.624377]     pending: gc_worker [nf_conntrack]
[  675.624398] workqueue mm_percpu_wq: flags=0x8
[  675.624407]   pwq 16: cpus=8 node=0 flags=0x0 nice=0 active=2/256
[  675.624418]     pending: drain_local_pages_wq BAR(4553), vmstat_update
[  675.624442] pool 16: cpus=8 node=0 flags=0x0 nice=0 hung=5s workers=2 idle: 61
[  675.624458] MemAlloc-Info: stalling=1 dying=0 exiting=0 victim=0 oom_count=0
[  675.761377] sh(4553): READ block 4478856 on xvda3 (72 sectors)
[  675.762177] sh(4553): READ block 7353744 on xvda3 (216 sectors)
[  675.763041] sh(4553): READ block 7353960 on xvda3 (112 sectors)
[  675.763509] sh(4553): READ block 5494784 on xvda3 (32 sectors)
[  675.764331] cat(4553): READ block 5494816 on xvda3 (64 sectors)
[  679.266410] bash(4623): READ block 5609600 on xvda3 (32 sectors)
[  679.267084] sudo(4623): READ block 5609752 on xvda3 (160 sectors)
[  679.267921] sudo(4623): READ block 5609632 on xvda3 (120 sectors)
[  679.268046] sudo(4623): READ block 4635536 on xvda3 (24 sectors)
[  679.269063] sudo(4623): READ block 4477232 on xvda3 (48 sectors)
[  679.269431] sudo(4623): READ block 4479216 on xvda3 (16 sectors)
[  679.269673] sudo(4623): READ block 5615104 on xvda3 (32 sectors)
[  679.269990] sudo(4623): READ block 5615264 on xvda3 (8 sectors)
[  679.270303] sudo(4623): READ block 5615144 on xvda3 (120 sectors)
[  679.270315] sudo(4623): READ block 5615272 on xvda3 (32 sectors)
[  679.270865] sudo(4623): READ block 5705304 on xvda3 (16 sectors)
[  679.271315] sudo(4623): READ block 6514496 on xvda3 (32 sectors)
[  679.271804] sudo(4623): READ block 4481224 on xvda3 (104 sectors)
[  679.272628] sudo(4623): READ block 5615136 on xvda3 (8 sectors)
[  679.273065] sudo(4623): READ block 5095768 on xvda3 (8 sectors)
[  679.273680] sudo(4623): READ block 8851584 on xvda3 (8 sectors)
[  679.274313] sudo(4623): READ block 4468968 on xvda3 (24 sectors)
[  679.274701] sudo(4623): READ block 10125608 on xvda3 (8 sectors)
[  679.275426] sudo(4623): READ block 5613568 on xvda3 (32 sectors)
[  679.275676] sudo(4623): READ block 5614360 on xvda3 (8 sectors)
[  679.275977] sudo(4623): READ block 5614264 on xvda3 (96 sectors)
[  679.275991] sudo(4623): READ block 5614368 on xvda3 (80 sectors)
[  679.276973] sudo(4623): READ block 4722232 on xvda3 (24 sectors)
[  679.277728] sudo(4623): READ block 4739200 on xvda3 (32 sectors)
[  679.278051] sudo(4623): READ block 4739840 on xvda3 (8 sectors)
[  679.278284] sudo(4623): READ block 4739736 on xvda3 (104 sectors)
[  679.278297] sudo(4623): READ block 4739848 on xvda3 (72 sectors)
[  679.279035] sudo(4623): READ block 4739232 on xvda3 (224 sectors)
[  679.279829] sudo(4623): READ block 4739072 on xvda3 (32 sectors)
[  679.280143] sudo(4623): READ block 4739176 on xvda3 (8 sectors)
[  679.280419] sudo(4623): READ block 4739104 on xvda3 (72 sectors)
[  679.280428] sudo(4623): READ block 4739184 on xvda3 (16 sectors)
[  679.280992] sudo(4623): READ block 4481472 on xvda3 (48 sectors)
[  679.281421] sudo(4623): READ block 4711352 on xvda3 (32 sectors)
[  679.281951] sudo(4623): READ block 4711568 on xvda3 (8 sectors)
[  679.282231] sudo(4623): READ block 4711448 on xvda3 (120 sectors)
[  679.282241] sudo(4623): READ block 4711576 on xvda3 (16 sectors)
[  679.283019] sudo(4623): READ block 4554728 on xvda3 (32 sectors)
[  679.284074] sudo(4623): READ block 4555512 on xvda3 (8 sectors)
[  679.284363] sudo(4623): READ block 4555464 on xvda3 (48 sectors)
[  679.284381] sudo(4623): READ block 4555520 on xvda3 (128 sectors)
[  679.285201] sudo(4623): READ block 4554760 on xvda3 (224 sectors)
[  679.286152] sudo(4623): READ block 4554088 on xvda3 (256 sectors)
[  679.287058] sudo(4623): READ block 4698712 on xvda3 (32 sectors)
[  679.287075] sudo(4623): READ block 4698752 on xvda3 (80 sectors)
[  679.287782] sudo(4623): READ block 4698072 on xvda3 (56 sectors)
[  679.288554] sudo(4623): READ block 6387888 on xvda3 (72 sectors)
[  679.288570] sudo(4623): READ block 6387968 on xvda3 (176 sectors)
[  679.289452] sudo(4623): READ block 4464424 on xvda3 (80 sectors)
[  679.289977] sudo(4623): READ block 4604240 on xvda3 (16 sectors)
[  679.290524] sudo(4623): READ block 4604200 on xvda3 (16 sectors)
[  679.290784] sudo(4623): READ block 4605544 on xvda3 (72 sectors)
[  679.291466] sudo(4623): READ block 4788048 on xvda3 (112 sectors)
[  679.292340] sudo(4623): READ block 6133472 on xvda3 (256 sectors)
[  679.293427] sudo(4623): READ block 4463304 on xvda3 (72 sectors)
[  679.293802] sudo(4623): READ block 4462008 on xvda3 (8 sectors)
[  679.294097] sudo(4623): READ block 4469048 on xvda3 (32 sectors)
[  679.294475] sudo(4623): READ block 4473456 on xvda3 (24 sectors)
[  679.295040] sudo(4623): READ block 4553776 on xvda3 (120 sectors)
[  679.295052] sudo(4623): READ block 4553904 on xvda3 (128 sectors)
[  679.296617] sudo(4623): READ block 4554032 on xvda3 (56 sectors)
[  679.296650] sudo(4623): READ block 4554344 on xvda3 (200 sectors)
[  679.298139] sudo(4623): READ block 4554984 on xvda3 (256 sectors)
[  679.298376] sudo(4623): READ block 4711384 on xvda3 (64 sectors)
[  679.299508] sudo(4623): READ block 4739456 on xvda3 (256 sectors)
[  679.299714] sudo(4623): READ block 5613600 on xvda3 (224 sectors)
[  679.300736] sudo(4623): READ block 5613824 on xvda3 (256 sectors)
[  679.300893] sudo(4623): READ block 5614080 on xvda3 (184 sectors)
[  679.301911] sudo(4623): READ block 4481696 on xvda3 (8 sectors)
[  679.302374] sudo(4623): READ block 9826640 on xvda3 (8 sectors)
[  679.302819] sudo(4623): READ block 9984760 on xvda3 (8 sectors)
[  679.303071] sudo(4623): READ block 9847368 on xvda3 (8 sectors)
[  679.303409] sudo(4623): READ block 8814904 on xvda3 (8 sectors)
[  679.303708] sudo(4623): READ block 10449008 on xvda3 (8 sectors)
[  679.304095] sudo(4623): READ block 4790832 on xvda3 (40 sectors)
[  679.304105] sudo(4623): READ block 4790880 on xvda3 (88 sectors)
[  679.304553] sudo(4623): READ block 4598920 on xvda3 (16 sectors)
[  679.304808] sudo(4623): READ block 5159096 on xvda3 (16 sectors)
[  679.305679] sudo(4623): READ block 9846504 on xvda3 (8 sectors)
[  679.306063] sudo(4623): READ block 8811728 on xvda3 (8 sectors)
[  679.306404] sudo(4623): READ block 4722584 on xvda3 (8 sectors)
[  679.306652] sudo(4623): READ block 4723712 on xvda3 (24 sectors)
[  679.306976] sudo(4623): READ block 4797512 on xvda3 (64 sectors)
[  679.307706] sudo(4623): READ block 4718168 on xvda3 (24 sectors)
[  679.308309] sudo(4623): READ block 4722944 on xvda3 (16 sectors)
[  679.308606] sudo(4623): READ block 4727808 on xvda3 (88 sectors)
[  679.308997] sudo(4623): READ block 4722296 on xvda3 (8 sectors)
[  679.309446] sudo(4623): READ block 8811696 on xvda3 (8 sectors)
[  679.309657] sudo(4623): READ block 8784704 on xvda3 (8 sectors)
[  679.310117] sudo(4623): READ block 10125624 on xvda3 (8 sectors)
[  679.310385] audit: type=1101 audit(1535719458.964:134): pid=4623 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[  679.310527] sudo(4623): READ block 8811808 on xvda3 (8 sectors)
[  679.310633] audit: type=1123 audit(1535719458.964:135): pid=4623 uid=1000 auid=1000 ses=1 msg='cwd="/home/user" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D30 terminal=pts/3 res=success'
[  679.310877] sudo(4623): READ block 8811784 on xvda3 (8 sectors)
[  679.310993] audit: type=1110 audit(1535719458.964:136): pid=4623 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[  679.311223] sudo(4623): READ block 9985296 on xvda3 (8 sectors)
[  679.312564] systemd-journal(288): dirtied inode 13198 (exe) on proc
[  679.312751] audit: type=1105 audit(1535719458.966:137): pid=4623 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[  679.312970] sudo(4624): READ block 5274248 on xvda3 (32 sectors)
[  679.313527] sysctl(4624): READ block 5274280 on xvda3 (16 sectors)
[  679.313911] sysctl(4624): READ block 5274216 on xvda3 (32 sectors)
[  679.314464] sysctl(4624): READ block 6130744 on xvda3 (48 sectors)
[  679.314477] sysctl(4624): READ block 6130800 on xvda3 (120 sectors)
[  679.315209] sysctl(4624): READ block 4596040 on xvda3 (64 sectors)
[  679.315582] sysctl(4624): READ block 4763312 on xvda3 (72 sectors)
[  679.316158] sysctl(4624): READ block 5287928 on xvda3 (80 sectors)
[  679.316171] sysctl(4624): READ block 5288016 on xvda3 (152 sectors)
[  679.317545] sysctl(4624): READ block 4750064 on xvda3 (128 sectors)
[  679.318257] sysctl(4624): READ block 5285776 on xvda3 (24 sectors)
[  679.319268] sysctl(4624): dirtied inode 20931 (block_dump) on proc
[  679.319563] audit: type=1106 audit(1535719458.973:138): pid=4623 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'

Here's what files were accessed (translated from the above dmesg READ blocks):

$ ./showallblocks |grep 'path :'|sort -u
path : /etc/group
path : /etc/login.defs
path : /etc/nsswitch.conf
path : /etc/pam.d/other
path : /etc/pam.d/sudo
path : /etc/pam.d/system-auth
path : /etc/passwd
path : /etc/security/limits.conf
path : /etc/security/limits.d/90-qubes-gui.conf
path : /etc/security/pam_env.conf
path : /etc/shadow
path : /etc/sudoers
path : /etc/sudoers.d/qt_x11_no_mitshm
path : /etc/sudoers.d/qubes
path : /etc/sudoers.d/qubes-input-trigger
path : /usr/bin/bash
path : /usr/bin/cat
path : /usr/bin/sudo
path : /usr/lib64/ld-2.27.so
path : /usr/lib64/libaudit.so.1.0.0
path : /usr/lib64/libblkid.so.1.1.0
path : /usr/lib64/libc-2.27.so
path : /usr/lib64/libcap-ng.so.0.0.0
path : /usr/lib64/libcap.so.2.25
path : /usr/lib64/libcom_err.so.2.1
path : /usr/lib64/libcrack.so.2.9.0
path : /usr/lib64/libcrypto.so.1.1.0h
path : /usr/lib64/libdl-2.27.so
path : /usr/lib64/libgcc_s-8-20180712.so.1
path : /usr/lib64/libgcrypt.so.20.2.3
path : /usr/lib64/libgpg-error.so.0.24.2
path : /usr/lib64/libgssapi_krb5.so.2.2
path : /usr/lib64/libk5crypto.so.3.1
path : /usr/lib64/libkrb5.so.3.3
path : /usr/lib64/libkrb5support.so.0.1
path : /usr/lib64/liblber-2.4.so.2.10.9
path : /usr/lib64/libldap-2.4.so.2.10.9
path : /usr/lib64/liblzma.so.5.2.4
path : /usr/lib64/libmount.so.1.1.0
path : /usr/lib64/libnspr4.so
path : /usr/lib64/libnss3.so
path : /usr/lib64/libnss_files-2.27.so
path : /usr/lib64/libnss_systemd.so.2
path : /usr/lib64/libnssutil3.so
path : /usr/lib64/libpam_misc.so.0.82.1
path : /usr/lib64/libpam.so.0.84.2
path : /usr/lib64/libpcre2-8.so.0.7.0
path : /usr/lib64/libplc4.so
path : /usr/lib64/libplds4.so
path : /usr/lib64/libprocps.so.6.0.0
path : /usr/lib64/libpthread-2.27.so
path : /usr/lib64/libresolv-2.27.so
path : /usr/lib64/librt-2.27.so
path : /usr/lib64/libsasl2.so.3.0.0
path : /usr/lib64/libselinux.so.1
path : /usr/lib64/libsmime3.so
path : /usr/lib64/libssl3.so
path : /usr/lib64/libssl.so.1.1.0h
path : /usr/lib64/libsystemd.so.0.22.0
path : /usr/lib64/libtinfo.so.6.1
path : /usr/lib64/libtirpc.so.3.0.0
path : /usr/lib64/libutil-2.27.so
path : /usr/lib64/security/pam_env.so
path : /usr/lib64/security/pam_limits.so
path : /usr/lib64/security/pam_systemd.so
path : /usr/lib64/security/pam_unix.so
path : /usr/libexec/sudo/libsudo_util.so.0.0.0
path : /usr/libexec/sudo/sudoers.so
path : /usr/sbin/sysctl
path : /usr/share/fonts/dejavu/DejaVuSansMono-Bold.ttf
path : /usr/share/locale/locale.alias
path : /usr/share/zoneinfo/[censored hihihi]

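The translation my showallblocks script does can be approximated by hand. A hedged sketch, assuming (not confirmed by the log itself) that the dmesg READ numbers are 512-byte sector offsets on the xvda3 partition and that the filesystem uses 4 KiB blocks — showallblocks is my own script and may work differently:

```python
# Map a vm.block_dump READ number to the debugfs commands that resolve
# it to a path. ASSUMPTIONS: the dmesg number is a 512-byte sector
# offset from the start of the partition; the fs block size is 4 KiB.

SECTOR_SIZE = 512
FS_BLOCK_SIZE = 4096
SECTORS_PER_BLOCK = FS_BLOCK_SIZE // SECTOR_SIZE  # 8

def fs_block(dmesg_block):
    """Convert a dmesg READ block (sector) to a filesystem block number."""
    return dmesg_block // SECTORS_PER_BLOCK

def debugfs_commands(dmesg_block, device="/dev/xvda3"):
    """debugfs invocations: block -> inode (icheck), then inode -> path (ncheck)."""
    blk = fs_block(dmesg_block)
    return [
        "debugfs -R 'icheck %d' %s" % (blk, device),
        # follow up with: debugfs -R 'ncheck <inode>' <device>
    ]

print(fs_block(4739232))             # 592404
print(debugfs_commands(4739232)[0])
```

Running the emitted `icheck`/`ncheck` pair against the (unmounted or read-only) device is the manual equivalent of the sorted path list above.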
Programs running were: bash, top, gnome-terminal, dmesg (without sudo), watch -n0.1 -d cat /proc/meminfo, and the stress command shown above (which eventually calls sysctl). Both patches (le9b.patch and malloc_watchdog.patch) were applied.

Hmm, wait a sec... this shouldn't happen with my patch. Executable pages are still being read back in from disk!? (those .so files, cat, bash, ...)

@constantoverride

Owner Author

commented Aug 31, 2018

Recompiled and reinstalled the kernel (with both patches) just to be sure.
Let's try with 4000 MB of RAM this time:

$ sudo sysctl -w vm.block_dump=1 && time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; echo $?; sudo sysctl -w vm.block_dump=0
vm.block_dump = 1
stress: info: [1622] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1622] (415) <-- worker 1623 got signal 9
stress: WARN: [1622] (417) now reaping child worker processes
stress: FAIL: [1622] (451) failed run completed in 1s

real	0m1.098s
user	0m0.175s
sys	0m0.900s
1
vm.block_dump = 0
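What the awk one-liner in that command computes: it reads MemAvailable (in KiB) from /proc/meminfo and asks stress to allocate that amount plus 4000 KiB, i.e. just under 4 MiB more than the kernel considers available, which reliably triggers the OOM path. A minimal sketch of the same arithmetic (the sample meminfo text is made up for illustration):

```python
# Reproduce the awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' step:
# take MemAvailable in KiB and overshoot it by 4000 KiB so the stress
# worker's allocation cannot be satisfied without reclaim/OOM.

def vm_bytes_kib(meminfo_text, overshoot_kib=4000):
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1]) + overshoot_kib
    raise ValueError("MemAvailable not found")

sample = "MemTotal:  4095476 kB\nMemAvailable:  3600000 kB\n"
print(vm_bytes_kib(sample))  # 3604000  -> passed to stress as "...k"
```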
dmesg
[   37.120950] audit: type=1131 audit(1535721330.640:71): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=mlocate-updatedb comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   72.980131] audit: type=1101 audit(1535721366.499:72): pid=1616 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   72.980672] audit: type=1123 audit(1535721366.500:73): pid=1616 uid=1000 auid=1000 ses=1 msg='cwd="/home/user" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D31 terminal=pts/0 res=success'
[   72.981914] audit: type=1110 audit(1535721366.501:74): pid=1616 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   72.985924] audit: type=1105 audit(1535721366.505:75): pid=1616 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   72.996229] audit: type=1106 audit(1535721366.516:76): pid=1616 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   72.996430] audit: type=1104 audit(1535721366.516:77): pid=1616 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   72.997365] sudo(1616): READ block 4739712 on xvda3 (24 sectors)
[   72.998368] sudo(1616): READ block 6387424 on xvda3 (256 sectors)
[   73.014472] bash(1622): READ block 4470464 on xvda3 (32 sectors)
[   73.015674] stress(1622): READ block 4470496 on xvda3 (16 sectors)
[   73.793256] sh(1639): READ block 7353800 on xvda3 (256 sectors)
[   73.794696] sh(1639): READ block 4471744 on xvda3 (56 sectors)
[   73.795396] sh(1639): READ block 8932864 on xvda3 (192 sectors)
[   73.796171] sh(1639): READ block 4478856 on xvda3 (72 sectors)
[   73.796655] sh(1639): READ block 4479176 on xvda3 (16 sectors)
[   73.796981] sh(1639): READ block 6647192 on xvda3 (256 sectors)
[   73.798410] cat(1639): READ block 5494848 on xvda3 (32 sectors)
[   73.832404] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[   73.832421] stress cpuset=/ mems_allowed=0
[   73.832430] CPU: 9 PID: 1623 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[   73.832441] Call Trace:
[   73.832461]  dump_stack+0x63/0x83
[   73.832469]  dump_header+0x6e/0x285
[   73.832475]  oom_kill_process+0x23c/0x450
[   73.832481]  out_of_memory+0x147/0x590
[   73.832487]  __alloc_pages_slowpath+0x134c/0x1590
[   73.832496]  __alloc_pages_nodemask+0x302/0x3c0
[   73.832503]  alloc_pages_vma+0xac/0x4f0
[   73.832510]  do_anonymous_page+0x105/0x3f0
[   73.832516]  __handle_mm_fault+0xbc9/0xf10
[   73.832522]  ? __switch_to_asm+0x40/0x70
[   73.832528]  handle_mm_fault+0x102/0x2c0
[   73.832535]  __do_page_fault+0x294/0x540
[   73.832541]  do_page_fault+0x38/0x120
[   73.832547]  ? page_fault+0x8/0x30
[   73.832553]  page_fault+0x1e/0x30
[   73.832559] RIP: 0033:0x5a93f17bcdd0
[   73.832565] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[   73.832603] RSP: 002b:00007ffc773e5230 EFLAGS: 00010206
[   73.832610] RAX: 00000000b971d000 RBX: 0000767407dc7010 RCX: 0000767407dc7010
[   73.832620] RDX: 0000000000000001 RSI: 00000000d3b9c000 RDI: 0000000000000000
[   73.832630] RBP: 00005a93f17bdbb4 R08: 00000000ffffffff R09: 0000000000000000
[   73.832640] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[   73.832650] R13: 0000000000000002 R14: 0000000000001000 R15: 00000000d3b9b000
[   73.832661] Mem-Info:
[   73.832667] active_anon:793974 inactive_anon:4672 isolated_anon:0
                active_file:130890 inactive_file:69 isolated_file:0
                unevictable:3534 dirty:0 writeback:0 unstable:0
                slab_reclaimable:9267 slab_unreclaimable:12129
                mapped:22729 shmem:4805 pagetables:3777 bounce:0
                free:14989 free_pcp:274 free_cma:0
[   73.832727] Node 0 active_anon:3175896kB inactive_anon:18688kB active_file:523560kB inactive_file:276kB unevictable:14136kB isolated(anon):0kB isolated(file):0kB mapped:90916kB dirty:0kB writeback:0kB shmem:19220kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[   73.832774] Node 0 DMA free:15676kB min:176kB low:220kB high:264kB active_anon:224kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   73.832809] lowmem_reserve[]: 0 3876 3876 3876 3876
[   73.832817] Node 0 DMA32 free:44280kB min:44876kB low:56092kB high:67308kB active_anon:3175588kB inactive_anon:18688kB active_file:523448kB inactive_file:720kB unevictable:14136kB writepending:0kB present:4079616kB managed:3971476kB mlocked:14136kB kernel_stack:4928kB pagetables:15104kB bounce:0kB free_pcp:1096kB local_pcp:288kB free_cma:0kB
[   73.832853] lowmem_reserve[]: 0 0 0 0 0
[   73.832859] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 1*32kB (M) 2*64kB (U) 1*128kB (U) 2*256kB (UM) 1*512kB (M) 2*1024kB (UM) 0*2048kB 3*4096kB (M) = 15676kB
[   73.832883] Node 0 DMA32: 1043*4kB (UE) 294*8kB (UME) 198*16kB (UME) 258*32kB (UE) 189*64kB (UE) 68*128kB (UE) 22*256kB (UME) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 44380kB
[   73.832910] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   73.832924] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   73.832935] 135803 total pagecache pages
[   73.832940] 1023903 pages RAM
[   73.832945] 0 pages HighMem/MovableOnly
[   73.832950] 27058 pages reserved
[   73.832955] 0 pages cma reserved
[   73.832960] 0 pages hwpoisoned
[   73.832965] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[   73.832982] [  290]     0   290    24952     2119   184320        0             0 systemd-journal
[   73.832994] [  298]     0   298    30805      551   163840        0             0 qubesdb-daemon
[   73.833020] [  327]     0   327    23557     1938   204800        0         -1000 systemd-udevd
[   73.833032] [  453]     0   453    19356     1535   188416        0             0 systemd-logind
[   73.833044] [  454]    81   454    13254     1166   151552        0          -900 dbus-daemon
[   73.833056] [  461]     0   461     3042     1196    69632        0             0 haveged
[   73.833067] [  463]     0   463    10243       73   118784        0             0 meminfo-writer
[   73.833079] [  472]     0   472    34209      657   180224        0             0 xl
[   73.833089] [  484]     0   484    18919      903   200704        0             0 qubes-gui
[   73.833100] [  489]     0   489    16536      830   172032        0             0 qrexec-agent
[   73.833112] [  491]     0   491    52863      400    69632        0             0 agetty
[   73.833123] [  492]     0   492    52775      538    69632        0             0 agetty
[   73.833137] [  572]     0   572    73994     1314   245760        0             0 su
[   73.833148] [  578]  1000   578    21933     2030   208896        0             0 systemd
[   74.033162] [  579]  1000   579    34788      610   286720        0             0 (sd-pam)
[   74.033173] [  584]  1000   584    54160      837    77824        0             0 bash
[   74.033184] [  605]  1000   605     3500      287    77824        0             0 xinit
[   74.033195] [  606]  1000   606   313480    23531   700416        0             0 Xorg
[   74.033205] [  621]  1000   621    53597      756    81920        0             0 qubes-session
[   74.033218] [  626]  1000   626    13194     1102   151552        0             0 dbus-daemon
[   74.033230] [  638]  1000   638     7233      118    94208        0             0 ssh-agent
[   74.033240] [  656]  1000   656    16562      578   172032        0             0 qrexec-client-v
[   74.033253] [  734]  1000   734    48107     1292   147456        0             0 dconf-service
[   74.033265] [  751]  1000   751   428399    12273   827392        0             0 gsd-xsettings
[   74.033278] [  755]  1000   755   122406     1540   192512        0             0 gnome-keyring-d
[   74.033293] [  759]  1000   759   120207     1362   172032        0             0 agent
[   74.033304] [  760]  1000   760    62744     2938   155648        0             0 icon-sender
[   74.033316] [  775]  1000   775   438415    13960   897024        0             0 nm-applet
[   74.033329] [  777]  1000   777   128956     2123   401408        0             0 pulseaudio
[   74.033341] [  778]   172   778    47723      821   147456        0             0 rtkit-daemon
[   74.033353] [  785]   998   785   657134     5390   417792        0             0 polkitd
[   74.033364] [  797]  1000   797    16528      101   167936        0             0 qrexec-fork-ser
[   74.033378] [  800]  1000   800    52238      181    69632        0             0 sleep
[   74.033389] [  914]  1000   914    87397     1533   180224        0             0 at-spi-bus-laun
[   74.033418] [  919]  1000   919    13134      972   159744        0             0 dbus-daemon
[   74.033431] [  923]  1000   923    56364     1539   208896        0             0 at-spi2-registr
[   74.033458] [  930]  1000   930   123835     1753   221184        0             0 gvfsd
[   74.033469] [  933]  1000   933   208257     9799   573440        0             0 gnome-terminal-
[   74.033482] [  939]  1000   939    89299     1333   192512        0             0 gvfsd-fuse
[   74.033511] [  953]  1000   953   169493     2763   274432        0             0 xdg-desktop-por
[   74.033539] [  958]  1000   958   173633     1512   200704        0             0 xdg-document-po
[   74.033571] [  961]  1000   961   117667     1269   167936        0             0 xdg-permission-
[   74.033583] [  971]  1000   971   193293     5053   475136        0             0 xdg-desktop-por
[   74.033611] [  979]  1000   979    54291     1077    77824        0             0 bash
[   74.033622] [ 1018]  1000  1018    54291     1068    86016        0             0 bash
[   74.033632] [ 1043]  1000  1043    53987      752    73728        0             0 watch
[   74.033642] [ 1188]  1000  1188    54291     1080    81920        0             0 bash
[   74.033652] [ 1239]  1000  1239    53876      273    77824        0             0 dmesg
[   74.033663] [ 1622]  1000  1622     2000      283    61440        0             0 stress
[   74.033673] [ 1623]  1000  1623   869228   759595  6156288        0             0 stress
[   74.033683] Out of memory: Kill process 1623 (stress) score 763 or sacrifice child
[   74.033694] Killed process 1623 (stress) total-vm:3476912kB, anon-rss:3038168kB, file-rss:212kB, shmem-rss:0kB
[   74.034370] sh(1641): READ block 7353800 on xvda3 (256 sectors)
[   74.034625] gnome-terminal-(933): READ block 13465112 on xvda3 (240 sectors)
[   74.035552] sh(1641): READ block 8932864 on xvda3 (192 sectors)
[   74.036009] gnome-terminal-(933): READ block 13464704 on xvda3 (256 sectors)
[   74.036375] sh(1641): READ block 4478856 on xvda3 (72 sectors)
[   74.037866] sh(1641): READ block 5494784 on xvda3 (32 sectors)
[   74.038366] cat(1641): READ block 5494816 on xvda3 (64 sectors)
[   74.099622] bash(1642): READ block 5609600 on xvda3 (32 sectors)
[   74.100278] sudo(1642): READ block 5609752 on xvda3 (160 sectors)
[   74.101215] sudo(1642): READ block 5609632 on xvda3 (120 sectors)
[   74.101275] sudo(1642): READ block 4635536 on xvda3 (24 sectors)
[   74.102047] sudo(1642): READ block 4477232 on xvda3 (48 sectors)
[   74.102546] sudo(1642): READ block 4479216 on xvda3 (16 sectors)
[   74.102780] sudo(1642): READ block 5615104 on xvda3 (32 sectors)
[   74.103058] sudo(1642): READ block 5615264 on xvda3 (8 sectors)
[   74.103346] sudo(1642): READ block 5615144 on xvda3 (120 sectors)
[   74.103355] sudo(1642): READ block 5615272 on xvda3 (32 sectors)
[   74.103878] sudo(1642): READ block 5705304 on xvda3 (16 sectors)
[   74.104319] sudo(1642): READ block 6514496 on xvda3 (32 sectors)
[   74.104765] sudo(1642): READ block 4481224 on xvda3 (104 sectors)
[   74.105521] sudo(1642): READ block 5615136 on xvda3 (8 sectors)
[   74.105867] sudo(1642): READ block 5095768 on xvda3 (8 sectors)
[   74.106204] sudo(1642): READ block 8851584 on xvda3 (8 sectors)
[   74.106744] sudo(1642): READ block 4468968 on xvda3 (24 sectors)
[   74.107120] sudo(1642): READ block 10125608 on xvda3 (8 sectors)
[   74.107577] sudo(1642): READ block 5613568 on xvda3 (32 sectors)
[   74.107844] sudo(1642): READ block 5614360 on xvda3 (8 sectors)
[   74.108131] sudo(1642): READ block 5614264 on xvda3 (96 sectors)
[   74.108143] sudo(1642): READ block 5614368 on xvda3 (80 sectors)
[   74.109123] sudo(1642): READ block 4722232 on xvda3 (24 sectors)
[   74.109656] sudo(1642): READ block 4739200 on xvda3 (32 sectors)
[   74.109950] sudo(1642): READ block 4739840 on xvda3 (8 sectors)
[   74.110159] sudo(1642): READ block 4739736 on xvda3 (104 sectors)
[   74.110170] sudo(1642): READ block 4739848 on xvda3 (72 sectors)
[   74.110913] sudo(1642): READ block 4739232 on xvda3 (224 sectors)
[   74.111695] sudo(1642): READ block 4739072 on xvda3 (32 sectors)
[   74.111982] sudo(1642): READ block 4739176 on xvda3 (8 sectors)
[   74.112216] sudo(1642): READ block 4739104 on xvda3 (72 sectors)
[   74.112224] sudo(1642): READ block 4739184 on xvda3 (16 sectors)
[   74.112706] sudo(1642): READ block 4481472 on xvda3 (48 sectors)
[   74.113059] sudo(1642): READ block 4711352 on xvda3 (32 sectors)
[   74.113561] sudo(1642): READ block 4711568 on xvda3 (8 sectors)
[   74.113790] sudo(1642): READ block 4711448 on xvda3 (120 sectors)
[   74.113797] sudo(1642): READ block 4711576 on xvda3 (16 sectors)
[   74.114173] sudo(1642): READ block 4554728 on xvda3 (32 sectors)
[   74.114844] sudo(1642): READ block 4555512 on xvda3 (8 sectors)
[   74.115077] sudo(1642): READ block 4555464 on xvda3 (48 sectors)
[   74.115088] sudo(1642): READ block 4555520 on xvda3 (128 sectors)
[   74.115882] sudo(1642): READ block 4554760 on xvda3 (224 sectors)
[   74.116752] sudo(1642): READ block 4554088 on xvda3 (256 sectors)
[   74.117607] sudo(1642): READ block 4698648 on xvda3 (96 sectors)
[   74.117617] sudo(1642): READ block 4698752 on xvda3 (80 sectors)
[   74.118240] sudo(1642): READ block 4697960 on xvda3 (104 sectors)
[   74.118249] sudo(1642): READ block 4698072 on xvda3 (56 sectors)
[   74.119226] sudo(1642): READ block 6387888 on xvda3 (72 sectors)
[   74.119241] sudo(1642): READ block 6387968 on xvda3 (176 sectors)
[   74.120114] sudo(1642): READ block 4464344 on xvda3 (72 sectors)
[   74.120125] sudo(1642): READ block 4464424 on xvda3 (80 sectors)
[   74.121159] sudo(1642): READ block 4604240 on xvda3 (16 sectors)
[   74.121575] sudo(1642): READ block 4604200 on xvda3 (16 sectors)
[   74.122456] sudo(1642): READ block 4605512 on xvda3 (24 sectors)
[   74.122506] sudo(1642): READ block 4605544 on xvda3 (72 sectors)
[   74.124282] sudo(1642): READ block 4788048 on xvda3 (112 sectors)
[   74.125644] sudo(1642): READ block 6133472 on xvda3 (256 sectors)
[   74.127479] sudo(1642): READ block 4463304 on xvda3 (72 sectors)
[   74.128533] sudo(1642): READ block 4462008 on xvda3 (8 sectors)
[   74.129467] sudo(1642): READ block 4469048 on xvda3 (32 sectors)
[   74.130330] gnome-terminal-(933): READ block 13466768 on xvda3 (248 sectors)
[   74.130520] sudo(1642): READ block 4473456 on xvda3 (24 sectors)
[   74.131924] gnome-terminal-(933): READ block 13466344 on xvda3 (256 sectors)
[   74.132549] sudo(1642): READ block 4553776 on xvda3 (120 sectors)
[   74.132619] sudo(1642): READ block 4553904 on xvda3 (128 sectors)
[   74.134927] sudo(1642): READ block 4554032 on xvda3 (56 sectors)
[   74.135160] sudo(1642): READ block 4554344 on xvda3 (200 sectors)
[   74.141514] sudo(1642): READ block 4554984 on xvda3 (256 sectors)
[   74.142778] sudo(1642): READ block 4711384 on xvda3 (64 sectors)
[   74.144384] sudo(1642): READ block 4739456 on xvda3 (256 sectors)
[   74.146086] sudo(1642): READ block 5613600 on xvda3 (224 sectors)
[   74.147892] sudo(1642): READ block 5613824 on xvda3 (256 sectors)
[   74.148745] sudo(1642): READ block 5614080 on xvda3 (184 sectors)
[   74.150907] sudo(1642): READ block 4481696 on xvda3 (8 sectors)
[   74.152667] sudo(1642): READ block 9826640 on xvda3 (8 sectors)
[   74.153902] sudo(1642): READ block 9984760 on xvda3 (8 sectors)
[   74.154830] sudo(1642): READ block 9847368 on xvda3 (8 sectors)
[   74.155814] sudo(1642): READ block 8814904 on xvda3 (8 sectors)
[   74.157111] sudo(1642): READ block 10449008 on xvda3 (8 sectors)
[   74.158246] sudo(1642): READ block 4790776 on xvda3 (96 sectors)
[   74.158307] sudo(1642): READ block 4790880 on xvda3 (88 sectors)
[   74.159848] sudo(1642): READ block 4598920 on xvda3 (16 sectors)
[   74.160728] sudo(1642): READ block 5159096 on xvda3 (16 sectors)
[   74.161952] sudo(1642): READ block 9846504 on xvda3 (8 sectors)
[   74.162225] sudo(1642): READ block 8811728 on xvda3 (8 sectors)
[   74.162583] sudo(1642): READ block 4722568 on xvda3 (8 sectors)
[   74.162591] sudo(1642): READ block 4722584 on xvda3 (8 sectors)
[   74.163036] sudo(1642): READ block 4723712 on xvda3 (24 sectors)
[   74.163429] sudo(1642): READ block 4797488 on xvda3 (16 sectors)
[   74.163439] sudo(1642): READ block 4797512 on xvda3 (64 sectors)
[   74.164441] sudo(1642): READ block 4718168 on xvda3 (24 sectors)
[   74.165113] sudo(1642): READ block 4722944 on xvda3 (16 sectors)
[   74.165451] sudo(1642): READ block 4727808 on xvda3 (88 sectors)
[   74.166055] sudo(1642): READ block 4722296 on xvda3 (8 sectors)
[   74.166543] sudo(1642): READ block 8811696 on xvda3 (8 sectors)
[   74.166764] sudo(1642): READ block 8784704 on xvda3 (8 sectors)
[   74.167145] audit: type=1101 audit(1535721367.687:78): pid=1642 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   74.167217] audit: type=1123 audit(1535721367.687:79): pid=1642 uid=1000 auid=1000 ses=1 msg='cwd="/home/user" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D30 terminal=pts/0 res=success'
[   74.167255] sudo(1642): READ block 8811808 on xvda3 (8 sectors)
[   74.167642] audit: type=1110 audit(1535721367.687:80): pid=1642 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   74.167658] sudo(1642): READ block 8811784 on xvda3 (8 sectors)
[   74.168058] sudo(1642): READ block 9985296 on xvda3 (8 sectors)
[   74.169066] systemd-journal(290): dirtied inode 14849 (exe) on proc
[   74.169133] audit: type=1105 audit(1535721367.689:81): pid=1642 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   74.169364] sudo(1645): READ block 5274248 on xvda3 (32 sectors)
[   74.169917] sysctl(1645): READ block 5274280 on xvda3 (16 sectors)
[   74.170317] sysctl(1645): READ block 5274088 on xvda3 (32 sectors)
[   74.170677] sysctl(1645): READ block 5274208 on xvda3 (8 sectors)
[   74.170986] sysctl(1645): READ block 5274120 on xvda3 (88 sectors)
[   74.170997] sysctl(1645): READ block 5274216 on xvda3 (32 sectors)
[   74.171631] sysctl(1645): READ block 6130728 on xvda3 (64 sectors)
[   74.171646] sysctl(1645): READ block 6130800 on xvda3 (120 sectors)
[   74.172421] sysctl(1645): READ block 4596040 on xvda3 (64 sectors)
[   74.172859] sysctl(1645): READ block 4763312 on xvda3 (72 sectors)
[   74.173318] sysctl(1645): READ block 5287984 on xvda3 (24 sectors)
[   74.173333] sysctl(1645): READ block 5288016 on xvda3 (152 sectors)
[   74.174593] sysctl(1645): READ block 4750064 on xvda3 (128 sectors)
[   74.175353] sysctl(1645): READ block 5285776 on xvda3 (24 sectors)
[   74.176470] sysctl(1645): dirtied inode 21940 (block_dump) on proc

$ ./showallblocks |grep 'path :'|sort -u
path : /etc/group
path : /etc/ld.so.cache
path : /etc/login.defs
path : /etc/nsswitch.conf
path : /etc/pam.d/other
path : /etc/pam.d/sudo
path : /etc/pam.d/system-auth
path : /etc/passwd
path : /etc/security/limits.conf
path : /etc/security/limits.d/90-qubes-gui.conf
path : /etc/security/pam_env.conf
path : /etc/sudoers
path : /etc/sudoers.d/qt_x11_no_mitshm
path : /etc/sudoers.d/qubes
path : /etc/sudoers.d/qubes-input-trigger
path : /usr/bin/bash
path : /usr/bin/cat
path : /usr/bin/stress
path : /usr/bin/sudo
path : /usr/lib64/ld-2.27.so
path : /usr/lib64/libaudit.so.1.0.0
path : /usr/lib64/libblkid.so.1.1.0
path : /usr/lib64/libc-2.27.so
path : /usr/lib64/libcap-ng.so.0.0.0
path : /usr/lib64/libcap.so.2.25
path : /usr/lib64/libcom_err.so.2.1
path : /usr/lib64/libcrack.so.2.9.0
path : /usr/lib64/libcrypto.so.1.1.0h
path : /usr/lib64/libdl-2.27.so
path : /usr/lib64/libgcc_s-8-20180712.so.1
path : /usr/lib64/libgcrypt.so.20.2.3
path : /usr/lib64/libgpg-error.so.0.24.2
path : /usr/lib64/libgssapi_krb5.so.2.2
path : /usr/lib64/libk5crypto.so.3.1
path : /usr/lib64/libkrb5.so.3.3
path : /usr/lib64/libkrb5support.so.0.1
path : /usr/lib64/liblber-2.4.so.2.10.9
path : /usr/lib64/libldap-2.4.so.2.10.9
path : /usr/lib64/liblzma.so.5.2.4
path : /usr/lib64/libmount.so.1.1.0
path : /usr/lib64/libnspr4.so
path : /usr/lib64/libnss3.so
path : /usr/lib64/libnss_files-2.27.so
path : /usr/lib64/libnss_systemd.so.2
path : /usr/lib64/libnssutil3.so
path : /usr/lib64/libpam_misc.so.0.82.1
path : /usr/lib64/libpam.so.0.84.2
path : /usr/lib64/libpcre2-8.so.0.7.0
path : /usr/lib64/libplc4.so
path : /usr/lib64/libplds4.so
path : /usr/lib64/libprocps.so.6.0.0
path : /usr/lib64/libpthread-2.27.so
path : /usr/lib64/libresolv-2.27.so
path : /usr/lib64/librt-2.27.so
path : /usr/lib64/libsasl2.so.3.0.0
path : /usr/lib64/libselinux.so.1
path : /usr/lib64/libsmime3.so
path : /usr/lib64/libssl3.so
path : /usr/lib64/libssl.so.1.1.0h
path : /usr/lib64/libsystemd.so.0.22.0
path : /usr/lib64/libtinfo.so.6.1
path : /usr/lib64/libtirpc.so.3.0.0
path : /usr/lib64/libutil-2.27.so
path : /usr/lib64/security/pam_env.so
path : /usr/lib64/security/pam_limits.so
path : /usr/lib64/security/pam_systemd.so
path : /usr/lib64/security/pam_unix.so
path : /usr/libexec/sudo/libsudo_util.so.0.0.0
path : /usr/libexec/sudo/sudoers.so
path : /usr/sbin/sysctl
path : /usr/share/fonts/dejavu/DejaVuSansMono-Bold.ttf
path : /usr/share/fonts/dejavu/DejaVuSansMono.ttf
path : /usr/share/locale/locale.alias
path : /usr/share/zoneinfo/[censored hihihi]
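As a sanity check on the "score 763" the OOM killer reported for the stress worker above: the badness heuristic in mm/oom_kill.c of this kernel era is roughly rss + swap entries + pagetable bytes / PAGE_SIZE, normalized to 1000 over usable RAM plus swap (no swap here). Plugging in the numbers from the process table and Mem-Info dump reproduces the reported score — a back-of-envelope check, not the exact kernel code path:

```python
# Approximate oom_badness() from mm/oom_kill.c (circa 4.18):
#   points = rss + swapents + pgtables_bytes / PAGE_SIZE
#   score  = points * 1000 // totalpages   (oom_score_adj is 0 here)
# Inputs come from the dmesg dump above: stress pid 1623 had
# rss=759595 pages, pgtables_bytes=6156288, swapents=0;
# totalpages = "pages RAM" minus "pages reserved".

PAGE_SIZE = 4096

def oom_score(rss_pages, swapents, pgtables_bytes, total_pages):
    points = rss_pages + swapents + pgtables_bytes // PAGE_SIZE
    return points * 1000 // total_pages

total = 1023903 - 27058
print(oom_score(759595, 0, 6156288, total))  # 763, matching the log
```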
@constantoverride

Owner Author

commented Aug 31, 2018

No freeze, but over 300 MB of MemFree remained with this:

$ sudo sysctl -w vm.block_dump=1 && time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; echo $?; sudo sysctl -w vm.block_dump=0
vm.block_dump = 1
stress: info: [18713] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [18713] successful run completed in 10s

real	0m10.059s
user	0m9.490s
sys	0m0.498s
0
vm.block_dump = 0

dmesg:

[  726.262790] oom_reaper: reaped process 18594 (cc1plus), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[  910.359934] audit: type=1130 audit(1535722203.879:188): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[  910.359972] audit: type=1131 audit(1535722203.879:189): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'


[ 4129.370587] audit: type=1101 audit(1535725422.890:190): pid=18709 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4129.370639] audit: type=1123 audit(1535725422.890:191): pid=18709 uid=1000 auid=1000 ses=1 msg='cwd="/home/user/rpmbuild" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D31 terminal=pts/0 res=success'
[ 4129.370709] audit: type=1110 audit(1535725422.890:192): pid=18709 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4129.371661] audit: type=1105 audit(1535725422.891:193): pid=18709 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4129.374586] audit: type=1106 audit(1535725422.894:194): pid=18709 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4129.374618] audit: type=1104 audit(1535725422.894:195): pid=18709 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4129.375603] bash(18712): READ block 4543096 on xvda3 (32 sectors)
[ 4129.376329] awk(18712): READ block 4544280 on xvda3 (176 sectors)
[ 4129.377039] awk(18712): READ block 4543056 on xvda3 (32 sectors)
[ 4129.377367] awk(18712): READ block 4791640 on xvda3 (32 sectors)
[ 4129.377998] awk(18712): READ block 4792176 on xvda3 (8 sectors)
[ 4129.378292] awk(18712): READ block 4792112 on xvda3 (64 sectors)
[ 4129.378301] awk(18712): READ block 4792184 on xvda3 (96 sectors)
[ 4129.378924] awk(18712): READ block 4791672 on xvda3 (224 sectors)
[ 4129.379738] awk(18712): READ block 4636424 on xvda3 (128 sectors)
[ 4129.380781] awk(18712): READ block 4543128 on xvda3 (256 sectors)
[ 4129.380807] awk(18712): READ block 4791896 on xvda3 (216 sectors)
[ 4129.381111] awk(18712): READ block 4543384 on xvda3 (256 sectors)
[ 4129.382134] awk(18712): READ block 4544008 on xvda3 (256 sectors)
[ 4129.383169] awk(18712): READ block 4543640 on xvda3 (232 sectors)
[ 4129.383985] awk(18712): READ block 4543088 on xvda3 (8 sectors)
[ 4129.384050] awk(18712): READ block 4543872 on xvda3 (136 sectors)
[ 4129.384682] awk(18712): READ block 4544264 on xvda3 (16 sectors)
[ 4139.439081] audit: type=1101 audit(1535725432.959:196): pid=18911 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4139.439132] audit: type=1123 audit(1535725432.959:197): pid=18911 uid=1000 auid=1000 ses=1 msg='cwd="/home/user/rpmbuild" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D30 terminal=pts/0 res=success'
[ 4139.439233] audit: type=1110 audit(1535725432.959:198): pid=18911 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4139.440291] systemd-journal(290): dirtied inode 28198 (exe) on proc
[ 4139.440308] audit: type=1105 audit(1535725432.960:199): pid=18911 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4139.441914] audit: type=1106 audit(1535725432.961:200): pid=18911 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4139.441944] audit: type=1104 audit(1535725432.961:201): pid=18911 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'

$ ./showallblocks |grep 'path :'|sort -u
path : /usr/bin/gawk
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libsigsegv.so.2.0.4
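For reference, a hedged sketch of how a `block_dump` line can be resolved to a path like the ones above (the actual `showallblocks` script may work differently). The number in a line such as `awk(10320): READ block 4543096 on xvda3 (32 sectors)` is a 512-byte sector; on an ext4 filesystem with 4096-byte blocks, the filesystem block is the sector number divided by 8:

```shell
# Sector number taken from a dmesg block_dump READ line:
sector=4543096
# 4096-byte filesystem block = 512-byte sector / 8 (assumes 4k fs blocks):
fsblock=$((sector / 8))
echo "fs block: $fsblock"
# From there, debugfs maps block -> inode -> path (needs root; the
# device /dev/xvda3 is taken from the dmesg lines):
#   sudo debugfs -R "icheck $fsblock" /dev/xvda3
#   sudo debugfs -R "ncheck <inode-from-icheck>" /dev/xvda3
```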
@constantoverride

commented Aug 31, 2018

Tiny freeze (it seemed about 1 second long; could that be the sys 0m1.025s below? if so, I need to revisit something from above) with this:

$ sudo sysctl -w vm.block_dump=1 && time stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 2 --timeout 10s; echo $?; sudo sysctl -w vm.block_dump=0
vm.block_dump = 1
stress: info: [22553] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [22553] (415) <-- worker 22555 got signal 9
stress: WARN: [22553] (417) now reaping child worker processes
stress: FAIL: [22553] (451) failed run completed in 0s

real	0m0.656s
user	0m0.142s
sys	0m1.025s
1
vm.block_dump = 0
dmesg:
[ 4232.008829] audit: type=1104 audit(1535725525.528:225): pid=20935 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4305.562558] audit: type=1101 audit(1535725599.082:226): pid=22549 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4305.562933] audit: type=1123 audit(1535725599.082:227): pid=22549 uid=1000 auid=1000 ses=1 msg='cwd="/home/user" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D31 terminal=pts/0 res=success'
[ 4305.562965] audit: type=1110 audit(1535725599.082:228): pid=22549 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4305.563863] audit: type=1105 audit(1535725599.083:229): pid=22549 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4305.565470] audit: type=1106 audit(1535725599.085:230): pid=22549 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4305.565512] audit: type=1104 audit(1535725599.085:231): pid=22549 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4305.924206] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[ 4305.924227] stress cpuset=/ mems_allowed=0
[ 4305.924242] CPU: 11 PID: 22554 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[ 4305.924255] Call Trace:
[ 4305.924263]  dump_stack+0x63/0x83
[ 4305.924272]  dump_header+0x6e/0x285
[ 4305.924281]  oom_kill_process+0x23c/0x450
[ 4305.924288]  out_of_memory+0x147/0x590
[ 4305.924296]  __alloc_pages_slowpath+0x134c/0x1590
[ 4305.924308]  __alloc_pages_nodemask+0x302/0x3c0
[ 4305.924317]  alloc_pages_vma+0xac/0x4f0
[ 4305.924327]  do_anonymous_page+0x105/0x3f0
[ 4305.924334]  __handle_mm_fault+0xbc9/0xf10
[ 4305.924344]  handle_mm_fault+0x102/0x2c0
[ 4305.924352]  __do_page_fault+0x294/0x540
[ 4305.924361]  ? __audit_syscall_exit+0x2bf/0x3e0
[ 4305.924370]  do_page_fault+0x38/0x120
[ 4305.924378]  ? page_fault+0x8/0x30
[ 4305.924386]  page_fault+0x1e/0x30
[ 4305.924394] RIP: 0033:0x62ab6ec87dd0
[ 4305.924401] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[ 4305.924450] RSP: 002b:00007ffc6a770590 EFLAGS: 00010206
[ 4305.924459] RAX: 000000004dab3000 RBX: 00007c58fe32f010 RCX: 00007c58fe32f010
[ 4305.924473] RDX: 0000000000000001 RSI: 0000000083439000 RDI: 0000000000000000
[ 4305.924485] RBP: 000062ab6ec88bb4 R08: 00000000ffffffff R09: 0000000000000000
[ 4305.924497] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[ 4305.924511] R13: 0000000000000002 R14: 0000000000001000 R15: 0000000083438000
[ 4305.924524] Mem-Info:
[ 4305.924531] active_anon:673319 inactive_anon:8840 isolated_anon:0
                active_file:214693 inactive_file:41 isolated_file:0
                unevictable:15398 dirty:0 writeback:0 unstable:0
                slab_reclaimable:28793 slab_unreclaimable:13872
                mapped:23773 shmem:10760 pagetables:3577 bounce:0
                free:15007 free_pcp:88 free_cma:0
[ 4305.924583] Node 0 active_anon:2693276kB inactive_anon:35360kB active_file:858772kB inactive_file:164kB unevictable:61592kB isolated(anon):0kB isolated(file):0kB mapped:95092kB dirty:0kB writeback:0kB shmem:43040kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 4305.924623] Node 0 DMA free:15660kB min:176kB low:1764kB high:3352kB active_anon:144kB inactive_anon:0kB active_file:100kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 4305.924668] lowmem_reserve[]: 0 3876 3876 3876 3876
[ 4305.924678] Node 0 DMA32 free:44368kB min:44876kB low:442020kB high:839164kB active_anon:2692812kB inactive_anon:35360kB active_file:858672kB inactive_file:460kB unevictable:61592kB writepending:0kB present:4079616kB managed:3971476kB mlocked:61592kB kernel_stack:4832kB pagetables:14308kB bounce:0kB free_pcp:352kB local_pcp:112kB free_cma:0kB
[ 4305.924725] lowmem_reserve[]: 0 0 0 0 0
[ 4305.924733] Node 0 DMA: 1*4kB (M) 1*8kB (M) 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 2*256kB (UM) 1*512kB (M) 2*1024kB (UM) 0*2048kB 3*4096kB (M) = 15660kB
[ 4305.924762] Node 0 DMA32: 691*4kB (UE) 490*8kB (UE) 308*16kB (UME) 199*32kB (UM) 132*64kB (UM) 59*128kB (UM) 37*256kB (U) 1*512kB (U) 0*1024kB 0*2048kB 0*4096kB = 43964kB
[ 4305.924790] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 4305.924805] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 4305.924819] 225468 total pagecache pages
[ 4305.924828] 1023903 pages RAM
[ 4305.924834] 0 pages HighMem/MovableOnly
[ 4305.924840] 27058 pages reserved
[ 4305.924849] 0 pages cma reserved
[ 4305.924855] 0 pages hwpoisoned
[ 4305.924863] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[ 4305.924883] [  290]     0   290    27001     3226   212992        0             0 systemd-journal
[ 4305.924896] [  298]     0   298    30805      551   163840        0             0 qubesdb-daemon
[ 4305.924913] [  327]     0   327    23557     1938   204800        0         -1000 systemd-udevd
[ 4305.924927] [  453]     0   453    19356     1535   188416        0             0 systemd-logind
[ 4305.924940] [  454]    81   454    13254     1166   151552        0          -900 dbus-daemon
[ 4305.924952] [  461]     0   461     3042     1196    69632        0             0 haveged
[ 4305.924964] [  463]     0   463    10243       73   118784        0             0 meminfo-writer
[ 4305.924976] [  472]     0   472    34209      657   180224        0             0 xl
[ 4305.924988] [  484]     0   484    18947     1146   200704        0             0 qubes-gui
[ 4305.925001] [  489]     0   489    16536      830   172032        0             0 qrexec-agent
[ 4305.925028] [  491]     0   491    52863      400    69632        0             0 agetty
[ 4305.925039] [  492]     0   492    52775      538    69632        0             0 agetty
[ 4305.925052] [  572]     0   572    73994     1314   245760        0             0 su
[ 4305.925064] [  578]  1000   578    21933     2030   208896        0             0 systemd
[ 4305.925074] [  579]  1000   579    34788      610   286720        0             0 (sd-pam)
[ 4305.925085] [  584]  1000   584    54160      837    77824        0             0 bash
[ 4305.925097] [  605]  1000   605     3500      287    77824        0             0 xinit
[ 4305.925108] [  606]  1000   606   312404    26065   720896        0             0 Xorg
[ 4305.925120] [  621]  1000   621    53597      756    81920        0             0 qubes-session
[ 4305.925136] [  626]  1000   626    13194     1120   151552        0             0 dbus-daemon
[ 4306.124153] [  638]  1000   638     7233      118    94208        0             0 ssh-agent
[ 4306.124170] [  656]  1000   656    16562      578   172032        0             0 qrexec-client-v
[ 4306.124189] [  734]  1000   734    48107     1303   147456        0             0 dconf-service
[ 4306.124207] [  751]  1000   751   428399    12273   827392        0             0 gsd-xsettings
[ 4306.124231] [  755]  1000   755   122406     1540   192512        0             0 gnome-keyring-d
[ 4306.124250] [  759]  1000   759   120207     1362   172032        0             0 agent
[ 4306.124265] [  760]  1000   760    62744     2938   155648        0             0 icon-sender
[ 4306.124283] [  775]  1000   775   438415    13960   897024        0             0 nm-applet
[ 4306.124316] [  777]  1000   777   128956     2123   401408        0             0 pulseaudio
[ 4306.124348] [  778]   172   778    47723      821   147456        0             0 rtkit-daemon
[ 4306.124382] [  785]   998   785   657134     5390   417792        0             0 polkitd
[ 4306.124412] [  797]  1000   797    16528      101   167936        0             0 qrexec-fork-ser
[ 4306.124430] [  800]  1000   800    52238      181    69632        0             0 sleep
[ 4306.124446] [  914]  1000   914    87397     1533   180224        0             0 at-spi-bus-laun
[ 4306.124464] [  919]  1000   919    13134      972   159744        0             0 dbus-daemon
[ 4306.124500] [  923]  1000   923    56364     1539   208896        0             0 at-spi2-registr
[ 4306.124530] [  930]  1000   930   123835     1753   221184        0             0 gvfsd
[ 4306.124545] [  933]  1000   933   208973    11673   606208        0             0 gnome-terminal-
[ 4306.124563] [  939]  1000   939    89299     1333   192512        0             0 gvfsd-fuse
[ 4306.124581] [  953]  1000   953   206359     2769   290816        0             0 xdg-desktop-por
[ 4306.124598] [  958]  1000   958   173633     1512   200704        0             0 xdg-document-po
[ 4306.124616] [  961]  1000   961   117667     1269   167936        0             0 xdg-permission-
[ 4306.124634] [  971]  1000   971   193293     5053   475136        0             0 xdg-desktop-por
[ 4306.124652] [  979]  1000   979    54291     1077    77824        0             0 bash
[ 4306.124669] [ 1018]  1000  1018    54291     1068    86016        0             0 bash
[ 4306.124685] [ 1043]  1000  1043    53987      752    73728        0             0 watch
[ 4306.124701] [ 1188]  1000  1188    54291     1080    81920        0             0 bash
[ 4306.124717] [ 1239]  1000  1239    53876      273    77824        0             0 dmesg
[ 4306.124733] [22553]  1000 22553     2000      277    61440        0             0 stress
[ 4306.124750] [22554]  1000 22554   539657   318187  2621440        0             0 stress
[ 4306.124765] [22555]  1000 22555   539657   325975  2686976        0             0 stress
[ 4306.124779] Out of memory: Kill process 22555 (stress) score 327 or sacrifice child
[ 4306.124795] Killed process 22555 (stress) total-vm:2158628kB, anon-rss:1303688kB, file-rss:212kB, shmem-rss:0kB
[ 4306.166999] oom_reaper: reaped process 22555 (stress), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[ 4306.228694] audit: type=1101 audit(1535725599.748:232): pid=22564 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4306.228754] audit: type=1123 audit(1535725599.748:233): pid=22564 uid=1000 auid=1000 ses=1 msg='cwd="/home/user" cmd=73797363746C202D7720766D2E626C6F636B5F64756D703D30 terminal=pts/0 res=success'
[ 4306.228809] audit: type=1110 audit(1535725599.748:234): pid=22564 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[ 4306.230039] systemd-journal(290): dirtied inode 59829 (exe) on proc
[ 4306.230088] audit: type=1105 audit(1535725599.750:235): pid=22564 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'


@constantoverride

commented Aug 31, 2018

I reran the scripts from this question as they are, with my malloc-stall watchdog patches applied (on kernel 4.18.5) (I know I'm repeating myself here; I meant to post this as a comment but decided to put it here instead), and there is no mention of stalling (other than when installing) in dmesg after a few minutes of letting them run. But I do notice a slight freeze (maybe 200ms? guessing) in the refresh of the gnome-terminal that's running `watch -n0.1 -d cat /proc/meminfo`, every time MemFree reaches a low point, right before the OOM-killer does its job.

btw that's: `$ while true; do date; nice -20 stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; sleep 5s; done` in a terminal, which looks like this:

Fri Aug 31 23:49:43 CEST 2018
stress: info: [976] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [976] (415) <-- worker 977 got signal 9
stress: WARN: [976] (417) now reaping child worker processes
stress: FAIL: [976] (451) failed run completed in 1s
Fri Aug 31 23:49:49 CEST 2018
stress: info: [1133] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1133] (415) <-- worker 1134 got signal 9
stress: WARN: [1133] (417) now reaping child worker processes
stress: FAIL: [1133] (451) failed run completed in 1s
Fri Aug 31 23:49:55 CEST 2018
stress: info: [1277] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1277] (415) <-- worker 1278 got signal 9
stress: WARN: [1277] (417) now reaping child worker processes
stress: FAIL: [1277] (451) failed run completed in 1s
Fri Aug 31 23:50:01 CEST 2018
stress: info: [1425] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1425] (415) <-- worker 1426 got signal 9
stress: WARN: [1425] (417) now reaping child worker processes
stress: FAIL: [1425] (451) failed run completed in 1s
Fri Aug 31 23:50:07 CEST 2018
stress: info: [1569] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1569] (415) <-- worker 1570 got signal 9
stress: WARN: [1569] (417) now reaping child worker processes
stress: FAIL: [1569] (451) failed run completed in 1s
Fri Aug 31 23:50:13 CEST 2018
stress: info: [1713] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1713] (415) <-- worker 1714 got signal 9
stress: WARN: [1713] (417) now reaping child worker processes
stress: FAIL: [1713] (451) failed run completed in 1s
Fri Aug 31 23:50:19 CEST 2018
stress: info: [1859] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1859] (415) <-- worker 1860 got signal 9
stress: WARN: [1859] (417) now reaping child worker processes
stress: FAIL: [1859] (451) failed run completed in 1s
Fri Aug 31 23:50:25 CEST 2018
stress: info: [2007] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [2007] (415) <-- worker 2008 got signal 9
stress: WARN: [2007] (417) now reaping child worker processes
stress: FAIL: [2007] (451) failed run completed in 1s
and in another terminal, first: `$ sudo nice -n -19 bash`, then `# while true; do NS=$(date '+%N' | sed 's/^0*//'); let "S=998000000 - $NS"; S=$(( S > 0 ? S : 0)); LC_ALL=C sleep "0.$S"; date --iso=ns; done` and its output looks like [this](https://gist.github.com/constantoverride/662a4db53799b1379e0b4c4b8eea8123)
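That loop aligns each `date --iso=ns` print to roughly one per second, so a missing or late timestamp in its output reveals a system-wide stall. A minimal stall-detector sketch in the same spirit (assumes GNU `date` with `%N` nanosecond support; the 500 ms threshold is an arbitrary choice for this sketch):

```shell
# Each iteration should take ~100ms; a much larger gap between iterations
# means the shell itself was stalled (e.g. paged out during disk thrashing).
prev=$(date +%s%N)
stalls=0
for i in 1 2 3 4 5; do
  sleep 0.1
  now=$(date +%s%N)
  gap_ms=$(( (now - prev) / 1000000 ))
  if [ "$gap_ms" -gt 500 ]; then
    stalls=$((stalls + 1))
    echo "stall: ${gap_ms}ms gap"
  fi
  prev=$now
done
echo "done ($stalls stalls detected)"
```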

So I decided to see which files are being re-read, since why else would the freeze be happening, amirite?! :D

dmesg
[  510.290583] bash(10320): READ block 4543096 on xvda3 (32 sectors)
[  510.291503] awk(10320): READ block 4544280 on xvda3 (176 sectors)
[  510.292406] awk(10320): READ block 4543056 on xvda3 (32 sectors)
[  510.292811] awk(10320): READ block 4791640 on xvda3 (32 sectors)
[  510.293253] awk(10320): READ block 4792176 on xvda3 (8 sectors)
[  510.293503] awk(10320): READ block 4792112 on xvda3 (64 sectors)
[  510.293522] awk(10320): READ block 4792184 on xvda3 (96 sectors)
[  510.294363] awk(10320): READ block 4791672 on xvda3 (224 sectors)
[  510.295168] awk(10320): READ block 4635648 on xvda3 (32 sectors)
[  510.295593] awk(10320): READ block 4636416 on xvda3 (8 sectors)
[  510.295919] awk(10320): READ block 4636296 on xvda3 (120 sectors)
[  510.295951] awk(10320): READ block 4636424 on xvda3 (128 sectors)
[  510.297076] awk(10320): READ block 4635680 on xvda3 (224 sectors)
[  510.298213] awk(10320): READ block 4543128 on xvda3 (256 sectors)
[  510.298273] awk(10320): READ block 4791896 on xvda3 (216 sectors)
[  510.298379] awk(10320): READ block 4635904 on xvda3 (256 sectors)
[  510.298870] awk(10320): READ block 4543384 on xvda3 (256 sectors)
[  510.299491] awk(10320): READ block 4544008 on xvda3 (256 sectors)
[  510.300987] awk(10320): READ block 4543640 on xvda3 (232 sectors)
[  510.302032] awk(10320): READ block 4543088 on xvda3 (8 sectors)
[  510.302143] awk(10320): READ block 4543872 on xvda3 (136 sectors)
[  510.302861] awk(10320): READ block 4544264 on xvda3 (16 sectors)
[  510.303930] bash(10321): READ block 5437576 on xvda3 (32 sectors)
[  510.304436] nice(10321): READ block 5437608 on xvda3 (56 sectors)
[  511.007425] stress invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  511.007443] stress cpuset=/ mems_allowed=0
[  511.007450] CPU: 6 PID: 10322 Comm: stress Tainted: G           O      4.18.5-5.pvops.qubes.x86_64 #1
[  511.007462] Call Trace:
[  511.007469]  dump_stack+0x63/0x83
[  511.007476]  dump_header+0x6e/0x285
[  511.007483]  oom_kill_process+0x23c/0x450
[  511.007489]  out_of_memory+0x147/0x590
[  511.007495]  __alloc_pages_slowpath+0x134c/0x1590
[  511.007503]  __alloc_pages_nodemask+0x302/0x3c0
[  511.007510]  alloc_pages_vma+0xac/0x4f0
[  511.007517]  do_anonymous_page+0x105/0x3f0
[  511.007524]  __handle_mm_fault+0xbc9/0xf10
[  511.007530]  handle_mm_fault+0x102/0x2c0
[  511.007541]  __do_page_fault+0x294/0x540
[  511.007548]  do_page_fault+0x38/0x120
[  511.007554]  ? page_fault+0x8/0x30
[  511.007560]  page_fault+0x1e/0x30
[  511.007566] RIP: 0033:0x5d8e11196dd0
[  511.007571] Code: 0f 84 3c 02 00 00 8b 54 24 0c 31 c0 85 d2 0f 94 c0 89 04 24 41 83 fd 02 0f 8f f6 00 00 00 31 c0 4d 85 ff 7e 11 0f 1f 44 00 00 <c6> 04 03 5a 4c 01 f0 49 39 c7 7f f4 4d 85 e4 0f 84 e3 01 00 00 7e 
[  511.007610] RSP: 002b:00007ffcea8642f0 EFLAGS: 00010206
[  511.007617] RAX: 00000000b760b000 RBX: 00007d234b189010 RCX: 00007d234b189010
[  511.007627] RDX: 0000000000000001 RSI: 00000000d4633000 RDI: 0000000000000000
[  511.007636] RBP: 00005d8e11197bb4 R08: 00000000ffffffff R09: 0000000000000000
[  511.007646] R10: 0000000000000022 R11: 0000000000000246 R12: ffffffffffffffff
[  511.007656] R13: 0000000000000002 R14: 0000000000001000 R15: 00000000d4632000
[  511.007666] Mem-Info:
[  511.007672] active_anon:781526 inactive_anon:4691 isolated_anon:0
                active_file:132622 inactive_file:0 isolated_file:0
                unevictable:13093 dirty:0 writeback:0 unstable:0
                slab_reclaimable:9336 slab_unreclaimable:12994
                mapped:24420 shmem:4879 pagetables:3924 bounce:0
                free:15062 free_pcp:229 free_cma:0
[  511.007713] Node 0 active_anon:3126104kB inactive_anon:18764kB active_file:530488kB inactive_file:0kB unevictable:52372kB isolated(anon):0kB isolated(file):0kB mapped:97680kB dirty:0kB writeback:0kB shmem:19516kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[  511.007746] Node 0 DMA free:15680kB min:176kB low:220kB high:264kB active_anon:216kB inactive_anon:0kB active_file:8kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15904kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  511.007777] lowmem_reserve[]: 0 3876 3876 3876 3876
[  511.007785] Node 0 DMA32 free:44568kB min:44876kB low:56092kB high:67308kB active_anon:3125860kB inactive_anon:18764kB active_file:530480kB inactive_file:0kB unevictable:52372kB writepending:0kB present:4079616kB managed:3971476kB mlocked:52372kB kernel_stack:4896kB pagetables:15696kB bounce:0kB free_pcp:916kB local_pcp:232kB free_cma:0kB
[  511.007821] lowmem_reserve[]: 0 0 0 0 0
[  511.007828] Node 0 DMA: 0*4kB 0*8kB 0*16kB 2*32kB (UM) 2*64kB (U) 1*128kB (U) 2*256kB (UM) 1*512kB (M) 2*1024kB (UM) 0*2048kB 3*4096kB (M) = 15680kB
[  511.007849] Node 0 DMA32: 831*4kB (UME) 123*8kB (UME) 17*16kB (U) 183*32kB (UME) 160*64kB (UME) 79*128kB (UME) 36*256kB (UME) 1*512kB (U) 0*1024kB 0*2048kB 1*4096kB (M) = 44612kB
[  511.007876] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  511.007888] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  511.007919] 137499 total pagecache pages
[  511.007924] 1023903 pages RAM
[  511.007943] 0 pages HighMem/MovableOnly
[  511.007948] 27058 pages reserved
[  511.007976] 0 pages cma reserved
[  511.007982] 0 pages hwpoisoned
[  511.008031] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  511.008048] [  283]     0   283    27002     3079   204800        0             0 systemd-journal
[  511.008061] [  308]     0   308    30805      560   163840        0             0 qubesdb-daemon
[  511.008074] [  315]     0   315    23701     2109   212992        0         -1000 systemd-udevd
[  511.008086] [  435]    81   435    13254     1158   147456        0          -900 dbus-daemon
[  511.008098] [  439]     0   439    10243       73   118784        0             0 meminfo-writer
[  511.008110] [  442]     0   442    19356     1509   192512        0             0 systemd-logind
[  511.008122] [  443]     0   443     3042     1207    65536        0             0 haveged
[  511.008133] [  459]     0   459    34209      652   180224        0             0 xl
[  511.008145] [  463]     0   463    18919      918   192512        0             0 qubes-gui
[  511.008156] [  474]     0   474    16536      820   172032        0             0 qrexec-agent
[  511.008168] [  490]     0   490    52775      511    69632        0             0 agetty
[  511.008179] [  491]     0   491    52863      379    65536        0             0 agetty
[  511.008189] [  562]     0   562    73994     1315   241664        0             0 su
[  511.008201] [  576]  1000   576    21933     2065   212992        0             0 systemd
[  511.008212] [  577]  1000   577    34785      620   286720        0             0 (sd-pam)
[  511.008223] [  582]  1000   582    54160      843    90112        0             0 bash
[  511.008234] [  603]  1000   603     3500      279    77824        0             0 xinit
[  511.008245] [  604]  1000   604   316641    26543   704512        0             0 Xorg
[  511.008255] [  619]  1000   619    53597      755    81920        0             0 qubes-session
[  511.008267] [  624]  1000   624    13194     1100   151552        0             0 dbus-daemon
[  511.207273] [  636]  1000   636     7233      117    98304        0             0 ssh-agent
[  511.207298] [  654]  1000   654    16562      554   176128        0             0 qrexec-client-v
[  511.207315] [  706]  1000   706    48107     1268   143360        0             0 dconf-service
[  511.207333] [  743]  1000   743    62744     2926   147456        0             0 icon-sender
[  511.207350] [  748]  1000   748   428398    12423   831488        0             0 gsd-xsettings
[  511.207376] [  754]  1000   754   138640     1374   184320        0             0 agent
[  511.207388] [  755]  1000   755   122379     1530   188416        0             0 gnome-keyring-d
[  511.207400] [  775]  1000   775   438415    13947   913408        0             0 nm-applet
[  511.207411] [  778]  1000   778   128956     2155   401408        0             0 pulseaudio
[  511.207423] [  779]   172   779    47723      832   139264        0             0 rtkit-daemon
[  511.207435] [  785]   998   785   657134     5394   417792        0             0 polkitd
[  511.207446] [  796]  1000   796    16528      101   167936        0             0 qrexec-fork-ser
[  511.207458] [  799]  1000   799    52238      182    65536        0             0 sleep
[  511.207469] [  913]  1000   913    87397     1622   184320        0             0 at-spi-bus-laun
[  511.207481] [  918]  1000   918    13134      969   151552        0             0 dbus-daemon
[  511.207494] [  922]  1000   922    56364     1519   212992        0             0 at-spi2-registr
[  511.207505] [  928]  1000   928   123835     1761   212992        0             0 gvfsd
[  511.207532] [  930]  1000   930   212520    11714   614400        0             0 gnome-terminal-
[  511.207545] [  940]  1000   940    89299     1326   184320        0             0 gvfsd-fuse
[  511.207557] [  951]  1000   951   169493     2819   282624        0             0 xdg-desktop-por
[  511.207569] [  956]  1000   956   157249     1508   188416        0             0 xdg-document-po
[  511.207595] [  960]  1000   960   117667     1253   167936        0             0 xdg-permission-
[  511.207607] [  970]  1000   970   193295     5087   462848        0             0 xdg-desktop-por
[  511.207619] [  978]  1000   978    54266     1042    86016        0             0 bash
[  511.207633] [ 1022]  1000  1022    54266     1047    90112        0             0 bash
[  511.207644] [ 1053]     0  1053    80041     1654   282624        0             0 sudo
[  511.207654] [ 1054]     0  1054    54160      883    77824        0             0 bash
[  511.207665] [ 1084]  1000  1084    54302     1056    86016        0             0 bash
[  511.207676] [ 2040]  1000  2040    54266     1049    86016        0             0 bash
[  511.207686] [ 2113]  1000  2113    53876      270    77824        0             0 dmesg
[  511.207697] [ 7309]  1000  7309    53989      753    86016        0             0 watch
[  511.207707] [ 9564]  1000  9564    54302     1073    77824        0             0 bash
[  511.207719] [10321]  1000 10321     2000      288    61440        0             0 stress
[  511.207729] [10322]  1000 10322   871939   751176  6082560        0             0 stress
[  511.207743] [10333]     0 10333     1084      203    57344        0             0 sleep
[  511.207753] Out of memory: Kill process 10322 (stress) score 755 or sacrifice child
[  511.207765] Killed process 10322 (stress) total-vm:3487756kB, anon-rss:3004368kB, file-rss:336kB, shmem-rss:0kB
[  511.209042] sh(10343): READ block 8932864 on xvda3 (192 sectors)
[  511.210403] sh(10343): READ block 5494784 on xvda3 (32 sectors)
[  511.210963] cat(10343): READ block 5494816 on xvda3 (64 sectors)
[  511.578536] bash(10351): READ block 5434824 on xvda3 (32 sectors)
[  511.579997] date(10351): READ block 5434896 on xvda3 (160 sectors)
[  511.581787] date(10351): READ block 5434856 on xvda3 (40 sectors)
[  511.582792] date(10351): READ block 5095768 on xvda3 (8 sectors)
[  511.586194] bash(10354): READ block 4617216 on xvda3 (32 sectors)
[  511.587483] sed(10354): READ block 4617304 on xvda3 (160 sectors)
[  511.589105] sed(10354): READ block 4617248 on xvda3 (56 sectors)
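
The per-process READ lines above come from the kernel's vm.block_dump facility (present in kernels of this era; it was removed in 5.12). A minimal way to capture such a trace, assuming a kernel that still has it:

```shell
# Log every block read/write to the kernel log (very noisy!).
sudo sysctl vm.block_dump=1

# ...reproduce the memory pressure, then save the READ lines:
sudo dmesg | grep 'READ block' > dmesg1.log

# Turn it back off:
sudo sysctl vm.block_dump=0
```

(For the dmesg lines to all make it through, printk ratelimiting also has to be off; see the 01_no_dmesg_ratelimiting.conf settings further down.)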


files read
$ ./showallblocks 
vm.block_dump = 0
dmesg block(512 byte sector number): 4543096
actual block(4096 bytes): 567887
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4544280
actual block(4096 bytes): 568035
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4543056
actual block(4096 bytes): 567882
inode: 134400
path : /usr/lib64/libsigsegv.so.2.0.4
--
dmesg block(512 byte sector number): 4791640
actual block(4096 bytes): 598955
inode: 133809
path : /usr/lib64/libreadline.so.7.0
--
dmesg block(512 byte sector number): 4792176
actual block(4096 bytes): 599022
inode: 133809
path : /usr/lib64/libreadline.so.7.0
--
dmesg block(512 byte sector number): 4792112
actual block(4096 bytes): 599014
inode: 133809
path : /usr/lib64/libreadline.so.7.0
--
dmesg block(512 byte sector number): 4792184
actual block(4096 bytes): 599023
inode: 133809
path : /usr/lib64/libreadline.so.7.0
--
dmesg block(512 byte sector number): 4791672
actual block(4096 bytes): 598959
inode: 133809
path : /usr/lib64/libreadline.so.7.0
--
dmesg block(512 byte sector number): 4635648
actual block(4096 bytes): 579456
inode: 134107
path : /usr/lib64/libmpfr.so.4.1.6
--
dmesg block(512 byte sector number): 4636416
actual block(4096 bytes): 579552
inode: 134107
path : /usr/lib64/libmpfr.so.4.1.6
--
dmesg block(512 byte sector number): 4636296
actual block(4096 bytes): 579537
inode: 134107
path : /usr/lib64/libmpfr.so.4.1.6
--
dmesg block(512 byte sector number): 4636424
actual block(4096 bytes): 579553
inode: 134107
path : /usr/lib64/libmpfr.so.4.1.6
--
dmesg block(512 byte sector number): 4635680
actual block(4096 bytes): 579460
inode: 134107
path : /usr/lib64/libmpfr.so.4.1.6
--
dmesg block(512 byte sector number): 4543128
actual block(4096 bytes): 567891
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4791896
actual block(4096 bytes): 598987
inode: 133809
path : /usr/lib64/libreadline.so.7.0
--
dmesg block(512 byte sector number): 4635904
actual block(4096 bytes): 579488
inode: 134107
path : /usr/lib64/libmpfr.so.4.1.6
--
dmesg block(512 byte sector number): 4543384
actual block(4096 bytes): 567923
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4544008
actual block(4096 bytes): 568001
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4543640
actual block(4096 bytes): 567955
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4543088
actual block(4096 bytes): 567886
inode: 134400
path : /usr/lib64/libsigsegv.so.2.0.4
--
dmesg block(512 byte sector number): 4543872
actual block(4096 bytes): 567984
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 4544264
actual block(4096 bytes): 568033
inode: 134403
path : /usr/bin/gawk
--
dmesg block(512 byte sector number): 5437576
actual block(4096 bytes): 679697
inode: 134544
path : /usr/bin/nice
--
dmesg block(512 byte sector number): 5437608
actual block(4096 bytes): 679701
inode: 134544
path : /usr/bin/nice
--
dmesg block(512 byte sector number): 8932864
actual block(4096 bytes): 1116608
inode: 267142
path : /etc/ld.so.cache
--
dmesg block(512 byte sector number): 5494784
actual block(4096 bytes): 686848
inode: 134503
path : /usr/bin/cat
--
dmesg block(512 byte sector number): 5494816
actual block(4096 bytes): 686852
inode: 134503
path : /usr/bin/cat
--
dmesg block(512 byte sector number): 5434824
actual block(4096 bytes): 679353
inode: 134513
path : /usr/bin/date
--
dmesg block(512 byte sector number): 5434896
actual block(4096 bytes): 679362
inode: 134513
path : /usr/bin/date
--
dmesg block(512 byte sector number): 5434856
actual block(4096 bytes): 679357
inode: 134513
path : /usr/bin/date
--
dmesg block(512 byte sector number): 5095768
actual block(4096 bytes): 636971
inode: 131025
path : /usr/share/zoneinfo/[censored hihihi]
--
dmesg block(512 byte sector number): 4617216
actual block(4096 bytes): 577152
inode: 133842
path : /usr/bin/sed
--
dmesg block(512 byte sector number): 4617304
actual block(4096 bytes): 577163
inode: 133842
path : /usr/bin/sed
--
dmesg block(512 byte sector number): 4617248
actual block(4096 bytes): 577156
inode: 133842
path : /usr/bin/sed
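
The block-number-to-path resolution that showallblocks performs can be reproduced by hand on an ext4 volume. A sketch (sector number and inode taken from the log above; debugfs is part of e2fsprogs; the debugfs calls are commented out since they need root and the real device):

```shell
# dmesg READ lines report 512-byte sector numbers; ext4 here uses
# 4096-byte blocks, so divide by 8 to get the filesystem block number.
sector=4543096
block=$((sector / 8))
echo "fs block: $block"        # 567887, matching the output above

# Then map block -> inode -> path:
# sudo debugfs -R "icheck $block" /dev/xvda3    # -> inode 134403
# sudo debugfs -R "ncheck 134403" /dev/xvda3    # -> /usr/bin/gawk
```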

summary of files read:
before the dmesg line "stress invoked oom-killer":

$ ./showallblocks |grep 'path : '
path : /usr/bin/gawk
path : /usr/bin/gawk
path : /usr/lib64/libsigsegv.so.2.0.4
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/bin/gawk
path : /usr/lib64/libreadline.so.7.0
path : /usr/lib64/libmpfr.so.4.1.6
path : /usr/bin/gawk
path : /usr/bin/gawk
path : /usr/bin/gawk
path : /usr/lib64/libsigsegv.so.2.0.4
path : /usr/bin/gawk
path : /usr/bin/gawk
path : /usr/bin/nice
path : /usr/bin/nice

after the dmesg line "stress invoked oom-killer":

path : /etc/ld.so.cache
path : /usr/bin/cat
path : /usr/bin/cat
path : /usr/bin/date
path : /usr/bin/date
path : /usr/bin/date
path : /usr/share/zoneinfo/[censored hihihi]
path : /usr/bin/sed
path : /usr/bin/sed
path : /usr/bin/sed
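
The repeated paths above can be tallied with a standard pipeline (shown here with a tiny inline sample so it runs standalone):

```shell
# Count how often each file was re-read, most-re-read first.
# In practice: ./showallblocks | grep 'path : ' | sort | uniq -c | sort -rn
printf 'path : /usr/bin/gawk\npath : /usr/bin/sed\npath : /usr/bin/gawk\n' \
  | sort | uniq -c | sort -rn
```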

So, stuff is still being re-read, which means it was evicted due to memory pressure. If I could get the kernel to not evict those pages, it wouldn't have to re-read them. But maybe they're evicted because they were moved to Inactive(file)? Since I'm already (or so I think) not evicting Active(file) with my patch.

@constantoverride

Owner Author

commented Aug 31, 2018

oh I think I see what's happening:

lru += LRU_ACTIVE;
nr_scanned = targets[lru] - nr[lru];
nr[lru] = targets[lru] * (100 - percentage) / 100;
nr[lru] -= min(nr[lru], nr_scanned);

where lru == LRU_FILE + LRU_ACTIVE, so it's the equivalent of LRU_ACTIVE_FILE - well, almost: LRU_BASE would still need to be added to it... which, it turns out, is zero:

#define LRU_BASE 0
#define LRU_ACTIVE 1
#define LRU_FILE 2

enum lru_list {
        LRU_INACTIVE_ANON = LRU_BASE,
        LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
        LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
        LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
        LRU_UNEVICTABLE,
        NR_LRU_LISTS
};

well, if LRU_BASE ever gets changed from 0, lots of code is going to break...
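
For reference, the scan-count adjustment in the first snippet can be traced with toy numbers (all values here are invented; min() is expanded by hand):

```shell
# Toy trace of:
#   nr_scanned = targets[lru] - nr[lru];
#   nr[lru] = targets[lru] * (100 - percentage) / 100;
#   nr[lru] -= min(nr[lru], nr_scanned);
targets=1000     # pages originally planned to scan on this LRU
nr=400           # pages still left to scan
percentage=30    # percentage value from the proportional logic (invented)
nr_scanned=$((targets - nr))                  # 600 scanned so far
nr=$((targets * (100 - percentage) / 100))    # shrink the target to 700
min=$nr
[ "$nr_scanned" -lt "$min" ] && min=$nr_scanned
nr=$((nr - min))                              # 700 - 600 = 100 left to scan
echo "$nr"
```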

le9c.patch seems to have no effect on the above

Now I'm thinking that maybe some Inactive(file) pages still come from the executables (cat, bash, sudo). Since the kernel is evicting those, it's only those pages (the entire executable file? nah, probably only the non-active parts of the executable file (mmap?)) that are being re-read from disk. So, if I can find a way to not evict Inactive(file) pages that belong to the same executable that's in Active(file), there should be almost no disk re-reading? Except for things like gnome-terminal, which will re-read its TrueType font(s) (ttf).

@constantoverride

Owner Author

commented Aug 31, 2018

Too hard (for me): just for testing, try to not evict ANY Inactive(file) pages (since they only fill up to about 1504kB during my current stress tests), and then see what gets read.
I'm instead trying something that I missed in le9d.patch.

EDIT: no difference... compared with the above: https://gist.github.com/constantoverride/84eba764f487049ed642eb2111a20830#gistcomment-2694645

@constantoverride

Owner Author

commented Sep 6, 2018

For the previous showallblocks see: https://gist.github.com/constantoverride/84eba764f487049ed642eb2111a20830/022e70d917ae52dc1016c6f464cada2dabd8c716#file-showallblocks-bash

Got new showallblocks update: https://gist.github.com/constantoverride/84eba764f487049ed642eb2111a20830/76ec2a62f40e7ca6619dddadb8c0016fac8618d6#file-showallblocks-bash

sample output now:

[ 6030.638492] bash(20757): READ block 4543096 on xvda3 (32 sectors) /usr/bin/gawk
[ 6030.639133] awk(20757): READ block 4544280 on xvda3 (176 sectors) /usr/bin/gawk
[ 6030.639796] awk(20757): READ block 4543056 on xvda3 (32 sectors) /usr/lib64/libsigsegv.so.2.0.4
[ 6030.640109] awk(20757): READ block 4635648 on xvda3 (32 sectors) /usr/lib64/libmpfr.so.4.1.6
[ 6030.640456] awk(20757): READ block 4636416 on xvda3 (8 sectors) /usr/lib64/libmpfr.so.4.1.6
[ 6030.640690] awk(20757): READ block 4636296 on xvda3 (120 sectors) /usr/lib64/libmpfr.so.4.1.6
[ 6030.640700] awk(20757): READ block 4636424 on xvda3 (128 sectors) /usr/lib64/libmpfr.so.4.1.6
[ 6030.641400] awk(20757): READ block 4635680 on xvda3 (224 sectors) /usr/lib64/libmpfr.so.4.1.6
[ 6030.642068] awk(20757): READ block 4613896 on xvda3 (32 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.642487] awk(20757): READ block 4614864 on xvda3 (8 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.642684] awk(20757): READ block 4614752 on xvda3 (112 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.642695] awk(20757): READ block 4614872 on xvda3 (136 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.643474] awk(20757): READ block 4613928 on xvda3 (224 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.644184] awk(20757): READ block 6379440 on xvda3 (256 sectors)
[ 6030.645300] awk(20757): READ block 4543128 on xvda3 (256 sectors) /usr/bin/gawk
[ 6030.645324] awk(20757): READ block 4614152 on xvda3 (256 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.645384] awk(20757): READ block 4635904 on xvda3 (256 sectors) /usr/lib64/libmpfr.so.4.1.6
[ 6030.645430] awk(20757): READ block 6376600 on xvda3 (96 sectors)
[ 6030.647204] awk(20757): READ block 4543384 on xvda3 (256 sectors) /usr/bin/gawk
[ 6030.647309] awk(20757): READ block 4544008 on xvda3 (256 sectors) /usr/bin/gawk
[ 6030.648492] awk(20757): READ block 4543640 on xvda3 (232 sectors) /usr/bin/gawk
[ 6030.649291] awk(20757): READ block 4543088 on xvda3 (8 sectors) /usr/lib64/libsigsegv.so.2.0.4
[ 6030.649341] awk(20757): READ block 4543872 on xvda3 (136 sectors) /usr/bin/gawk
[ 6030.649974] awk(20757): READ block 4544264 on xvda3 (16 sectors) /usr/bin/gawk
[ 6030.650041] awk(20757): READ block 4614552 on xvda3 (200 sectors) /usr/lib64/libgmp.so.10.3.2
[ 6030.651732] bash(20758): READ block 5437576 on xvda3 (32 sectors) /usr/bin/nice
[ 6030.652131] nice(20758): READ block 5437608 on xvda3 (56 sectors) /usr/bin/nice
[ 6031.011836] watch(3913): READ block 6645240 on xvda3 (136 sectors)
[ 6031.011860] watch(3913): READ block 6645384 on xvda3 (80 sectors)
[ 6031.011868] watch(3913): READ block 6645480 on xvda3 (16 sectors)
[ 6031.012042] dmesg(3284): READ block 6645616 on xvda3 (64 sectors)
[ 6031.012056] systemd-journal(286): READ block 6645760 on xvda3 (128 sectors)
[ 6031.012068] systemd-journal(286): READ block 6645896 on xvda3 (16 sectors)
[ 6031.012872] watch(3913): READ block 6644176 on xvda3 (80 sectors)
[ 6031.012887] watch(3913): READ block 6644288 on xvda3 (72 sectors)
[ 6031.013000] dmesg(3284): READ block 3440864 on xvda3 (208 sectors)
[ 6031.013236] systemd-journal(286): READ block 6697856 on xvda3 (256 sectors) /usr/lib/systemd/libsystemd-shared-238.so
[ 6031.013644] watch(3913): READ block 6646736 on xvda3 (184 sectors)

Note: I used the old dmesg1.log and the system has since been updated/things changed, so maybe that's why some files aren't found? (I forget whether they were missing before anyway.)

@constantoverride

Owner Author

commented Sep 8, 2018

@constantoverride

Owner Author

commented Sep 8, 2018

I can trigger OOM after 30 seconds of disk thrashing in sys-net on Fedora 28 with no swap (with the default 400MB initial memory and kernel 4.14.67-1, currently the latest official one) by trying to execute a nonexistent command, e.g. "sud", which runs packagekitd. If swap is enabled (it is by default, at 1GiB), then the disk thrashing lasts longer and this time includes writes too, but no OOM gets triggered! (Swap usage peaks at 440MB; once packagekitd is done it stays constant at 201MB.)

With no swap and the 4.18.5-9 kernel, 500MB initial memory is needed, or else (with 400MB) Xorg gets OOM-killed instantly on startup (thanks to le9d.patch), and then packagekitd does too when trying the above "sud" idea.

@ValdikSS


commented Dec 7, 2018

For me, thrashing is much worse with the patchset than without it. With the patchset, resource unloading seems to happen very often, much more often than it should, and everything gets swapped out for some reason. With 2 GB RAM, the kernel tries to swap out everything it has: used RAM is usually 400-700 MB (1+ GB free) while 1 GB of swap is occupied.

@RussianNeuroMancer


commented Dec 16, 2018

@ValdikSS

For me thrashing is much worse with patchset than without it.

Which version of le9 did you test? Did you adjust sysctl.conf by any chance?

@ValdikSS


commented Dec 16, 2018

@RussianNeuroMancer, le9d.patch, the latest one. The only sysctl.conf option which may have an impact is vm.min_free_kbytes=4096.

@RussianNeuroMancer


commented Dec 19, 2018

I reverted all applied workarounds and tested le9d.patch in this particular scenario. Unfortunately, I confirm @ValdikSS's observation: pushing to swap is too aggressive. In my case only 150 MB of memory was utilized and everything else ended up in swap, which obviously makes the system unusable. A few minutes later, after an attempt to launch a web browser while a messenger was running, Gnome Shell (Wayland session) crashed. An attempt to remove the patched kernel packages resulted in the package manager being killed by the OOM-killer while there was plenty of free memory.

@xftroxgpx


commented Jan 17, 2019

I haven't tested this WITH swap! (swap is disabled in my kernel) but WITHOUT swap it works pretty well for me, as of https://github.com/xftroxgpx/a3/blob/fa97258406c464627af36fa5eaafc65cb51976cf/system/Z575/OSes/3archlinux/on_baremetal/filesystem_now/archlinux/home/xftroxgpx/build/1packages/4used/kernel/linuxgit/le9d.patch#L1
I still have to use freepagecachepages_automatically 2000000 (script) to clean up excessive memory usage, to prevent the OOM-killer from activating in certain situations and killing chromium/Xorg. That script just does echo 1 | sudo tee /proc/sys/vm/drop_caches (as root) whenever the value of Active(file) from /proc/meminfo is over 2G (even though it goes no higher than 4G for me on my 16G RAM laptop) - btw, this value is 4 times the value of nr_active_file from /proc/vmstat (that counter is in 4 KiB pages, meminfo is in kB).
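
A minimal sketch of one pass of such a watchdog (the 2000000 kB threshold matches the script argument above; the rest is my reconstruction, not the linked script; run it periodically as root):

```shell
#!/bin/sh
# One pass of a drop_caches watchdog: drop clean page-cache pages when
# Active(file) has grown past a threshold.
THRESHOLD_KB=2000000    # 2 GB, matching the script argument above

active_kb=$(awk '/^Active\(file\):/ {print $2}' /proc/meminfo)
if [ "${active_kb:-0}" -gt "$THRESHOLD_KB" ]; then
    # Writing drop_caches needs root; print a note instead of failing.
    { echo 1 > /proc/sys/vm/drop_caches; } 2>/dev/null \
        || echo "drop_caches: needs root"
fi
```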

and my applied sysctl settings can be seen here

@xftroxgpx


commented Jan 17, 2019

@ValdikSS I wonder if vm.swappiness = 0 would have any beneficial effect for you.

@constantoverride

Owner Author

commented Jan 17, 2019

I put a warning inside the patch so that those who have swap enabled don't try using it.
I appreciate the feedback, thank you!
I hope I'll get to test this with swap sooner than I imagine (months), and perhaps attempt more changes to the patch.

my sysctl.d settings
[user@dev02 qubes-src]$ grep . -H /etc/sysctl.d/*
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:#this has no effect without kernel.printk_devkmsg=on
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:#so when kernel.printk_devkmsg=ratelimit then the following printk_ratelimit=0 has no effect!
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:kernel.printk_ratelimit = 0
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:kernel.printk_devkmsg=on
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:#Couldn't write 'on' to 'kernel/printk_devkmsg', ignoring: Invalid argument
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:#kernel.printk_devkmsg=1 #Couldn't write '1' to 'kernel/printk_devkmsg', ignoring: Invalid argument
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:#^already set in kernel cmdline so the above cannot be changed!(has no effect! hence the error!) as: printk.devkmsg=on
/etc/sysctl.d/01_no_dmesg_ratelimiting.conf:#src: https://patchwork.kernel.org/patch/9216393/
/etc/sysctl.d/10_delay_writes.conf:#vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check to see if work needs to be done.
/etc/sysctl.d/10_delay_writes.conf:#600 secs (10mins)
/etc/sysctl.d/10_delay_writes.conf:#echo 60000 > /proc/sys/vm/dirty_writeback_centisecs
/etc/sysctl.d/10_delay_writes.conf:vm.dirty_writeback_centisecs=60000
/etc/sysctl.d/10_delay_writes.conf:#unspecified(with tlp) is: vm.dirty_writeback_centisecs = 60000
/etc/sysctl.d/10_delay_writes.conf:#unspecified(w/o tlp) is: vm.dirty_writeback_centisecs = 500
/etc/sysctl.d/10_delay_writes.conf:#vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it’s 30 seconds. When the pdflush/flush/kdmflush processes kick in they will check to see how old a dirty page is, and if it’s older than this value it’ll be written asynchronously to disk. Since holding a dirty page in memory is unsafe this is also a safeguard against data loss.
/etc/sysctl.d/10_delay_writes.conf:#3600 secs (1 hour)
/etc/sysctl.d/10_delay_writes.conf:#echo 360000 > /proc/sys/vm/dirty_expire_centisecs
/etc/sysctl.d/10_delay_writes.conf:vm.dirty_expire_centisecs=360000
/etc/sysctl.d/10_delay_writes.conf:#unspecified(with tlp) is: vm.dirty_expire_centisecs = 60000
/etc/sysctl.d/10_delay_writes.conf:#unspecified(w/o tlp) is: vm.dirty_expire_centisecs = 3000
/etc/sysctl.d/10_delay_writes.conf:#descriptions from: https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
/etc/sysctl.d/10_delay_writes.conf:##vm.dirtytime_expire_seconds = 43200  #w/o tlp if unspecified
/etc/sysctl.d/10_swappiness.conf:#added by me, https://askubuntu.com/questions/432809/why-is-kswapd0-running-on-a-computer-with-no-swap
/etc/sysctl.d/10_swappiness.conf:#https://groups.google.com/d/msg/qubes-users/aSPefKH223U/ahafhI5qCQAJ
/etc/sysctl.d/10_swappiness.conf:vm.swappiness = 0
/etc/sysctl.d/20-idea.conf:fs.inotify.max_user_watches = 524288
/etc/sysctl.d/20-idea.conf:#fs.inotify.max_user_watches = 8192
/etc/sysctl.d/20-idea.conf:#^ default
/etc/sysctl.d/20-idea.conf:#
/etc/sysctl.d/20-idea.conf:#sudo sysctl -p --system
/etc/sysctl.d/20-idea.conf:#src: https://confluence.jetbrains.com/display/IDEADEV/Inotify+Watches+Limit
/etc/sysctl.d/20_tcp_timestamps.conf:net.ipv4.tcp_timestamps=0
/etc/sysctl.d/30_oom.conf:#from: https://unix.stackexchange.com/a/87769/306023
/etc/sysctl.d/30_oom.conf:#The heuristic used by the OOM-killer can be modified through the vm.oom_kill_allocating_task sysctl setting. The possible values are as follows:
/etc/sysctl.d/30_oom.conf:#
/etc/sysctl.d/30_oom.conf:#    0 (default) The OOM-killer will scan through the task list and select a rogue task utilizing a lot of memory to kill.
/etc/sysctl.d/30_oom.conf:#
/etc/sysctl.d/30_oom.conf:#    1 (non-zero) The OOM-killer will kill the task that triggered the out-of-memory condition.
/etc/sysctl.d/30_oom.conf:vm.oom_kill_allocating_task=0
/etc/sysctl.d/30_oom.conf:#^ set back to 0 because 1 kills gnome-terminal while compiling firefox, for example
/etc/sysctl.d/30_oom.conf:#The kernel memory accounting algorithm can be tuned with the vm.overcommit_memory sysctl settings. The possible values are as follows:
/etc/sysctl.d/30_oom.conf:#
/etc/sysctl.d/30_oom.conf:#    0 (default) Heuristic overcommit with weak checks.
/etc/sysctl.d/30_oom.conf:#
/etc/sysctl.d/30_oom.conf:#    1 Always overcommit, no checks.
/etc/sysctl.d/30_oom.conf:#
/etc/sysctl.d/30_oom.conf:#    2 Strict accounting, in this mode the virtual address space limit is determined by the value of vm.overcommit_ratio settings according to the following formula:
/etc/sysctl.d/30_oom.conf:#
/etc/sysctl.d/30_oom.conf:#    virtual memory = (swap + physical memory * (overcommit_ratio / 100))
/etc/sysctl.d/30_oom.conf:#vm.overcommit_memory=2
/etc/sysctl.d/30_oom.conf:#^ this causes mem allocations to fail instantly instead of disk thrash! only if vm.overcommit_ratio=50 (the default), but not when =200 (which brings disk thrashing back)
/etc/sysctl.d/30_oom.conf:#default is 0
/etc/sysctl.d/30_oom.conf:vm.overcommit_memory=1
/etc/sysctl.d/30_oom.conf:#^ leave on 0 or 1, or else gnome-terminal randomly disappears (with =2)
/etc/sysctl.d/30_oom.conf:#vm.overcommit_ratio=50
/etc/sysctl.d/30_oom.conf:# default is 50 (%)
/etc/sysctl.d/50-gentoo-other.conf:# When the kernel panics, automatically reboot in 3 seconds                     
/etc/sysctl.d/50-gentoo-other.conf:#kernel.panic = 3
/etc/sysctl.d/50-gentoo-other.conf:# Allow for more PIDs (cool factor!); may break some programs
/etc/sysctl.d/50-gentoo-other.conf:#kernel.pid_max = 999999
/etc/sysctl.d/50-gentoo-other.conf:# You should compile nfsd into the kernel or add it
/etc/sysctl.d/50-gentoo-other.conf:# to modules.autoload for this to work properly
/etc/sysctl.d/50-gentoo-other.conf:# TCP Port for lock manager
/etc/sysctl.d/50-gentoo-other.conf:#fs.nfs.nlm_tcpport = 0
/etc/sysctl.d/50-gentoo-other.conf:# UDP Port for lock manager
/etc/sysctl.d/50-gentoo-other.conf:#fs.nfs.nlm_udpport = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#this is manually taken from a Manjaro install of ufw, file: /etc/ufw/sysctl.conf
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#and mixed in with gentoo's sysctl.conf
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#descriptions: ${HOME}/build/1packages/kernel/linuxgit/makepkg_pacman/linux-git/src/linux-git/Documentation/networking/ip-sysctl.txt
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Uncomment this to allow this host to route packets between interfaces
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.ip_forward=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv6.conf.default.forwarding=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv6.conf.all.forwarding=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disables packet forwarding
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.ip_forward = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#^ This variable is special, its change resets all configuration parameters to their default state (RFC1122 for hosts, RFC1812 for routers) - so it must be first, then, right?!
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#Adding these(following ones) because it seems to have been also reset by setting net.ipv4.ip_forward above(eg. from 1 to 0):
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.forwarding = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.conf.enp1s0.forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.conf.lo.forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#continuing normally:
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disables IP dynaddr
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.ip_dynaddr = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disable ECN
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.tcp_ecn = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#From redhat doc:
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#disable forwarding of all multicast packets on all interfaces.
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.mc_forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.mc_forwarding=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Turn on Source Address Verification in all interfaces to prevent some
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# spoofing attacks
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Enables source route verification
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# 0  No source validation.
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# 1  Strict mode as defined in RFC 3704.
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# 2  Loose mode as defined in RFC 3704.
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#see also Documentation/networking/ip-sysctl.txt
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.rp_filter = 1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Enable reverse path
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.rp_filter = 1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Do not accept IP source route packets (we are not a router)
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disable source route
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.accept_source_route = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.accept_source_route = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.accept_source_route=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.accept_source_route=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disable ICMP redirects. ICMP redirects are rarely used but can be used in
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# MITM (man-in-the-middle) attacks. Disabling ICMP may disrupt legitimate
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# traffic to those sites.(but this redirects only, right?! so why even say that)
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disable redirects
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# "Accepting ICMP redirects has few legitimate uses. Disable the acceptance and sending of ICMP redirected packets unless specifically required.
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#The .all.accept_redirects disables "acceptance of all ICMP redirected packets on all interfaces."
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.accept_redirects = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.accept_redirects = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.accept_redirects=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.accept_redirects=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Disable secure redirects
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.secure_redirects = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.secure_redirects = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.secure_redirects=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.secure_redirects=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#Next send_redirects added by me, found description later:
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#disables sending of all IPv4 ICMP redirected packets on all interfaces.
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#src: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-Server_Security-Disable-Source-Routing.html
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.send_redirects =0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.send_redirects = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.send_redirects =0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.send_redirects = 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Ignore ICMP broadcasts (ECHO! bcasts, not all icmp bcasts, right?)
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.icmp_echo_ignore_broadcasts = 1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Ignore bogus ICMP errors
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.icmp_ignore_bogus_error_responses=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#pinging me? ignore
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.icmp_echo_ignore_all=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Don't log Martian Packets (impossible packets)
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.conf.default.log_martians=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.conf.all.log_martians=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#actually do log them:
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.default.log_martians=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.conf.all.log_martians=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Change to '1' to enable TCP/IP SYN cookies This disables TCP Window Scaling
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# (http://lkml.org/lkml/2008/2/5/167)
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Enable SYN cookies (yum!)
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# http://cr.yp.to/syncookies.html
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.tcp_syncookies = 1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Default is 60
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.tcp_fin_timeout=30
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Default is 75
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#net.ipv4.tcp_keepalive_intvl=1800
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# normally allowing tcp_sack is ok, but if going through OpenBSD 3.8 RELEASE or
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# earlier pf firewall, should set this to 0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv4.tcp_sack=1
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Uncomment this to turn off ipv6 autoconfiguration
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.autoconf=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.autoconf=0
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:# Uncomment this to enable ipv6 privacy addressing
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.default.use_tempaddr=2
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:net.ipv6.conf.all.use_tempaddr=2
/etc/sysctl.d/50-gentoo_s_sysctl-mixedinwith-ufw.conf:#the ipv6 stuff doesn't exist when ipv6 isn't enabled in kernel, eg. no /proc/sys/net/ipv6/ but we leave these on for the times when we're booting stock arch kernels
/etc/sysctl.d/60-conntrack.conf:#src: ${HOME}/build/1packages/kernel/linuxgit/makepkg_pacman/linux-git/src/linux-git/Documentation/networking/nf_conntrack-sysctl.txt                   
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_log_invalid=255
/etc/sysctl.d/60-conntrack.conf:#nf_conntrack_log_invalid - INTEGER
/etc/sysctl.d/60-conntrack.conf:#  0   - disable (default)
/etc/sysctl.d/60-conntrack.conf:#  1   - log ICMP packets
/etc/sysctl.d/60-conntrack.conf:#  6   - log TCP packets
/etc/sysctl.d/60-conntrack.conf:#  17  - log UDP packets
/etc/sysctl.d/60-conntrack.conf:#  33  - log DCCP packets
/etc/sysctl.d/60-conntrack.conf:#  41  - log ICMPv6 packets
/etc/sysctl.d/60-conntrack.conf:#  136 - log UDPLITE packets
/etc/sysctl.d/60-conntrack.conf:#  255 - log packets of any protocol
/etc/sysctl.d/60-conntrack.conf:#
/etc/sysctl.d/60-conntrack.conf:#  Log invalid packets of a type specified by value.
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_checksum=1
/etc/sysctl.d/60-conntrack.conf:#nf_conntrack_checksum - BOOLEAN
/etc/sysctl.d/60-conntrack.conf:#  0 - disabled
/etc/sysctl.d/60-conntrack.conf:#  not 0 - enabled (default)
/etc/sysctl.d/60-conntrack.conf:#
/etc/sysctl.d/60-conntrack.conf:#  Verify checksum of incoming packets. Packets with bad checksums are
/etc/sysctl.d/60-conntrack.conf:#  in INVALID state. If this is enabled, such packets will not be
/etc/sysctl.d/60-conntrack.conf:#  considered for connection tracking.
/etc/sysctl.d/60-conntrack.conf:#default 600
/etc/sysctl.d/60-conntrack.conf:#Default for generic timeout.  This refers to layer 4 unknown/unsupported protocols.
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_generic_timeout=100
/etc/sysctl.d/60-conntrack.conf:#default 30
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_icmp_timeout=5
/etc/sysctl.d/60-conntrack.conf:#default 3
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_max_retrans=1
/etc/sysctl.d/60-conntrack.conf:#default 60
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_close_wait=10
/etc/sysctl.d/60-conntrack.conf:#default 120
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_fin_wait=10
/etc/sysctl.d/60-conntrack.conf:#default 30
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_last_ack=10
/etc/sysctl.d/60-conntrack.conf:#default 300
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_max_retrans=20
/etc/sysctl.d/60-conntrack.conf:#default 60
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_syn_recv=10
/etc/sysctl.d/60-conntrack.conf:#default 120
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_syn_sent=10
/etc/sysctl.d/60-conntrack.conf:#default 120
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_time_wait=10
/etc/sysctl.d/60-conntrack.conf:#default 300
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_tcp_timeout_unacknowledged=20
/etc/sysctl.d/60-conntrack.conf:#default: false(0)
/etc/sysctl.d/60-conntrack.conf:#not 0 - enabled
/etc/sysctl.d/60-conntrack.conf:# Enable connection tracking flow timestamping.
/etc/sysctl.d/60-conntrack.conf:nf_conntrack_timestamp=1
/etc/sysctl.d/60-disable_ipv6.conf:#src: https://askubuntu.com/a/309463
/etc/sysctl.d/60-disable_ipv6.conf:net.ipv6.conf.all.disable_ipv6 = 1
/etc/sysctl.d/60-disable_ipv6.conf:net.ipv6.conf.default.disable_ipv6 = 1
/etc/sysctl.d/60-disable_ipv6.conf:net.ipv6.conf.lo.disable_ipv6 = 1
/etc/sysctl.d/60-disable_ipv6.conf:#src:https://askubuntu.com/a/672302
/etc/sysctl.d/60-disable_ipv6.conf:net.ipv6.conf.enp1s0.disable_ipv6 = 1
/etc/sysctl.d/80-default_TTL.conf:#128 + 2 routers
/etc/sysctl.d/80-default_TTL.conf:#net.ipv4.ip_default_ttl = 130
/etc/sysctl.d/80-default_TTL.conf:#default value = 64
/etc/sysctl.d/80-default_TTL.conf:#TODO: set to this instead:
/etc/sysctl.d/80-default_TTL.conf:#64 + 2 routers
/etc/sysctl.d/80-default_TTL.conf:#net.ipv4.ip_default_ttl = 66
/etc/sysctl.d/80-default_TTL.conf:net.ipv4.ip_default_ttl = 65
/etc/sysctl.d/95-sysrq.conf:kernel.sysrq=1
/etc/sysctl.d/99-sysctl.conf:# sysctl settings are defined through files in
/etc/sysctl.d/99-sysctl.conf:# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
/etc/sysctl.d/99-sysctl.conf:#
/etc/sysctl.d/99-sysctl.conf:# Vendors settings live in /usr/lib/sysctl.d/.
/etc/sysctl.d/99-sysctl.conf:# To override a whole file, create a new file with the same name in
/etc/sysctl.d/99-sysctl.conf:# /etc/sysctl.d/ and put new settings there. To override
/etc/sysctl.d/99-sysctl.conf:# only specific settings, add a file with a lexically later
/etc/sysctl.d/99-sysctl.conf:# name in /etc/sysctl.d/ and put new settings there.
/etc/sysctl.d/99-sysctl.conf:#
/etc/sysctl.d/99-sysctl.conf:# For more information, see sysctl.conf(5) and sysctl.d(5).
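The 99-sysctl.conf comment above hinges on lexical ordering: files in sysctl.d are applied in name order, so for a duplicated key the lexically later file wins. A minimal sketch of that precedence, using a scratch directory and hypothetical file names/values (not the actual system files):

```shell
# Simulate sysctl.d precedence in a scratch directory (hypothetical files):
d="$(mktemp -d)"
echo 'net.ipv4.ip_default_ttl = 64' > "$d/50-base.conf"
echo 'net.ipv4.ip_default_ttl = 65' > "$d/99-override.conf"
# The glob expands in lexical order; for a duplicated key the last value wins,
# which mirrors how sysctl --system applies the drop-in files.
ttl="$(cat "$d"/*.conf | awk -F'= *' '/ip_default_ttl/{v=$2} END{print v}')"
echo "effective ip_default_ttl: $ttl"   # 65, from the lexically later file
rm -r "$d"
```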
@constantoverride (Owner, Author) commented Feb 6, 2019
I pretty much forgot what I did before (i.e. in this very thread above), so I re-figured out how to reproduce the disk thrashing: https://groups.google.com/d/msg/qubes-devel/IPzCbDvjgu4/CWohO_4LFAAJ
To reiterate, get a Qubes AppVM with these settings:
Initial Memory: 14000 MB
Max memory: 14000 MB
[ ] Include in memory balancing
(that is, deselect that checkbox)
and use the official Qubes VM kernel (i.e. not the le9d-patched one).
[user@dev01-w-s-f-fdr28 ~]$ timeout=10s threads=1 alloc="$(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)"; echo "Will alloc: $alloc kB for each of the $threads concurrent threads."; echo "MemTotal before: $(awk '/MemTotal/{printf "%d kB\n", $2;}' < /proc/meminfo)";time stress --vm-bytes "${alloc}k" --vm-keep -m $threads --timeout $timeout ; echo "exit code: $?" ; awk '/MemTotal/{printf "MemTotal afterwards: %d kB\n", $2;}' < /proc/meminfo
Will alloc: 13510504 kB for each of the 1 concurrent threads.
MemTotal before: 14002196 kB
stress: info: [2191] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [2191] successful run completed in 10s

real    0m10.268s
user    0m7.528s
sys    0m2.664s
exit code: 0
MemTotal afterwards: 14002196 kB

So this won't trigger the OOM-killer, but it will disk thrash (disk reading only) for less than 10 seconds (probably less than 5).
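The `alloc` computation buried in the one-liner above is just MemAvailable (in kB) plus 4000 kB, so the stress worker asks for slightly more than is actually free. A standalone sketch, fed a sample MemAvailable line reconstructed from the transcript's numbers rather than read live from /proc/meminfo:

```shell
# alloc = MemAvailable + 4000 kB, so the allocation overshoots free memory.
# Sample line implied by the transcript above (13510504 - 4000 = 13506504):
meminfo='MemAvailable:   13506504 kB'
alloc="$(printf '%s\n' "$meminfo" | awk '/MemAvailable/{printf "%d\n", $2 + 4000;}')"
echo "Will alloc: ${alloc} kB"   # 13510504, matching the transcript
```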
