Xiaomi Redmi Note 4X and memory management
TL;DR
First and foremost:
If you want some apps not to be killed/closed, disable battery optimization for them and/or lock them in the recent apps list.
Ex. in Pie or Oreo: Settings / Apps and notifications / Special app access / look for your app(s) / Battery optimization -> disable
Pick an LMK profile according to your usage pattern (and fine-tune it if you want):
Heavy Gaming only:
Foreground Applications: 50MB
Visible Applications: 70MB
Secondary Server: 110MB
Hidden Applications: 150MB
Content Providers: 250MB
Empty Applications: 380MB
Adaptive LMK: ON
vfs_cache_pressure: 50
Light / multitasking:
Foreground Applications: 50MB
Visible Applications: 70MB
Secondary Server: 90MB
Hidden Applications: 110MB
Content Providers: 180MB
Empty Applications: 230MB
Adaptive LMK: OFF
vfs_cache_pressure: 150
Common tweaks:
vm.dirty_ratio: 1
vm.dirty_background_ratio: 3
extra_free_kbytes: 0
Disable zRAM or at least decrease the vm.swappiness to 10 or even 5 (or 1)
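A minimal sketch of applying the "Heavy Gaming" profile and the common tweaks by hand from a root shell; the enable_adaptive_lmk parameter and the extra_free_kbytes sysctl are kernel-dependent assumptions, so check that the paths exist on your ROM first:

# minfree thresholds in 4KB pages (MB * 256): 50/70/110/150/250/380 MB
echo "12800,17920,28160,38400,64000,97280" > /sys/module/lowmemorykiller/parameters/minfree
# Adaptive LMK on (parameter name as seen on typical msm kernels)
echo 1 > /sys/module/lowmemorykiller/parameters/enable_adaptive_lmk
# common VM tweaks from the list above
echo 50 > /proc/sys/vm/vfs_cache_pressure
echo 1 > /proc/sys/vm/dirty_ratio
echo 3 > /proc/sys/vm/dirty_background_ratio
echo 0 > /proc/sys/vm/extra_free_kbytes    # only on kernels carrying the Android patch
echo 10 > /proc/sys/vm/swappiness          # instead of (or before) disabling zRAM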
-------------------------------------------------------------------------------
LMK:
https://www.droidviews.com/tweak-android-low-memory-killer-needs/
https://android.googlesource.com/platform/system/core/+/master/lmkd/README.md
https://source.android.com/devices/tech/perf/low-ram
How does tweaking Low Memory Killer work?
Android manages low memory states using a Kernel module known as Low Memory Killer (LMK).
LMK is a process killer, which is designed to work in cooperation with the Android Framework.
Each process is assigned a special value by the Framework when it starts: the oom_adj value (oom_score_adj on newer kernels).
This value defines how important the process is for the system (its priority), and thus how easily it can be killed.
Values vary between -17 and +15 (0 to 1000 for oom_score_adj).
Higher values are given to less important processes, which are killed first.
Moreover, LMK defines six categories of processes, each containing processes whose oom_adj/oom_score_adj falls in a specific range of values:
- Foreground Applications
The application currently shown on the screen and running.
- Visible Applications
Applications that might not be shown on the screen currently, but are still running. They might be hidden behind an overlay or have a transparent window.
- Secondary Server
Services running in the background and needed for Apps to function properly. This category includes Google Play Services for example.
- Hidden Applications
Applications that are hidden from the user, but are running in the background.
- Content Providers
These are the services providing content to the system like contacts provider, location provider etc.
- Empty Applications
This category contains Applications that the user exited, but Android still keeps in RAM. They do not steal any CPU time or cause any power drain.
The Empty Applications category contains the processes with the highest oom_adj/oom_score_adj values, while the Foreground Applications category contains the apps with the lowest values. Also, each of the categories above has its own memory threshold, and when free RAM falls below that threshold, LMK starts killing processes inside that category, beginning with the lowest-priority processes.
Android ActivityManager is responsible for assigning oom_adj/oom_score_adj values for each process at run-time. Moreover, the system assigns memory thresholds to each process category at boot time. However, category thresholds can be changed directly through the kernel SysFS after boot. This makes room for configuring the Android Memory Management Subsystem according to personal needs.
The Sysfs parameter file to tweak it is:
/sys/module/lowmemorykiller/parameters/minfree
Six LMK values are available; their order corresponds to the order of the process categories listed above.
The values are expressed in pages (1 page is 4KB of RAM) and they are thresholds of free memory ("Linux" free memory, not "Windows" free memory): when free RAM drops below a threshold, LMK starts killing processes (the ones with higher oom_score_adj values, i.e. the less important ones) to get more free RAM.
The memory thresholds should be set in increasing order from the Foreground Applications category to the Empty Applications category.
So: Foreground Applications < Visible Applications < Secondary Server < Hidden Applications < Content Providers < Empty Applications.
Most Android modders only change the thresholds of the Empty and Hidden Applications categories, since these have the most impact on device responsiveness and multitasking capabilities.
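For reference, a quick way to inspect the current thresholds and to convert a value in MB into the pages the kernel expects (root shell assumed):

cat /sys/module/lowmemorykiller/parameters/minfree   # six comma-separated values, in 4KB pages
echo $((150 * 1024 / 4))                             # 150MB expressed in pages -> 38400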
Adaptive LMK: setting LMK to adapt to vmpressure
https://android.googlesource.com/kernel/msm.git/+/920cb1d977658ebde8b76af960dbc464b2ec7728
Sometimes (a lot of the time, to be honest) it is better to kill a task at a lower adj than to keep thrashing.
The basic idea here is to make LMK more aggressive dynamically when a thrashing scenario is detected.
To detect thrashing adaptive LMK uses vmpressure events.
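On msm kernels the switch for this is usually a module parameter next to minfree; the exact name can vary per kernel, so listing the parameter directory is the safest check (enable_adaptive_lmk below is an assumption based on typical Qualcomm kernels):

ls /sys/module/lowmemorykiller/parameters/
cat /sys/module/lowmemorykiller/parameters/enable_adaptive_lmk   # 1 = on, 0 = off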
LMK defaults are around this list - derived from the Qualcomm init.rc scripts:
Foreground Applications: 72MB
Visible Applications: 90MB
Secondary Server: 108MB
Hidden Applications: 126MB
Content Providers: 216MB
Empty Applications: 315MB
Adaptive LMK: ON
In my opinion this is not necessarily bad.
Two "profiles" I would say can fit most users a little bit better:
(In my opinion there's no optimal setting for all users, everyone has their own usage habits)
Heavy Gaming only:
Foreground Applications: 50MB
Visible Applications: 70MB
Secondary Server: 110MB
Hidden Applications: 150MB
Content Providers: 250MB
Empty Applications: 380MB
Adaptive LMK: ON
This profile will aggressively start killing processes at 380MB of free RAM. At 150MB it will start killing Hidden Applications. Foreground and Visible Applications categories have lower thresholds to make sure that, if a game makes use of big amounts of memory, it will not get killed by the LMK while it is running on the screen.
Light / multitasking:
Foreground Applications: 50MB
Visible Applications: 70MB
Secondary Server: 90MB
Hidden Applications: 110MB
Content Providers: 180MB
Empty Applications: 230MB
Adaptive LMK: OFF
On devices with a small number of applications installed, this profile will trigger LMK less frequently, saving some CPU cycles and reducing battery drain.
From Pie onwards there is a new option for when there is no in-kernel LMK driver: the userspace lmkd.
Build.prop values available:
ro.config.low_ram: choose between low-memory vs high-performance device. Default = false.
ro.lmk.use_minfree_levels: use free memory and file cache thresholds for making decisions when to kill. This mode works the same way kernel lowmemorykiller driver used to work. Default = false
ro.lmk.low: min oom_adj score for processes eligible to be killed at low vmpressure level. Default = 1001 (disabled)
ro.lmk.medium: min oom_adj score for processes eligible to be killed at medium vmpressure level. Default = 800 (non-essential processes)
ro.lmk.critical: min oom_adj score for processes eligible to be killed at critical vmpressure level. Default = 0 (all processes)
ro.lmk.critical_upgrade: enables upgrade to critical level. Default = false
ro.lmk.upgrade_pressure: max mem_pressure at which level will be upgraded because system is swapping too much. Default = 100 (disabled)
ro.lmk.downgrade_pressure: min mem_pressure at which vmpressure event will be ignored because enough free memory is still available. Default = 100 (disabled)
ro.lmk.kill_heaviest_task: kill heaviest eligible task (best decision) vs. any eligible task (fast decision). Default = false
ro.lmk.kill_timeout_ms: duration in ms after a kill during which no additional kills will be done. Default = 0 (disabled)
ro.lmk.debug: enable lmkd debug logs. Default = false
Values and their meanings are documented at https://source.android.com/devices/tech/perf/lmkd
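As a sketch, such properties normally go into build.prop (or a Magisk module's system.prop) and are read by lmkd at startup, so a reboot is needed for changes to apply; the values below are only illustrative, not tuned recommendations:

# make userspace lmkd behave like the old in-kernel driver
ro.lmk.use_minfree_levels=true
# spend a little more time picking the best victim instead of the first eligible one
ro.lmk.kill_heaviest_task=true
# back off for 100 ms after each kill
ro.lmk.kill_timeout_ms=100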
vfs_cache_pressure is a variable that controls the tendency of the kernel to reclaim the memory used for caching VFS (Virtual File System) objects, versus pagecache and swap. Increasing this value increases the rate at which VFS caches are reclaimed.
At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim.
Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions.
Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.
Increasing vfs_cache_pressure significantly beyond 100 may have negative performance impact. Reclaim code needs to take various locks to find freeable directory and inode objects. With vfs_cache_pressure=1000, it will look for ten times more freeable objects than there are.
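For example, from a root shell:

cat /proc/sys/vm/vfs_cache_pressure          # kernel default is 100
echo 150 > /proc/sys/vm/vfs_cache_pressure   # reclaim dentries/inodes a bit more eagerly (the multitasking profile value)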
Regarding vfs_cache_pressure there are two schools of thought:
HIGH: run it between 100 and 200 - your device will drop caches as necessary, probably dropping them early or just (too?) often.
LOW: run it at around 20 or even 0, but then run a cron.d job every 2 hours or so to "manually" drop caches (see the sketch below); once you see how it behaves, you can stretch the interval between drops.
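A minimal sketch of the LOW approach, assuming a rooted device and some way to run a script periodically (busybox crond, an init.d hook, or an automation app):

#!/system/bin/sh
# drop_caches.sh - schedule every ~2 hours to start with, then stretch the interval
sync                                # write dirty pages out to storage first
echo 3 > /proc/sys/vm/drop_caches   # 3 = drop pagecache + dentries + inodes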
Swapping to zRAM = a bit more usable RAM (the compression ratio used to be around 30-50%, so 500MB of zRAM used to hold about 1000MB of uncompressed, "normal" RAM content) at the cost of CPU cycles at high frequency, so it will affect your battery for sure.
Many people say this is not true. But if you think about it: even though the SoCs in today's phones are quick and efficient and even optimized for this kind of job (LZO or LZ4 compression), the CPU is still used to swap back and forth to RAM. And it is not only the compression work: the memory tables have to be maintained, Contiguous Memory Allocation (CMA) becomes harder to keep up, and when an app needs a big contiguous region, movable pages have to be moved (migrated) out of that area, probably triggering even more zRAM activity, and so on.
This can sometimes cause lag: the CPU should only be loading your apps and keeping the visible content smooth, but in the background it has to do a lot of work that is a byproduct of using zRAM.
On a desktop PC this will more than likely not be an issue at all, because it is not portable and its power consumption and CPU power can be considered "endless" compared to a handheld device like our phone.
I'm not saying zRAM is bad; I have to emphasize that it is way better than swapping to flash. Using zRAM can be a wise choice, even on handheld devices (low-RAM devices, maybe).
If you don't want to disable zRAM and want to keep the option of using it when you run out of RAM, I advise setting vm.swappiness to 10 or even 5, so that swapping to zRAM starts much later than with the default value of 60.
On systems with at least 3GB of RAM I advise not using zRAM at all. The Qualcomm init scripts set zRAM to 1GB on our device (with 3GB of RAM; on the 2GB RAM version the default zRAM size is 512MB).
Using a Magisk module (like Swap Torpedo) is the better choice here, because it removes zRAM at a really early stage of boot, so nothing is in zRAM yet when it is destroyed. Doing it with a kernel settings app (like Kernel Adiutor, SmartPack Kernel Manager, EX Kernel Manager, Franco Kernel Manager or Darkness Control) means the app only starts its work after boot has finished, so zRAM already contains data (from system apps and apps auto-started at boot); destroying it then not only takes some time but also costs extra CPU power (and thus battery).
PS: vm.swappiness is the variable used to define how aggressively the kernel swaps out anonymous memory relative to pagecache and other caches. Increasing the value increases the amount of swapping. The default value is 60.
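A sketch of both options from a root shell; /dev/block/zram0 is the usual device node here but still an assumption, so check /proc/swaps first:

cat /proc/swaps                     # list the active swap devices
swapoff /dev/block/zram0            # stop swapping to zRAM
echo 1 > /sys/block/zram0/reset     # free the compressed pages and reset the device
# ...or keep zRAM but make the kernel reluctant to use it:
echo 10 > /proc/sys/vm/swappiness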
min_free_kbytes parameter:
This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size.
Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024KB, your system will become subtly broken, and prone to deadlock under high loads.
Setting this too high will OOM your machine instantly. Some would say to decrease it to have more free RAM, but that will result in lag whenever an app (a game, etc.) raises a high demand for memory.
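Given how easy this is to get wrong in both directions, it is worth just checking what value the ROM ships with:

cat /proc/sys/vm/min_free_kbytes    # leave it alone unless you know why you are changing it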
Another tunable is extra_free_kbytes. Info from source.android.com:
"A high value will increase the amount of memory that the kernel tries to keep free, reducing allocation time and causing the lowmemorykiller to kill earlier.
A low value allows more memory to be used by processes but may cause more allocations to block waiting on disk I/O or lowmemorykiller."
- and -
"0 uses the default value chosen by ActivityManager.
A positive value will increase the amount of memory that the kernel tries to keep free, reducing allocation time and causing the lowmemorykiller to kill earlier.
A negative value allows more memory to be used by processes but may cause more allocations to block waiting on disk I/O or lowmemorykiller.
Directly added to the default value chosen by ActivityManager based on screen size."
I would use 0 if I were you.
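A guarded one-liner for that, since extra_free_kbytes only exists on kernels carrying the Android patch:

[ -f /proc/sys/vm/extra_free_kbytes ] && echo 0 > /proc/sys/vm/extra_free_kbytes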
Dirty Ratio (vm.dirty_ratio) and Dirty Background Ratio (vm.dirty_background_ratio):
http://trik-tutorialsmartphone.blogspot.com/2013/12/android-battery-saving-tips-with-edit.html
These control how often the kernel writes cached data to disk (disk in this case being the internal storage of the handheld, not the external SD card).
When applications write data, Linux does not write it directly to disk; it lands in the kernel's cache in system memory first, and the kernel decides when and how the data gets written to the disk.
Both sysctls are percentages: the higher the percentage you enter, the longer the data waits before being written to disk; conversely, the lower the value, the more often data is flushed to disk.
They are percentages of the RAM in the device, so I would say setting vm.dirty_ratio to 5 (about 150MB) and vm.dirty_background_ratio to 20 (about 600MB) is a good choice on our device (direct - and blocking - I/O only starts after 150MB / 600MB of dirty I/O cache, respectively) - the system will flush the cache to storage within about 30 seconds anyway #FIXME
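Applying the values from this paragraph from a root shell would look like this; dirty_expire_centisecs is shown only because it is the ~30-second write-back limit referred to above (3000 centiseconds by default):

echo 5 > /proc/sys/vm/dirty_ratio
echo 20 > /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_expire_centisecs   # 3000 = dirty data is written out after ~30s anyway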
Bedtime reading on Linux VM variables:
https://www.kernel.org/doc/Documentation/sysctl/vm.txt