I've added a file /vendor/random.data with 4 GB of random data from /dev/urandom. We are comparing block sizes of 2K, 4K, 8K, and 16K for ext4 and erofs.
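The test file can be generated with dd. A minimal sketch; the path /tmp/random.data and the 4 MiB size here are stand-ins for illustration, while the actual experiment wrote 4 GB (count=4096) to /vendor/random.data:

```shell
# Stand-in for the real setup: the experiment used bs=1M count=4096
# (4 GB) at /vendor/random.data; a small /tmp file is used here.
dd if=/dev/urandom of=/tmp/random.data bs=1M count=4
```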
For erofs, we are also setting BOARD_EROFS_PCLUSTER_SIZE := 262144, following the Google documentation. We will compare lz4hc, lz4, and disabling compression entirely.
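In an AOSP BoardConfig.mk this corresponds to something like the fragment below. BOARD_EROFS_PCLUSTER_SIZE is from the text above; the vendor-image and compressor variables are standard AOSP build flags, though exact names can vary by branch:

```makefile
# Sketch of the erofs build configuration, assuming a recent AOSP branch.
BOARD_VENDORIMAGE_FILE_SYSTEM_TYPE := erofs
BOARD_EROFS_COMPRESSOR := lz4hc        # or lz4; leave unset to disable compression
BOARD_EROFS_PCLUSTER_SIZE := 262144    # 256 KiB physical cluster, per Google docs
```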
For erofs, I remounted /vendor with -o remount,cache_strategy=disabled to make sure nothing gets cached. The 5.10 kernel on cupid does not seem to support dax=always or the legacy dax flags, nor does it support block deduplication. Those features might yield more accurate and/or better results, but the difference in the numbers is already huge as is.
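One hedged way to confirm what the kernel was built with is to inspect its config. This assumes the kernel exposes /proc/config.gz (CONFIG_IKCONFIG_PROC), which many Android kernels do; otherwise the defconfig has to be checked instead:

```shell
# Count DAX-related options in the running kernel's config, if exposed.
if [ -r /proc/config.gz ]; then
  dax_support=$(zcat /proc/config.gz | grep -c '^CONFIG_FS_DAX=y')
else
  dax_support=unknown
fi
echo "CONFIG_FS_DAX=y entries found: $dax_support"
```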
Also, ext4 shows quite stable read numbers, while erofs with compression fluctuates by ~5-10% across subsequent runs.
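For reference, read throughput per block size can be collected with a simple dd loop. This is only a sketch of one plausible method, not necessarily the one used for the numbers above, and it reads a small stand-in file in /tmp rather than /vendor/random.data:

```shell
# Hypothetical throughput measurement over the four block sizes tested.
FILE=/tmp/bench.data
dd if=/dev/urandom of="$FILE" bs=1M count=4 2>/dev/null

for bs in 2K 4K 8K 16K; do
  sync
  # On a real device, also drop the page cache between runs (needs root):
  #   echo 3 > /proc/sys/vm/drop_caches
  echo "bs=$bs:"
  dd if="$FILE" of=/dev/null bs="$bs" 2>&1 | tail -n 1
done
```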