/*
  Moved here => https://github.com/bric3/java-pmap-inspector
*/
import java.io.IOException;

public class JavaPmapInspector {
    public static void main(String[] args) throws IOException {
        System.err.println("Go to:");
        System.err.println("  https://github.com/bric3/java-pmap-inspector");
    }
}
Hi @hfu5,
This is a good question. I don't have all the pieces, but here's my understanding of the phenomenon.
It appears on 64-bit Linux systems when the program uses the malloc implementation of the glibc. glibc happens to manage memory using a well-known strategy called region-based memory management (a region is also called an arena).
These regions are created by glibc to avoid contention, and their number depends on the number of CPUs. Calling malloc once or twice in the same thread won't stress the native allocator enough; to see multiple arenas, the program really needs to allocate a bunch of data in chunks below the 64 MB threshold, and in a multithreaded fashion.
Honestly, I have only seen those on JVM processes running in production. I couldn't reproduce it locally, but I likely didn't try hard enough; if you succeed I'll be happy to learn how you did it.
I started writing a blog post on this topic a while ago and I need to get back to it.
Can you please explain why `MALLOC_ARENA` has the following format? I tested locally -- calling `malloc` in cpp via JNI, and `pmap` shows results `rw-p` for the block I `malloc`, and the address is continuous.