A large-scale JMeter test can consume significant memory over time if scripts aren’t designed or configured properly. Memory leaks in JMeter often manifest as OutOfMemoryError, excessive GC pauses, or ever-increasing heap usage during long-running tests. This article provides a deep, technical dive into common sources of memory leaks in JMeter scripts, practical scripting best practices to avoid them, and JVM tuning techniques to ensure stable, efficient execution.
Before diving into leak sources, let’s understand how JMeter manages memory:
- Test Plan as a Forest of Objects
  - Each Thread Group spawns N Java threads.
  - Samplers, Pre/Post-Processors, Assertions, Timers, and Listeners are instantiated per thread (or shared, depending on scope).
  - Each sampler’s request and response data (including body and headers) reside in memory until garbage collected.
- Variables and Properties
  - JMeter Variables (vars) and JMeter Properties (props) are held in a java.util.HashMap.
  - Values assigned to variables (e.g., a large JSON payload) remain until explicitly removed or until the thread ends.
  - Unbounded storage in vars is a common leak pattern (a short sketch of the two scopes follows this list).
- Listeners
  - Collectors (e.g., “View Results Tree,” “Aggregate Report,” “Simple Data Writer”) buffer sample results in RAM.
  - Unbounded listeners that accumulate all sample results cause continuous growth of heap usage.
- Script Engines
  - Beanshell and BSF maintain script contexts, classloaders, and compiled script instances.
  - JSR223 with Groovy creates less overhead (scripts can be compiled once and cached) but can still hold references to large objects if misused.
- JVM Heap & GC
  - JMeter runs in a JVM, usually with defaults tuned for “desktop” loads.
  - A long test or a high thread count can saturate the default heap, causing aggressive GC, long pauses, or OutOfMemoryError (OOME).
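For orientation, here is a minimal JSR223 (Groovy) sketch of the two variable scopes described above; the key names are purely illustrative:

// JSR223 Sampler or PostProcessor (Groovy)
// Per-thread scope: visible only to the current virtual user, released when its thread ends
vars.put("sessionId", "abc-123")
log.info("thread-local sessionId = " + vars.get("sessionId"))

// Test-wide scope: shared by every thread and retained for the whole test duration
props.put("testRunId", "run-" + System.currentTimeMillis())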
Below are prevalent patterns and anti-patterns that lead to memory leaks:
- Problem: Assigning entire response bodies or large data structures to JMeter variables without limits.
- Symptoms: Gradual heap usage increase (e.g., from 200 MB → 800 MB over time), culminating in java.lang.OutOfMemoryError: Java heap space.
- Root Cause:
  - Using a Regular Expression Extractor to capture the entire JSON payload instead of just the needed fields.
  - Using vars.put("bigPayload", prev.getResponseDataAsString()).
  - Thread-local variables that are never removed and persist until the thread group completes.
- Best Practice:
  - Only store the minimal necessary fragments (e.g., a single token or ID).
  - If full payload analysis is needed, work with streaming (e.g., wrap prev.getResponseData() in an input stream, parse inline, and discard).
  - Call vars.remove("bigPayload") once the value has been processed (a minimal sketch follows this list).
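A minimal JSR223 PostProcessor sketch of this best practice (Groovy); the accessToken field name is illustrative:

import groovy.json.JsonSlurper

// Parse directly from the response bytes so the raw byte[] can be reclaimed immediately
def json = new JsonSlurper().parse(new ByteArrayInputStream(prev.getResponseData()))
// Store only the small fragment needed by later samplers
vars.put("accessToken", json.accessToken as String)
// Anti-pattern to avoid: vars.put("bigPayload", prev.getResponseDataAsString())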
- Problem: Graphical or memory-based listeners (e.g., “View Results Tree,” “Aggregate Graph”) collect all samples in JVM memory.
- Symptoms: Heap usage steadily climbs as each sample result is stored; “View Results Tree” can easily blow through 2 GB of RAM in large tests.
- Root Cause:
  - Adding listeners under Thread Groups or the Test Plan without writing results to a file (“Save as XML”/“Save as CSV”), so in-memory caches are never cleared.
  - A “Simple Data Writer” that isn’t configured with an output file, so results never reach disk.
- Best Practice:
  - Use a Backend Listener (InfluxDB, Graphite) or a Simple Data Writer with a file output (e.g., a .jtl file) instead of in-memory listeners.
  - Remove “View Results Tree” entirely in non-debug runs. If debugging, limit the number of samples captured, log events to file, and avoid the GUI.
  - If using “Summary Report” or “Aggregate Report,” configure them to write non-graphical, CSV-based output.
  - For essential real-time insights, rely on command-line mode with -l results.jtl and external analysis tools.
- Problem: Beanshell and BSF are interpreted engines with distinct classloaders; each invocation can allocate new class instances, causing classloader leaks.
- Symptoms: Elevated Metaspace usage, or PermGen usage on older JVMs.
- Root Cause:
  - Placing a Beanshell PreProcessor inside a loop without reusing the interpreter (e.g., “Reset bsh.Interpreter before each call” enabled), so a new interpreter is created on every iteration.
  - Creating a new engine inside the script (e.g., new org.apache.bsf.BSFManager()) or launching separate script instances each iteration instead of caching.
- Best Practice:
  - Prefer JSR223 with Groovy (Language: groovy). Groovy scripts are compiled once (if “Cache compiled script if available” is checked) and reused.
  - Example JSR223 PreProcessor snippet:

// Compiled once and cached, then reused on every iteration
def token = prev.getResponseDataAsString().find(/"token":"([^"]+)"/) { full, t -> t }
vars.put("authToken", token)

  - If Beanshell must be used, enable “Cache compiled script if available” and avoid dynamic class definitions inside the script.
- Problem: Using the HTTP Cache Manager incorrectly or sharing large data structures across threads leads to retention of objects beyond their needed scope.
- Symptoms: Heap not freed between virtual-user iterations; memory usage spikes with each iteration.
- Root Cause:
  - Adding too many entries to the Cache Manager (e.g., storing entire HTML, JSON, or binary attachments).
  - Storing complex Java objects in props (test-wide properties) that persist for the test duration.
- Best Practice:
  - Restrict the Cache Manager to only the necessary resources (e.g., images, CSS, static files).
  - For dynamic data, use vars, but remove entries promptly once used:

// After using the variable
vars.remove("temporaryData")

  - Do not put large collections into props. If cross-thread sharing is needed, store minimal keys or IDs, not payloads (a short sketch follows this list).
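A hedged JSR223 (Groovy) sketch of that cross-thread guidance; the property and variable names are illustrative:

// Share only a small key across threads, never the payload
def userId = vars.get("userId")          // set earlier by an extractor (illustrative)
if (userId != null) {
    props.put("sharedUserId", userId)    // test-wide, so keep it tiny
}
// Anti-pattern: props.put("allResponses", largeList) would be retained for the whole test
// Another thread group can read the key back with props.get("sharedUserId")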
- Problem: Using Post-Processors (e.g., Regular Expression Extractor, JSON Extractor) to capture entire JSON or XML payloads repeatedly, storing them into lists or JMeter variables across iterations.
- Symptoms: Post-Processors keep references to entire responses if misconfigured.
- Root Cause:
  - Storing matched substrings under variable names that are never removed.
  - In a JSR223 PostProcessor, appending to vars.getObject("myList") each iteration without ever clearing the list.
- Best Practice:
  - Extract only the fields required.
  - If repeated capturing is needed, store only the latest value or use ephemeral in-script variables (e.g., a local def tmp = …), as sketched below.
  - Avoid maintaining global lists; prefer streaming or writing intermediate results to external files.
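A minimal JSR223 PostProcessor sketch (Groovy) of keeping only the latest value; the orderId field is illustrative:

// Local temp variable: discarded as soon as the script ends
def body = prev.getResponseDataAsString()
def m = (body =~ /"orderId":"([^"]+)"/)
if (m.find()) {
    // Overwrites last iteration's value instead of growing a list
    vars.put("latestOrderId", m.group(1))
}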
- Problem: Test plans lack a “tearDown Thread Group” or tear-down logic to free reusable resources.
- Symptoms: Even when threads finish, certain objects linger (especially static caches created in JSR223 scripts), causing memory fragmentation.
- Root Cause:
  - Not invoking vars.clear() or custom cleanup scripts.
  - Using plugins or sampler implementations that allocate connections or large buffers that are never closed.
- Best Practice:
  - Add a tearDown Thread Group with a JSR223 Sampler that explicitly removes variables:

vars.remove("authToken")
vars.clear()
// If custom caches or file handles were opened:
MyCustomCache.instance.clear()

  - Close any file handles, database connections, or HTTP connections opened in JSR223 scripts (see the sketch below).
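A hedged sketch of that cleanup (Groovy, JSR223 Sampler in the tearDown Thread Group); it assumes a shared writer was stored in props earlier under the illustrative key sharedResultWriter:

def writer = props.get("sharedResultWriter")
if (writer instanceof Writer) {
    // Flush and release the file handle opened earlier in the test
    writer.flush()
    writer.close()
    props.remove("sharedResultWriter")
}
vars.clear()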
Below are actionable best practices when authoring JMeter test plans and scripts:
- Reasoning:
  - Beanshell/BSF interpret code at runtime; each invocation may create new classloader instances, leading to PermGen leaks on Java 7 and earlier or Metaspace growth on Java 8+.
  - Groovy (JSR223) allows compilation and caching of scripts, producing bytecode that executes faster and with less classloader churn.
- Configuration:
  - In any PreProcessor/PostProcessor/Listener, select Language: groovy.
  - Check “Cache compiled script if available” (JSR223) to ensure one-time compilation.
- Example:

// JSR223 Sampler (Groovy)
// Extract a token from JSON and store it in a JMeter variable
import groovy.json.JsonSlurper

def response = prev.getResponseDataAsString()
def json = new JsonSlurper().parseText(response)
vars.put("token", json.accessToken)
- Only extract the minimal data needed for subsequent requests (e.g., userID, sessionID, token).
- If you must process a large payload, parse it and discard it, or write it to disk instead of keeping it in memory:

// Instead of storing the entire response string, parse directly from the bytes
import groovy.json.JsonSlurper

def stream = new ByteArrayInputStream(prev.getResponseData())
def parser = new JsonSlurper()
def jsonTree = parser.parse(stream)
// Use the required fields, then let GC reclaim the raw response data
- Never attach “View Results Tree” or “View Results in Table” in non-debug runs.
- Use Simple Data Writer to write raw results to a .jtl file. In jmeter.properties (or user.properties), specify:

# Save only required fields to minimize file size and memory usage
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.response_data.on_error=false
jmeter.save.saveservice.samplerData=false
jmeter.save.saveservice.url=false
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.latency=true
- If real-time metrics are needed, configure a Backend Listener to push to InfluxDB/Grafana or Graphite.
- After a JSR223 Sampler or PostProcessor is done with a JMeter variable, call:

vars.remove("temporaryVar")

- In a tearDown Thread Group, clear all variables once test iterations finish:

// JSR223 Sampler in tearDown Thread Group
vars.clear()

- This explicitly drops the references so GC can reclaim memory sooner.
- HTTP Cache Manager: limit it to the necessary resources (images, static files) and cap the maximum number of cached elements to keep its memory footprint small.
- Cookie Manager: enable “Clear cookies each iteration?” if cookies aren’t needed across loops.
- HTTP Samplers: unchecking “Use KeepAlive” prevents connection pooling from holding sockets indefinitely (though this trades CPU overhead for memory), as sketched below.
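If the keep-alive setting needs to be toggled per request rather than through the GUI checkbox, a hedged JSR223 PreProcessor sketch (Groovy) might look like this:

import org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase

// 'sampler' is the element about to run; only HTTP samplers expose the keep-alive flag
if (sampler instanceof HTTPSamplerBase) {
    sampler.setUseKeepAlive(false)
}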
- GUI mode adds substantial overhead (GUI components, real-time chart rendering, the results tree model).
- Always run high-scale tests in non-GUI mode:

jmeter -n -t testplan.jmx -l result.jtl -Jjmeterengine.remote.system.exit=true

- GUI overhead can distort memory measurements and mask script leaks.
- Append the following JVM flags to the JMeter startup script (jmeter or jmeter.bat):

-Xlog:gc*:gc.log:time,uptime,level,tags
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/dumps

  (-Xlog uses the JDK 9+ unified logging syntax; on JDK 8, use -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintHeapAtGC instead.)
- gc.log will record each GC event’s timestamp, heap occupancy before/after, and pause duration.
- Use tools like GCViewer or GCMV to visualize the log and spot “heap never shrinks” patterns.
- Launch JMeter with remote JMX enabled:

jmeter -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=8999 \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false

- Connect VisualVM or Java Mission Control to localhost:8999.
- Monitor:
  - Heap usage over time (look for “steady climb” patterns).
  - Thread count (should match the expected thread groups; no runaway threads).
  - Classes loaded/unloaded (abnormally high class-load rates may indicate script engine leaks).
- Periodically snapshot heap usage with:

jstat -gcutil <pid> 1000

  - Observe the Eden (E), Old (O), and Metaspace (M) utilization columns.
  - If old-generation occupancy never drops, suspect a leak.
- Trigger a heap dump for offline analysis:

jmap -dump:live,format=b,file=heap_dump.hprof <pid>

- Open heap_dump.hprof in Eclipse Memory Analyzer (MAT).
- Run the Leak Suspects Report to find dominator-tree roots.
- Look for large retained sets caused by JMeter classes:
  - Instances of org.apache.jmeter.reporters.* (listeners holding onto SampleResult objects).
  - Groovy script contexts, e.g., org.codehaus.groovy.vmplugin.v7.Java7.
  - java.util.HashMap$Node[] arrays associated with JMeter variables (vars).
- Identify which test element holds onto the most memory, then correlate that with its usage in the test plan.
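To complement the external tools above, heap occupancy can also be logged from inside the test itself. A hedged sketch (Groovy, e.g., in a JSR223 element fired periodically):

import java.lang.management.ManagementFactory

// Log current heap occupancy so a "steady climb" shows up in jmeter.log
def heap = ManagementFactory.getMemoryMXBean().heapMemoryUsage
log.info("Heap used: ${heap.used >> 20} MB of ${heap.max >> 20} MB")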
- As memory usage climbs:
  - Throughput (requests/sec) often degrades as GC pause times lengthen.
  - 95th-percentile response times spike due to stop-the-world (STW) events.
- Plot GC pauses vs. throughput over the test duration to verify the impact. If throughput steadily decreases while heap usage keeps rising, you have a memory leak.
Proper JVM tuning can mitigate out-of-memory situations and reduce GC pauses during long tests.
- Rule of thumb: Allocate enough headroom for the expected number of threads, data size, and listeners.
- Example: a 20,000-thread distributed test that parses JSON responses might need a 4–8 GB heap.

-Xms2g -Xmx4g

- Setting -Xms equal to -Xmx avoids heap-resizing pauses at runtime.
- G1GC is recommended for large heaps (>4 GB) and JDK 11+. It offers:
  - Concurrent marking cycles.
  - Predictable pause targets (-XX:MaxGCPauseMillis=200).
- Example flags for G1GC:

-XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45

- CMS (older, JDK 8) can be used for smaller heaps (<4 GB), configured as:

-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=70
- For JDK 8+:

-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m

- Rising Metaspace usage often indicates a script engine (Groovy/Beanshell) loading lots of classes. Monitor with jstat -gc and raise the limit if necessary.
- NewRatio: ratio of the old generation to the young generation (Eden + Survivors); e.g., -XX:NewRatio=2 makes the young generation half the size of the old generation.
- SurvivorRatio: commonly -XX:SurvivorRatio=8 (Eden is 8× the size of each survivor space).
- Target pause: -XX:MaxGCPauseMillis=200
- Example:

-XX:NewRatio=3 -XX:SurvivorRatio=6 -XX:MaxGCPauseMillis=150
- Default thread stack size (e.g., 1 MB) × number of threads ≈ total native memory consumed by thread stacks.
- If you see “OutOfMemoryError: unable to create new native thread,” reduce the stack size:

-Xss512k

- But beware: complex JSR223 scripts may require deeper stacks.
- Control parallelism:

-XX:ParallelGCThreads=4   # threads used for stop-the-world (parallel) GC work
-XX:ConcGCThreads=2       # threads used for concurrent GC phases

- For multi-core machines (≥8 cores), increase ParallelGCThreads to roughly (#cores − 1). Monitor CPU during GC to avoid excessive CPU usage.
Below is a sample snippet for the Unix jmeter startup script, placed right after the default HEAP="-Xms1g -Xmx1g" line (adapt the same flags for jmeter.bat on Windows):
# GC Logging
GC_LOG_DIR="/path/to/logs"
mkdir -p "$GC_LOG_DIR"
GC_LOG_FILE="$GC_LOG_DIR/jmeter-gc-$(date +%Y%m%d_%H%M%S).log"
HEAP="-Xms2g -Xmx4g"
G1GC_OPTS="-XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-XX:InitiatingHeapOccupancyPercent=45 \
-XX:ParallelGCThreads=4 \
-XX:ConcGCThreads=2"
METASPACE="-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m"
GC_LOG_OPTS="-Xlog:gc*,safepoint:file=$GC_LOG_FILE:time,uptime,level,tags \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=/path/to/dumps"
# On JDK 8, replace the -Xlog option above with: -Xloggc:$GC_LOG_FILE -XX:+PrintGCDetails -XX:+PrintHeapAtGC
# Combining flags:
JVM_ARGS="$HEAP $G1GC_OPTS $METASPACE $GC_LOG_OPTS"
export JVM_ARGS
# Then proceed to launch JMeter:
# exec $JMETER_HOME/bin/jmeter "$@"
- Explanation:
  - -Xms2g -Xmx4g: initial/maximum heap.
  - G1GC with a 200 ms pause target.
  - GC logging to a timestamped file, plus a heap dump on OutOfMemoryError.
Scenario:
- The test plan uses a Beanshell PostProcessor to parse JSON and store the entire response in a JMeter variable called jsonPayload.
- After parsing, the script appends the entire JSON to a list held in vars.getObject("payloadList").
- The plan includes “View Results Tree” to debug every sampler.
- There is no tear-down cleanup.
Beanshell PostProcessor code (faulty):
import org.apache.jmeter.protocol.http.sampler.HTTPSampleResult;
import org.json.JSONObject;
// Capture full response
String response = prev.getResponseDataAsString();
// Store entire JSON in a JMeter variable
vars.put("jsonPayload", response);
// Append to list for later analysis
Object obj = vars.getObject("payloadList");
if (obj == null) {
java.util.List newList = new java.util.ArrayList(); // Beanshell has no generics support
newList.add(response);
vars.putObject("payloadList", newList);
} else {
((java.util.List) obj).add(response);
}
// Extract a token
JSONObject json = new JSONObject(response);
String token = json.getString("token");
vars.put("token", token);
Problems Identified:
- The jsonPayload variable holds the entire response on every iteration.
- payloadList grows unbounded, accumulating every response for the duration of the test.
- Beanshell interpreter overhead: the script is re-interpreted on every iteration (Beanshell cannot cache compiled scripts the way JSR223/Groovy can).
- “View Results Tree” collects full response bodies for each sampler.
- No tear-down cleanup: payloadList and jsonPayload are never removed.
Goals:
- Extract only the token value.
- Avoid storing the full response.
- Use JSR223 (Groovy) with script caching.
- Remove token after use in the tear-down.
- Eliminate the unbounded list; if data accumulation is needed, write to disk.
Refactored JSR223 PostProcessor (Groovy):
import groovy.json.JsonSlurper
// Parse JSON response as stream
def parser = new JsonSlurper()
def json = parser.parse(new ByteArrayInputStream(prev.getResponseData()))
// Extract only the token
def token = json.token
vars.put("token", token)
// OPTIONAL: Write entire response to disk if persistent storage is needed
def threadNum = ctx.getThreadNum()
def samplerName = ctx.getCurrentSampler().getName()
def timeStamp = System.currentTimeMillis()
def filePath = "/path/to/output/responses/response_${threadNum}_${samplerName}_${timeStamp}.json"
new File(filePath).withWriter { writer ->
writer.write(prev.getResponseDataAsString())
}
// By writing to disk, we avoid holding full payloads in heap
- Key Changes:
  - Groovy (JSR223): faster, compiled once, reduces classloader churn.
  - Streamed parsing: uses a ByteArrayInputStream so the raw byte[] can be GC’ed as soon as it is parsed.
  - Minimal variable storage: only token is stored in vars.
  - Persistent storage to disk: if the entire response is needed for analysis, write it to a file instead of retaining it in memory.
Tear-Down JSR223 Sampler (Groovy) in “tearDown Thread Group”:
// Remove any variables to free memory
vars.remove("token")
// If any other custom caches or data structures used, clear them here
// No need to delete response files on disk; they can be cleaned later
Listener Configuration Changes:
- Remove “View Results Tree” completely.
- Instead, use a Simple Data Writer:
  - In the GUI, add Listener → Simple Data Writer.
  - Configure Filename: ./results/results.jtl
  - In user.properties or jmeter.properties, ensure:

jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.response_data.on_error=false

- This ensures sample results go directly to disk, avoiding in-memory storage.
Description: A test uploads multi‐MB video files every minute for 48 hours, simulating a media ingestion pipeline.
- Original Issue:
  - Large file contents buffered in memory by the HTTP Sampler until request completion.
  - PostProcessor stored the MIME body in a variable for verification.
  - Heap usage grew ~1 GB every hour; the test crashed after 6 hours with an OutOfMemoryError.
- Remediation:
  - Use file-to-string reads (e.g., the __FileToString function) sparingly; read files in a streaming manner (FileInputStream) so JMeter doesn’t buffer the entire file in RAM.
  - Remove PostProcessor storage of file content; instead, verify status codes only (200, 201).
  - If a checksum is needed, compute it via streaming, e.g., with Java’s DigestInputStream (see the sketch after this list).
  - Tuned JVM: increased heap to 6 GB (-Xms4g -Xmx6g), switched to G1GC with -XX:MaxGCPauseMillis=300.
  - Result: Heap stabilized around ~3 GB; the test ran the full 48 hours without a leak.
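A hedged Groovy sketch of that streaming checksum (JSR223; the file path and variable name are illustrative):

import java.security.DigestInputStream
import java.security.MessageDigest

def digest = MessageDigest.getInstance("SHA-256")
new File("/path/to/upload/video.mp4").withInputStream { input ->
    def dis = new DigestInputStream(input, digest)
    byte[] buffer = new byte[8192]
    // Read-through only; the file bytes are never accumulated in memory
    while (dis.read(buffer) != -1) { }
}
vars.put("uploadChecksum", digest.digest().encodeHex().toString())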
Description: Running a distributed test with 50 slaves in GUI mode to monitor debug logs in real time.
- Original Issue:
  - Each slave’s GUI consumed ~400 MB.
  - The master’s heap kept growing due to “View Results Tree” in GUI mode.
  - Coordination messages (RMI calls) also buffered large sample objects on the master.
- Remediation:
  - Switch slaves to non-GUI (headless) mode:

jmeter-server -Djava.rmi.server.hostname=<slave-ip>

  - Have the slaves return only aggregated metrics to the master (e.g., mode=Statistical in jmeter.properties) to minimize RMI overhead.
  - Open the master GUI only on a separate monitoring instance, not as part of the test coordination.
  - Reduce the master’s listeners to a Backend Listener pushing metrics to Grafana.
  - JVM tuning on the master: -Xms3g -Xmx3g, -XX:+UseG1GC, -XX:MaxGCPauseMillis=200.
- Result:
  - Slaves consumed ~200 MB each (headless).
  - The master’s heap peaked around 2.5 GB and stabilized.
  - No OutOfMemoryError during a 4-hour, 20k-thread run.
Description: A microservices ecosystem returning nested JSON payloads of ~150 KB per response, with 10,000 threads calling the endpoint every second.
- Original Issue:
  - A JSON Extractor captured the entire payload into variables.
  - A JSR223 PostProcessor appended payloads into a list for batch processing.
  - After ~20 minutes, the heap reached 8 GB and the test crashed.
- Remediation:
  - Extract only the required fields (e.g., user.id, order.total) using JSON Path (a short sketch follows below).
  - Stream JSON parsing with JsonSlurper and discard the raw payload immediately.
  - If full JSON content is needed for business logic, write it to a temporary file or an external message queue.
  - Reduced the WebSocket Sampler to fetch only delta changes, not the full payload each iteration.
  - Tuned JMeter properties:

# Turn off duplicate sampler result storage
jmeter.save.saveservice.assertion_results_failure_message=false
jmeter.save.saveservice.samplerData=false

  - JVM settings: -Xms4g -Xmx6g, -XX:+UseG1GC, -XX:InitiatingHeapOccupancyPercent=40.
- Outcome: Sustained load for 1 hour with a stable heap (~3 GB) and average GC pauses around 50 ms.
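A hedged Groovy sketch of the minimal-field extraction used in this case study (the field names come from the example above):

import groovy.json.JsonSlurper

// Parse from bytes and keep only the two fields later samplers need
def json = new JsonSlurper().parse(new ByteArrayInputStream(prev.getResponseData()))
vars.put("userId", json.user.id as String)
vars.put("orderTotal", json.order.total as String)
// The ~150 KB payload itself is never placed in vars or props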
- Memory leaks in JMeter typically stem from:
  - Holding onto large payloads in JMeter variables or caches.
  - Unbounded listeners buffering all sample results in JVM memory.
  - Outdated script engines (Beanshell/BSF) loading redundant classes.
  - Missing tear-down or explicit cleanup steps.
- Scripting best practices:
  - Use JSR223 with Groovy (cached compilation).
  - Extract minimal data; refrain from storing entire responses in vars.
  - Write necessary large outputs to disk rather than keeping them in memory.
  - Explicitly remove variables (vars.remove() or vars.clear()) in the tear-down.
- Listener configuration:
  - Replace GUI-based listeners with Simple Data Writer or a Backend Listener.
  - Configure jmeter.properties to save only essential fields and avoid response bodies unless absolutely needed.
- JVM tuning:
  - Size the heap appropriately (-Xms/-Xmx) and keep the two values equal to avoid resizing pauses.
  - Use G1GC (JDK 11+) or CMS (JDK 8) depending on heap size.
  - Adjust Metaspace (-XX:MetaspaceSize, -XX:MaxMetaspaceSize) to accommodate script engines.
  - Monitor with GC logs, VisualVM, jstat/jmap, and heap dump analysis (MAT).
- Monitoring and diagnosis:
  - Enable GC logging and analyze it with tools like GCViewer.
  - Use VisualVM/JMC for live monitoring of heap, threads, and class loading.
  - Periodically capture heap dumps and analyze suspicious retention trees.
- Iterative validation:
  - Run smaller-scale tests (e.g., 10% of the thread count) first to verify there are no memory anomalies.
  - Gradually ramp up, monitoring heap usage and GC pause times.
  - Use distributed, non-GUI mode for larger runs.
By combining these scripting best practices, listener configurations, and JVM tuning recommendations, you can significantly reduce memory leak risks and ensure stable, efficient JMeter test executions—even for large, long‐running performance scenarios.