# Base settings and GC logging
-server -XX:+AlwaysPreTouch # The first should be the default, but we make it explicit; the second pre-zeroes memory-mapped pages on JVM startup -- improves runtime performance
# -Xloggc:gc-%t.log # CUSTOMIZE LOCATION HERE - $path/gc-%t.log -- the %t in the GC log file path gives us a new file with each JVM restart
-XX:NumberOfGCLogFiles=5 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m # Limits the number of files, logs to folder
-XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCCause
-XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy # Gather info on object age & reference GC time for further tuning if needed
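# Example (a sketch): the logging flags above combined into a single invocation. The log path and
# the jenkins.war location are placeholders -- adjust for your install:
#   java -server -XX:+AlwaysPreTouch -Xloggc:/var/log/jenkins/gc-%t.log \
#     -XX:NumberOfGCLogFiles=5 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m \
#     -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCCause \
#     -XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy \
#     -jar jenkins.war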
# G1-specific settings -- probably should be the default for multi-core systems with >2 GB of heap (below that, the default collector is probably fine)
-XX:+UseG1GC
-XX:+UseStringDeduplication
-XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 # Prevents G1 undersizing the young gen, which otherwise causes a cascade of issues
-XX:+ParallelRefProcEnabled # Parallelize reference processing, reducing young and old GC times. We use a LOT of weak references, so this should have a big impact.
-XX:+ExplicitGCInvokesConcurrent # Avoids an explicit System.gc() call triggering a full GC; triggers a concurrent G1 cycle instead
-XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1 # Additional logging for G1 status
-XX:MaxMetaspaceExpansion=64M # Avoids triggering a full GC when we just allocate a bit more metaspace, and metaspace automatically gets cleaned anyway
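# Example (a sketch): the core G1 flags above as one invocation; the 4 GB heap is a placeholder, size it to your host:
#   java -server -XX:+AlwaysPreTouch -Xmx4g -XX:+UseG1GC -XX:+UseStringDeduplication \
#     -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 \
#     -XX:+ParallelRefProcEnabled -XX:+ExplicitGCInvokesConcurrent \
#     -XX:MaxMetaspaceExpansion=64M -jar jenkins.war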
# Tuned CMS GC -- probably not worth using though (prefer G1, or the default parallel GC if the heap is <2 GB or so)
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses # Fixes issues with super-long explicit GC cycles (switches them to concurrent GC, but also enables class unloading so we don't get memory leaks)
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled # Parallelize re-marking and reference processing. The first reduces full GC time; the latter should reduce both GC times -- we use a LOT of weak references, so this should have a big impact.
-XX:+CMSClassUnloadingEnabled # Allows GC of classloaders & classes (important because we do some dynamic loading, and because of Groovy)
-XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark # Forces a young-gen GC before a full GC (reduces full GC duration)
# Based on GC logs but may need refining; somewhere from a 1:1 to 1:3 ratio is best because Jenkins makes mostly young garbage; cap at 3 GB to limit minor GC pause duration
-XX:NewSize=512m -XX:MaxNewSize=3g -XX:NewRatio=2
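# Example (a sketch): the CMS flags above swapped in for the G1 block, if you go that route; the heap size is a placeholder:
#   java -server -Xmx8g -XX:+UseConcMarkSweepGC -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses \
#     -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+CMSClassUnloadingEnabled \
#     -XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark \
#     -XX:NewSize=512m -XX:MaxNewSize=3g -XX:NewRatio=2 -jar jenkins.war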
# Options NOT to use so far:
# * -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=<percent> - Generally suggested, but we don't have good benchmarks to pick a value, so we haven't done it so far
# Settings we *may* want to add for CMS but need more data for:
#-Xms4g # Because realistically, allocating anything under 4 GB just increases startup time -- we need that much RAM. Optional.
#-XX:NewSize=512m -XX:MaxNewSize=3g -XX:NewRatio=2 # Start the young gen somewhere reasonable and let it grow, but not too big (3 GB is a compromise between young GC pause time and full GC pause time, the latter of which we know to be quite good even with a 10+ GB old gen)
# LOW-MEMORY Jenkins, i.e. Jenkins on OpenShift -- I have been using this for a long time to run Jenkins on an AWS t2.micro
# Will run A-OK with -Xmx384m if you limit load & workers; see the sketch below
-XX:+UseSerialGC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30 -XX:MinMetaspaceFreeRatio=10 -XX:MaxMetaspaceFreeRatio=20
# -XX:MaxMetaspaceSize=128m -XX:CompressedClassSpaceSize=128m # Tweak to your needs; this will still run a decent-sized Jenkins. Too low == crashes
# Make sure to limit metaspace too, i.e. -XX:MaxMetaspaceSize=128m, because the JVM allocates a lot of RAM for it otherwise
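# Example (a sketch): a full low-memory invocation; the heap and metaspace caps are illustrative, tune to your load:
#   java -Xmx384m -XX:+UseSerialGC \
#     -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30 \
#     -XX:MinMetaspaceFreeRatio=10 -XX:MaxMetaspaceFreeRatio=20 \
#     -XX:MaxMetaspaceSize=128m -XX:CompressedClassSpaceSize=128m -jar jenkins.war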
Consider -XX:SoftRefLRUPolicyMSPerMB=10 for cases where soft references pile up in memory on big heaps.
Consider setting the initial metaspace size to the max to prevent longer GCs upon resize: -XX:MetaspaceSize=100M
For low-memory, multi-core setups, try the settings here: https://developers.redhat.com/blog/2014/07/22/dude-wheres-my-paas-memory-tuning-javas-footprint-in-openshift-part-2/
Hey, is it correct to set the S0/S1 (survivor space) sizes without switching off UseAdaptiveSizePolicy?
Hello. We're using a heap from 32 GB to 48 GB. Do these G1 configs still apply?
@fiaraujo The G1 configs still apply.
Hi,
We are running WildFly 10 in a production environment (CentOS 7), and below is my JVM configuration.
Total available memory is 32 GB with 16 cores, and the machine runs WildFly alone.
-XX:+DisableExplicitGC -Xms26624m -Xmx26624m -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=2048m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gclog.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2000k
We are using Dynatrace to monitor server performance, and most of the time it shows memory utilization at 100%. Please check and suggest JVM configuration changes, if any are needed, to get better performance. Please guide.
From local profiling of Jenkins with G1: the average object size will be a MB or two, so it's probably a bad idea to use regions <8 MB.
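A sketch, if you want to pin the region size yourself rather than let G1 derive it from the heap: -XX:G1HeapRegionSize=16m (16m is just an illustrative power-of-two value consistent with the >=8 MB advice above; G1 accepts region sizes from 1m to 32m).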