@svanoort
Last active December 20, 2023 02:27
Blessed GC settings for big servers
# Base settings and GC logging
-server -XX:+AlwaysPreTouch # The first should already be the default on server-class machines, but we make it explicit; the second pre-touches (zeroes) memory-mapped pages at JVM startup -- improves runtime performance
# -Xloggc:gc-%t.log # CUSTOMIZE LOCATION HERE - $path/gc-%t.log -- the %t in the gc log file path is so we get a new file with each JVM restart
-XX:NumberOfGCLogFiles=5 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m # Rotates GC logs, capping the number and size of files
-XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCCause
-XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy # gather info on object age & reference GC time for further tuning if needed.
# G1 specific settings -- probably should be default for multi-core systems with >2 GB of heap (below that, default is probably fine)
-XX:+UseG1GC
-XX:+UseStringDeduplication
-XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 # Prevents G1 undersizing young gen, which otherwise causes a cascade of issues
-XX:+ParallelRefProcEnabled # parallelize reference processing, reducing young and old GC times. We use a LOT of weak references, so should have big impact.
-XX:+ExplicitGCInvokesConcurrent # Avoid explicit System.gc() call triggering full GC, instead trigger G1
-XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1 # Additional logging for G1 status
-XX:MaxMetaspaceExpansion=64M # Avoids triggering full GC when we just allocate a bit more metaspace, and metaspace automatically gets cleaned anyway
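Assembled into a single launch line, the G1 settings above might look like the following sketch -- the heap size, log path, and jenkins.war location are placeholders, not part of the gist:

```shell
# Hypothetical assembly of the G1 flags above into one Jenkins launch command.
# -Xmx8g, the log path, and jenkins.war are assumptions -- size/locate for your server.
java -server -XX:+AlwaysPreTouch \
  -Xmx8g \
  -Xloggc:/var/log/jenkins/gc-%t.log \
  -XX:NumberOfGCLogFiles=5 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m \
  -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCCause \
  -XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy \
  -XX:+UseG1GC -XX:+UseStringDeduplication \
  -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 \
  -XX:+ParallelRefProcEnabled \
  -XX:+ExplicitGCInvokesConcurrent \
  -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1 \
  -XX:MaxMetaspaceExpansion=64M \
  -jar jenkins.war
```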
# Tuned CMS GC - probably not worth using though (prefer G1 or default parallel GC if heap is <2 GB or so)
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses # fixes issues with very long explicit GC cycles (switches them to concurrent GC, but also enables class unloading so we don't get memory leaks).
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled # parallelize re-marking and reference processing. The first will reduce full GC time, the latter should reduce both GC times -- we use a LOT of weak references, so should have big impact.
-XX:+CMSClassUnloadingEnabled # allows GC of classloaders & classes (important because we do some dynamic loading and because of groovy).
-XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark # will force young gen GC before full GC (reduces full GC duration)
# Based on GC logs, but may need refining; somewhere between a 1:1 and 1:3 young:old ratio is best because Jenkins creates mostly young garbage. Cap young gen at 3 GB to limit minor GC pause duration.
-XX:NewSize=512m -XX:MaxNewSize=3g -XX:NewRatio=2
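For comparison, the tuned CMS settings above could be assembled into a launch line like this sketch -- heap size and jenkins.war path are placeholders, and these flags should not be combined with -XX:+UseG1GC:

```shell
# Hypothetical assembly of the tuned CMS flags above (mutually exclusive with G1).
# -Xmx8g and jenkins.war are assumptions, not values from the gist.
java -server -XX:+AlwaysPreTouch \
  -Xmx8g \
  -XX:+UseConcMarkSweepGC \
  -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses \
  -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled \
  -XX:+CMSClassUnloadingEnabled \
  -XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark \
  -XX:NewSize=512m -XX:MaxNewSize=3g -XX:NewRatio=2 \
  -jar jenkins.war
```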
# Options NOT to use so far:
# * -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=<percent> - Generally suggested, but we don't have good benchmarks from which to derive a value, so we haven't set it yet.
# Settings we *may* want to add for CMS but need more data for
#-Xms4g # because realistically allocating anything under 4 GB just increases startup time. We need that much RAM. Optional.
#-XX:NewSize=512m -XX:MaxNewSize=3g -XX:NewRatio=2 # start young gen somewhere reasonable and let it grow, but not too big (3 GB is a compromise b/w young GC pause time and the full GC pause time, the latter of which we know to be quite good even with a 10+ GB old gen heap)
# LOW-MEMORY Jenkins use, i.e. Jenkins on OpenShift -- I have been using this for a long time to run Jenkins on an AWS t2.micro
# Will run A-OK with -Xmx384m if you limit load & workers
-XX:+UseSerialGC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30 -XX:MinMetaspaceFreeRatio=10 -XX:MaxMetaspaceFreeRatio=20
# -XX:MaxMetaspaceSize=128m -XX:CompressedClassSpaceSize=128m # Tweak to your needs; this will run a decent-sized Jenkins. Too low == crashes.
# Make sure to limit metaspace too (i.e. -XX:MaxMetaspaceSize=128m), because Jenkins allocates a lot of RAM for it
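Put together, a low-memory launch might look like this sketch -- the jenkins.war path is a placeholder, and -Xmx384m is the figure from the note above:

```shell
# Hypothetical low-memory Jenkins launch (e.g. AWS t2.micro), per the notes above.
# Works only if you limit load & workers; jenkins.war location is an assumption.
java -Xmx384m \
  -XX:+UseSerialGC \
  -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30 \
  -XX:MinMetaspaceFreeRatio=10 -XX:MaxMetaspaceFreeRatio=20 \
  -XX:MaxMetaspaceSize=128m -XX:CompressedClassSpaceSize=128m \
  -jar jenkins.war
```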
@svanoort commented Oct 26, 2016

Metaspace can trigger long (full) GC cycles on hitting the metadata GC threshold with G1 (AD). Metaspace may be 200-1000 MB, depending.

HOWEVER, for AzB systems it does not trigger a full GC with CMS.

@svanoort commented Oct 27, 2016

To test, for example:

Supply a jenkins_home with JENKINS_HOME=./path

java -server -XX:NumberOfGCLogFiles=2 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=100m -Xloggc:gc-%t.log -XX:+UseG1GC -XX:MaxGCPauseMillis=400 -XX:+ParallelRefProcEnabled -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCCause -XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy -XX:MaxMetaspaceExpansion=128M -Dhudson.DNSMultiCast.disabled=true -jar jenkins.war

@svanoort

Using $JENKINS_HOME in -Xloggc:$JENKINS_HOME/gc-%t.log works for the Red Hat RPM; not sure about the Debian package (it looked like it does not).

@svanoort commented Nov 2, 2016

-XX:MaxGCPauseMillis was removed because, after deeper analysis of standard large systems, the 250 ms default seems adequate.

@svanoort commented Nov 2, 2016

From local profiling of Jenkins with G1: average object size will be a MB or two. Probably a bad idea to use regions <8 MB.
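That observation could translate into pinning the region size -- a hedged sketch, where the 8 MB value follows the note above and jenkins.war is a placeholder:

```shell
# Hedged sketch: force 8 MB G1 regions so objects of "a MB or two" stay well
# below the humongous threshold (an object is humongous at >= half a region).
# G1HeapRegionSize must be a power of two between 1m and 32m.
java -XX:+UseG1GC -XX:G1HeapRegionSize=8m -jar jenkins.war
```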

@svanoort commented Nov 3, 2016

Consider: -XX:SoftRefLRUPolicyMSPerMB=10 for cases where we get memory pileup on big heaps

@svanoort commented Nov 19, 2016

Consider setting metaspace initial size to max to prevent longer GCs upon resize - -XX:MetaspaceSize=100M.


@KowalczykBartek

Hey, is it correct to set the S0/S1 sizes without switching off UseAdaptiveSizePolicy?

@fiaraujo

Hello. We're using a heap from 32 GB to 48 GB. Do these G1 configs still apply?

@svanoort

@fiaraujo The G1 configs still apply.

@mahen025 commented Sep 5, 2018

Hi,
We are running WildFly 10 in a production environment (CentOS 7); below is my JVM configuration.
Total available memory is 32 GB with 16 cores, and the machine runs WildFly alone.

-XX:+DisableExplicitGC -Xms26624m -Xmx26624m -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=2048m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gclog.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2000k

We are using Dynatrace to monitor server performance; most of the time it shows memory utilization at 100%. Please check and suggest any JVM configuration changes needed for better performance.

[screenshot: Dynatrace memory utilization]
