@mageddo
Created March 12, 2023 19:26
native image

native-image --version
GraalVM 22.3.1 Java 19 CE (Java Version 19.0.2+7-jvmci-22.3-b12)

-H:±AbortOnBenchmarkCounterOverflow Default: - (disabled).
Abort VM with SIGILL if benchmark counters controlled by the (Generic|Timed|Benchmark)DynamicCounters
option overflow. WARNING: No descriptive error message will be printed! In
case of an overflow, manual inspection of the emitted code is required.
-H:BenchmarkDynamicCounters=... Default: None
Turn on the benchmark counters. The format of this option is:
(err|out),start pattern,end pattern
Start counting when the start pattern matches on the given stream and stop when the end pattern occurs.
You can use "~" to match 1 or more digits.
Examples:
err, starting =====, PASSED in
out,Iteration ~ (~s) begins:,Iteration ~ (~s) ends:
The first pattern matches DaCapo output and the second matches SPECjvm2008 output.
As a more detailed example, here are the options to use for getting statistics
about allocations within the DaCapo pmd benchmark:
-XX:JVMCICounterSize=<value> -XX:-JVMCICountersExcludeCompiler \
-Dgraal.BenchmarkDynamicCounters="err, starting ====, PASSED in " \
-Dgraal.ProfileAllocations=true
The JVMCICounterSize value depends on the granularity of the profiling -
10000 should be sufficient. Omit JVMCICountersExcludeCompiler to exclude
counting allocations on the compiler threads.
The counters can be further configured by the ProfileAllocationsContext option.
We highly recommend the use of -Dgraal.AbortOnBenchmarkCounterOverflow=true to
detect counter overflows eagerly.
-H:CompilationFailureAction=Silent
Specifies the action to take when compilation fails.
The accepted values are:
Silent - Print nothing to the console.
Print - Print a stack trace to the console.
Diagnose* - Retry the compilation with extra diagnostics.
ExitVM - Same as Diagnose except that the VM process exits after retrying.
* If "Diagnose" is set compilation will be retried with extra diagnostics enabled including dumping (see file:doc-files/DumpHelp.txt).
In such a scenario DiagnoseDumpLevel can be used to specify the dump level (DebugContext dump levels) accordingly.
-H:Dump=... Default: None
Filter pattern for specifying scopes in which dumping is enabled.
A filter is a list of comma-separated terms of the form:
<pattern>[:<level>]
If <pattern> contains a "*" or "?" character, it is interpreted as a glob pattern.
Otherwise, it is interpreted as a substring. If <pattern> is empty, it
matches every scope. If :<level> is omitted, it defaults to 1. The term
~<pattern> is a shorthand for <pattern>:0 to disable a debug facility for a pattern.
The default log level is 0 (disabled). Terms with an empty pattern set
the default log level to the specified value. The last
matching term with a non-empty pattern selects the level specified. If
no term matches, the log level is the default level. A filter with no
terms matches every scope with a log level of 1.
Examples of debug filters:
---------
(empty string)
Matches any scope with level 1.
---------
:1
Matches any scope with level 1.
---------
*
Matches any scope with level 1.
---------
CodeGen,CodeInstall
Matches scopes containing "CodeGen" or "CodeInstall", both with level 1.
---------
CodeGen:2,CodeInstall:1
Matches scopes containing "CodeGen" with level 2, or "CodeInstall" with level 1.
---------
Outer:2,Inner:0
Matches scopes containing "Outer" with log level 2, or "Inner" with log level 0. If the scope
name contains both patterns then the log level will be 0. This is useful for silencing subscopes.
---------
:1,Dead:2
Matches scopes containing "Dead" with level 2, and all other scopes with level 1.
---------
Dead:0,:1
Matches all scopes with level 1, except those containing "Dead". Note that the location of
the :1 doesn't matter since it's specifying the default log level so it's the same as
specifying :1,Dead:0.
---------
Code*
Matches scopes starting with "Code" with level 1.
---------
Code,~Dead
Matches scopes containing "Code" but not "Dead", with level 1.
-H:DynamicProxyConfigurationFiles=<string>*
One or several (comma-separated) paths to JSON files that specify lists of interfaces that define Java proxy classes.
The structure is an array of arrays of fully qualified interface names.
Example:
[
["java.lang.AutoCloseable", "java.util.Comparator"],
["java.util.Comparator"],
["java.util.List"]
]
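As an illustration of what this configuration enables, here is a minimal Java sketch of a dynamic proxy creation that the ["java.util.Comparator"] entry above would cover in a native image (the handler logic is purely hypothetical):
  import java.lang.reflect.Proxy;
  import java.util.Comparator;

  public class ProxyDemo {
      public static void main(String[] args) {
          // The interface list passed here must match one of the arrays
          // registered in the dynamic proxy configuration file.
          @SuppressWarnings("unchecked")
          Comparator<String> cmp = (Comparator<String>) Proxy.newProxyInstance(
                  ProxyDemo.class.getClassLoader(),
                  new Class<?>[] { Comparator.class },
                  (proxy, method, methodArgs) ->
                          "compare".equals(method.getName())
                                  ? ((String) methodArgs[0]).compareTo((String) methodArgs[1])
                                  : null);
          System.out.println(cmp.compare("a", "b")); // prints a negative number
      }
  }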
-H:LinkAtBuildTime=<string>*
Require types to be fully defined at image build-time. If used without args, all classes in scope of the option are required to be fully defined.
Using --link-at-build-time without arguments is only allowed on the command line or when embedded in a
native-image.properties file of some zip/jar file on the module-path (but not on the class-path).
In the module-path case, all classes of that module are required to be fully defined at image
build-time. If used without arguments on the command line, all classes are required to be fully
defined at image build-time.
Using --link-at-build-time with arguments is allowed in every scope:
1. On command line
2. Embedded in a native-image.properties file of some zip/jar file on module-path
3. Embedded in a native-image.properties file of some zip/jar file on class-path
If the option is embedded in a native-image.properties file of some zip/jar file, all class-names
and package-names passed to the option have to be found in that zip/jar file. Using
--link-at-build-time with arguments on the command line does not have that restriction.
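A brief sketch of the two command-line forms, assuming a hypothetical application jar app.jar with main class com.example.Main and package com.example.core:
  native-image --link-at-build-time -cp app.jar com.example.Main
  native-image --link-at-build-time=com.example.core -cp app.jar com.example.Main
The first invocation requires all classes to be fully defined at image build-time; the second constrains only the classes in com.example.core.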
-H:LinkAtBuildTimePaths=<string>*
Require all types in given class or module-path entries to be fully defined at image build-time.
This option requires arguments that are of the same type as the
arguments passed via -p (--module-path) or -cp (--class-path):
--link-at-build-time-paths <class search path of directories and zip/jar files>
The given entries are searched and all classes inside are registered as --link-at-build-time classes.
This option is only allowed on the command line, i.e. it will be rejected if it is provided
via the Args of a native-image.properties file embedded in a zip/jar file.
-H:MethodFilter=... Default: None
Pattern for matching methods. The syntax for a pattern is:
SourcePatterns = SourcePattern ["," SourcePatterns] .
SourcePattern = [ "~" ] [ Class "." ] method [ "(" [ Parameter { ";" Parameter } ] ")" ] .
Parameter = Class | "int" | "long" | "float" | "double" | "short" | "char" | "boolean" .
Class = { package "." } class .
Glob pattern matching (*, ?) is allowed in all parts of the source pattern.
The "~" prefix negates the pattern.
Positive patterns are joined by an "or" operator: "A,B" matches anything
matched by "A" or "B". Negative patterns are joined by "and not": "~A,~B"
matches anything not matched by "A" and not matched by "B". "A,~B,~C,D"
matches anything matched by "A" or "D" and not matched by "B" and not
matched by "C".
A set of patterns containing negative patterns but no positive ones contains
an implicit positive "*" pattern: "~A,~B" is equivalent to "*,~A,~B".
Examples of method filters:
---------
*
Matches all methods in all classes.
---------
canonical(CanonicalizerTool;LogicNode;LogicNode)
Matches all methods named "canonical", with the first parameter of type
"CanonicalizerTool", and the second and third parameters of type
"LogicNode".
The packages of the parameter types are irrelevant.
---------
arraycopy(Object;;;;)
Matches all methods named "arraycopy", with the first parameter
of type "Object", and four more parameters of any type. The
packages of the parameter types are irrelevant.
---------
List.set
Matches all methods named "set" in a class whose simple name is "List".
---------
*List.set
Matches all methods named "set" in a class whose simple name ends with "List".
---------
org.graalvm.compiler.nodes.PhiNode.*
Matches all methods in the class "org.graalvm.compiler.nodes.PhiNode".
---------
org.graalvm.compiler.nodes.*.canonical
Matches all methods named "canonical" in classes in the package
"org.graalvm.compiler.nodes".
---------
arraycopy,toString
Matches all methods named "arraycopy" or "toString", meaning that ',' acts
as an "or" operator.
---------
java.util.*.*.,~java.util.*Array*.*
java.util.*.*.,~*Array*.*
These patterns are equivalent and match all methods in the package
"java.util" except for classes that have "Array" in their name.
---------
~java.util.*.*
Matches all methods in all classes in all packages except for anything in
the "java.util" package.
-H:MetricsFile=... Default: None
File to which metrics are dumped per compilation.
A CSV format is used if the file ends with .csv; otherwise a more
human-readable format is used. The fields in the CSV format are:
compilable - method being compiled
compilable_identity - identity hash code of compilable
compilation_nr - where this compilation lies in the ordered
sequence of all compilations identified by
compilable_identity
compilation_id - runtime issued identifier for the compilation
metric_name - name of metric
metric_value - value of metric
-H:NeverInline=<string>*
Pattern for disabling inlining of methods during image generation.
The syntax for a pattern is:
SourcePatterns = SourcePattern ["," SourcePatterns] .
SourcePattern = [ Class "." ] method [ "(" [ Parameter { ";" Parameter } ] ")" ] .
Parameter = Class | "int" | "long" | "float" | "double" | "short" | "char" | "boolean" .
Class = { package "." } class .
Glob pattern matching (*, ?) is allowed in all parts of the source pattern.
Examples of method filters:
---------
visit(Argument;BlockScope)
Matches all methods named "visit", with the first parameter of
type "Argument", and the second parameter of type "BlockScope".
The packages of the parameter types are irrelevant.
---------
arraycopy(Object;;;;)
Matches all methods named "arraycopy", with the first parameter
of type "Object", and four more parameters of any type. The
packages of the parameter types are irrelevant.
---------
org.graalvm.compiler.core.graph.PostOrderNodeIterator.*
Matches all methods in the class "org.graalvm.compiler.core.graph.PostOrderNodeIterator".
---------
*
Matches all methods in all classes
---------
org.graalvm.compiler.core.graph.*.visit
Matches all methods named "visit" in classes in the package
"org.graalvm.compiler.core.graph".
---------
arraycopy,toString
Matches all methods named "arraycopy" or "toString", meaning that ',' acts as an or operator.
-H:PrintGraph=File
Where IdealGraphVisualizer graph dumps triggered by Dump or DumpOnError should be written.
The accepted values are:
File - Dump IGV graphs to the local file system (see DumpPath).
Network - Dump IGV graphs to the network destination specified by PrintGraphHost and PrintGraphPort.
If a network connection cannot be opened, dumping falls back to file dumping.
Disable - Do not dump IGV graphs.
-H:ProfileAllocationsContext=AllocatingMethod
Control the naming and granularity of the counters when using ProfileAllocations.
The accepted values are:
AllocatingMethod - a counter per method
InstanceOrArray - one counter for all instance allocations and
one counter for all array allocations
AllocatedType - one counter per allocated type
AllocatedTypesInMethod - one counter per allocated type, per method
-H:ReflectionConfigurationFiles=<string>*
One or several (comma-separated) paths to JSON files that specify which program elements should be made available via reflection.
The JSON object schema is:
{
String name; // fully qualified class name
boolean allDeclaredConstructors; // include all declared constructors, see Class.getDeclaredConstructors()
boolean allPublicConstructors; // include all public constructors, see Class.getConstructors()
boolean allDeclaredMethods; // include all declared methods, see Class.getDeclaredMethods()
boolean allPublicMethods; // include all public methods, see Class.getMethods()
boolean allDeclaredFields; // include all declared fields, see Class.getDeclaredFields()
boolean allPublicFields; // include all public fields, see Class.getFields()
{
String name; // method name
String[] parameterTypes; // parameter types (optional, use if ambiguous)
}[] methods;
{
String name; // field name
}[] fields;
}[];
Example:
[
{
"name" : "java.lang.Class",
"allDeclaredConstructors" : "true",
"allPublicConstructors" : "true",
"allDeclaredMethods" : "true",
"allPublicMethods" : "true"
},
{
"name" : "java.lang.String",
"fields" : [
{ "name" : "value" },
{ "name" : "hash" }
],
"methods" : [
{ "name" : "<init>", "parameterTypes" : [] },
{ "name" : "<init>", "parameterTypes" : ["char[]"] },
{ "name" : "charAt" },
{ "name" : "format", "parameterTypes" : ["java.lang.String", "java.lang.Object[]"] },
]
},
{
"name" : "java.lang.String$CaseInsensitiveComparator",
"methods" : [
{ "name" : "compare" }
]
}
]
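To connect this configuration to application code, here is a minimal Java sketch of a reflective lookup that the "java.lang.String" entry above keeps working in an image built with -H:ReflectionConfigurationFiles=<path to this file> (purely illustrative):
  import java.lang.reflect.Method;

  public class ReflectionDemo {
      public static void main(String[] args) throws Exception {
          // charAt is registered by name above, so all of its overloads are available.
          Method charAt = Class.forName("java.lang.String").getMethod("charAt", int.class);
          System.out.println(charAt.invoke("native-image", 0)); // prints n
      }
  }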
-H:SerializationConfigurationFiles=<string>*
One or several (comma-separated) paths to JSON files that specify lists of serialization configurations.
The structure is an array of elements specifying the target serialization/deserialization class.
Example:
[
{
"condition":{"typeReachable":"app.DataSerializer"},
"name":"java.util.ArrayList"
}
]
For deserializing lambda classes, the capturing class of the lambda needs to be specified in a separate section of the configuration file, for example:
{
"types": [
{"name":"java.lang.Object"}
],
"lambdaCapturingTypes": [
{"name":"java.util.Comparator"}
]
}
This JSON file format is also used for the serialization deny list.
In rare cases an application might explicitly make calls to
ReflectionFactory.newConstructorForSerialization(Class<?> cl, Constructor<?> constructorToCall)
where the passed `constructorToCall` differs from what would automatically be used during regular serialization of `cl`.
To also support such serialization use cases, it is possible to register serialization for a class with a
custom constructorToCall. For example, to allow serialization of `org.apache.spark.SparkContext$$anonfun$hadoopFile$1`
using the DeclaredConstructor of java.lang.Object as custom targetConstructor the following can be used in
serialization-config.json:
[
{
"condition":{"typeReachable":"org.apache.spark.SparkContext"},
"name":"org.apache.spark.SparkContext$$anonfun$hadoopFile$1",
"customTargetConstructorClass":"java.lang.Object"
}
]
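For context, a minimal Java sketch of the serialization call that the "java.util.ArrayList" entry in the first example above supports, once the (hypothetical) condition class app.DataSerializer is reachable:
  import java.io.FileOutputStream;
  import java.io.ObjectOutputStream;
  import java.util.ArrayList;
  import java.util.List;

  public class SerializationDemo {
      public static void main(String[] args) throws Exception {
          List<String> data = new ArrayList<>(List.of("a", "b"));
          try (ObjectOutputStream out =
                   new ObjectOutputStream(new FileOutputStream("data.ser"))) {
              // Requires java.util.ArrayList to be registered for serialization.
              out.writeObject(data);
          }
      }
  }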
-H:SerializationDenyConfigurationFiles=<string>*
One or several (comma-separated) paths to JSON files that specify lists of serialization configurations.
The structure is an array of elements specifying the target serialization/deserialization class.
Example:
[
{
"condition":{"typeReachable":"app.DataSerializer"},
"name":"java.util.ArrayList"
}
]
For deserializing lambda classes, the capturing class of the lambda needs to be specified in a separate section of the configuration file, for example:
{
"types": [
{"name":"java.lang.Object"}
],
"lambdaCapturingTypes": [
{"name":"java.util.Comparator"}
]
}
This JSON file format is also used for the serialization deny list.
In rare cases an application might explicitly make calls to
ReflectionFactory.newConstructorForSerialization(Class<?> cl, Constructor<?> constructorToCall)
where the passed `constructorToCall` differs from what would automatically be used during regular serialization of `cl`.
To also support such serialization use cases, it is possible to register serialization for a class with a
custom constructorToCall. For example, to allow serialization of `org.apache.spark.SparkContext$$anonfun$hadoopFile$1`
using the DeclaredConstructor of java.lang.Object as custom targetConstructor the following can be used in
serialization-config.json:
[
{
"condition":{"typeReachable":"org.apache.spark.SparkContext"},
"name":"org.apache.spark.SparkContext$$anonfun$hadoopFile$1",
"customTargetConstructorClass":"java.lang.Object"
}
]
-H:SpectrePHTBarriers=None
Select a strategy to mitigate speculative bounds check bypass (aka Spectre-PHT or Spectre V1).
This is an experimental option - execution of untrusted code is not supported by GraalVM CE.
The accepted values are:
None - No mitigations are used in JIT compiled code.
AllTargets - Speculative execution on all conditional branch targets is
stopped using speculative execution barrier instructions.
GuardTargets - Branch targets relevant to Java memory safety are instrumented
with barrier instructions. This option has less performance impact
than AllTargets.
NonDeoptGuardTargets - Same as GuardTargets, except that branches which deoptimize are not
protected since they can not be executed repeatedly and are thus less
likely to be successfully exploited in an attack.
Note that all modes except "None" will also instrument branch target blocks containing UNSAFE memory accesses
with barrier instructions.
-H:±TraceInlining Default: - (disabled).
Enable tracing of inlining decisions.
Output format:
compilation of 'Signature of the compilation root method':
at 'Signature of the root method' ['Bytecode index']: <'Phase'> 'Child method signature': 'Decision made about this callsite'
at 'Signature of the child method' ['Bytecode index']:
|--<'Phase 1'> 'Grandchild method signature': 'First decision made about this callsite'
\--<'Phase 2'> 'Grandchild method signature': 'Second decision made about this callsite'
at 'Signature of the child method' ['Bytecode index']: <'Phase'> 'Another grandchild method signature': 'The only decision made about this callsite.'
-R:±AbortOnBenchmarkCounterOverflow Default: - (disabled).
Abort VM with SIGILL if benchmark counters controlled by the (Generic|Timed|Benchmark)DynamicCounters
option overflow. WARNING: No descriptive error message will be printed! In
case of an overflow, manual inspection of the emitted code is required.
-R:BenchmarkDynamicCounters=... Default: None
Turn on the benchmark counters. The format of this option is:
(err|out),start pattern,end pattern
Start counting when the start pattern matches on the given stream and stop when the end pattern occurs.
You can use "~" to match 1 or more digits.
Examples:
err, starting =====, PASSED in
out,Iteration ~ (~s) begins:,Iteration ~ (~s) ends:
The first pattern matches DaCapo output and the second matches SPECjvm2008 output.
As a more detailed example, here are the options to use for getting statistics
about allocations within the DaCapo pmd benchmark:
-XX:JVMCICounterSize=<value> -XX:-JVMCICountersExcludeCompiler \
-Dgraal.BenchmarkDynamicCounters="err, starting ====, PASSED in " \
-Dgraal.ProfileAllocations=true
The JVMCICounterSize value depends on the granularity of the profiling -
10000 should be sufficient. Omit JVMCICountersExcludeCompiler to exclude
counting allocations on the compiler threads.
The counters can be further configured by the ProfileAllocationsContext option.
We highly recommend the use of -Dgraal.AbortOnBenchmarkCounterOverflow=true to
detect counter overflows eagerly.
-R:CompilationFailureAction=Silent
Specifies the action to take when compilation fails.
The accepted values are:
Silent - Print nothing to the console.
Print - Print a stack trace to the console.
Diagnose* - Retry the compilation with extra diagnostics.
ExitVM - Same as Diagnose except that the VM process exits after retrying.
* If "Diagnose" is set compilation will be retried with extra diagnostics enabled including dumping (see file:doc-files/DumpHelp.txt).
In such a scenario DiagnoseDumpLevel can be used to specify the dump level (DebugContext dump levels) accordingly.
-R:Dump=... Default: None
Filter pattern for specifying scopes in which dumping is enabled.
A filter is a list of comma-separated terms of the form:
<pattern>[:<level>]
If <pattern> contains a "*" or "?" character, it is interpreted as a glob pattern.
Otherwise, it is interpreted as a substring. If <pattern> is empty, it
matches every scope. If :<level> is omitted, it defaults to 1. The term
~<pattern> is a shorthand for <pattern>:0 to disable a debug facility for a pattern.
The default log level is 0 (disabled). Terms with an empty pattern set
the default log level to the specified value. The last
matching term with a non-empty pattern selects the level specified. If
no term matches, the log level is the default level. A filter with no
terms matches every scope with a log level of 1.
Examples of debug filters:
---------
(empty string)
Matches any scope with level 1.
---------
:1
Matches any scope with level 1.
---------
*
Matches any scope with level 1.
---------
CodeGen,CodeInstall
Matches scopes containing "CodeGen" or "CodeInstall", both with level 1.
---------
CodeGen:2,CodeInstall:1
Matches scopes containing "CodeGen" with level 2, or "CodeInstall" with level 1.
---------
Outer:2,Inner:0
Matches scopes containing "Outer" with log level 2, or "Inner" with log level 0. If the scope
name contains both patterns then the log level will be 0. This is useful for silencing subscopes.
---------
:1,Dead:2
Matches scopes containing "Dead" with level 2, and all other scopes with level 1.
---------
Dead:0,:1
Matches all scopes with level 1, except those containing "Dead". Note that the location of
the :1 doesn't matter since it's specifying the default log level so it's the same as
specifying :1,Dead:0.
---------
Code*
Matches scopes starting with "Code" with level 1.
---------
Code,~Dead
Matches scopes containing "Code" but not "Dead", with level 1.
-R:FlightRecorderLogging="all=warning"
Usage: -XX:FlightRecorderLogging=[tag1[+tag2...][*][=level][,...]]
When this option is not set, logging is enabled at a level of WARNING.
When this option is set to the empty string, logging is enabled at a level of INFO.
When this option is set to "disable", logging is disabled entirely.
Otherwise, this option expects a comma separated list of tag combinations, each with an optional wildcard (*) and level.
A tag combination without a level is given a default level of INFO.
Messages with tags that match a given tag combination are set to log at that tag combination's level.
If a tag combination does not have a wildcard, then only messages with exactly the same tags are matched.
Otherwise, messages whose tags are a subset of the tag combination are matched.
Specifying "all" instead of a tag combination matches all tag combinations.
If more than one tag combination matches a message's tags, the rightmost one will apply.
Messages with tags that do not have any matching tag combinations are set to log at a default level of WARNING.
This option is case insensitive.
Available log levels:
[trace, debug, info, warning, error, off]
Available log tags:
[jfr, system, event, setting, bytecode, parser, metadata, dcmd]
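As a hedged runtime sketch (the image name myapp and the recording file are hypothetical, and the image is assumed to have been built with JFR support):
  ./myapp -XX:+FlightRecorder -XX:StartFlightRecording=filename=recording.jfr \
    -XX:FlightRecorderLogging=all=error,jfr+system=debug
This logs everything at ERROR except messages tagged exactly jfr and system, which log at DEBUG, since the rightmost matching tag combination wins.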
-R:MethodFilter=... Default: None
Pattern for matching methods. The syntax for a pattern is:
SourcePatterns = SourcePattern ["," SourcePatterns] .
SourcePattern = [ "~" ] [ Class "." ] method [ "(" [ Parameter { ";" Parameter } ] ")" ] .
Parameter = Class | "int" | "long" | "float" | "double" | "short" | "char" | "boolean" .
Class = { package "." } class .
Glob pattern matching (*, ?) is allowed in all parts of the source pattern.
The "~" prefix negates the pattern.
Positive patterns are joined by an "or" operator: "A,B" matches anything
matched by "A" or "B". Negative patterns are joined by "and not": "~A,~B"
matches anything not matched by "A" and not matched by "B". "A,~B,~C,D"
matches anything matched by "A" or "D" and not matched by "B" and not
matched by "C".
A set of patterns containing negative patterns but no positive ones contains
an implicit positive "*" pattern: "~A,~B" is equivalent to "*,~A,~B".
Examples of method filters:
---------
*
Matches all methods in all classes.
---------
canonical(CanonicalizerTool;LogicNode;LogicNode)
Matches all methods named "canonical", with the first parameter of type
"CanonicalizerTool", and the second and third parameters of type
"LogicNode".
The packages of the parameter types are irrelevant.
---------
arraycopy(Object;;;;)
Matches all methods named "arraycopy", with the first parameter
of type "Object", and four more parameters of any type. The
packages of the parameter types are irrelevant.
---------
List.set
Matches all methods named "set" in a class whose simple name is "List".
---------
*List.set
Matches all methods named "set" in a class whose simple name ends with "List".
---------
org.graalvm.compiler.nodes.PhiNode.*
Matches all methods in the class "org.graalvm.compiler.nodes.PhiNode".
---------
org.graalvm.compiler.nodes.*.canonical
Matches all methods named "canonical" in classes in the package
"org.graalvm.compiler.nodes".
---------
arraycopy,toString
Matches all methods named "arraycopy" or "toString", meaning that ',' acts
as an "or" operator.
---------
java.util.*.*.,~java.util.*Array*.*
java.util.*.*.,~*Array*.*
These patterns are equivalent and match all methods in the package
"java.util" except for classes that have "Array" in their name.
---------
~java.util.*.*
Matches all methods in all classes in all packages except for anything in
the "java.util" package.
-R:MetricsFile=... Default: None
File to which metrics are dumped per compilation.
A CSV format is used if the file ends with .csv; otherwise a more
human-readable format is used. The fields in the CSV format are:
compilable - method being compiled
compilable_identity - identity hash code of compilable
compilation_nr - where this compilation lies in the ordered
sequence of all compilations identified by
compilable_identity
compilation_id - runtime issued identifier for the compilation
metric_name - name of metric
metric_value - value of metric
-R:PrintGraph=File
Where IdealGraphVisualizer graph dumps triggered by Dump or DumpOnError should be written.
The accepted values are:
File - Dump IGV graphs to the local file system (see DumpPath).
Network - Dump IGV graphs to the network destination specified by PrintGraphHost and PrintGraphPort.
If a network connection cannot be opened, dumping falls back to file dumping.
Disable - Do not dump IGV graphs.
-R:ProfileAllocationsContext=AllocatingMethod
Control the naming and granularity of the counters when using ProfileAllocations.
The accepted values are:
AllocatingMethod - a counter per method
InstanceOrArray - one counter for all instance allocations and
one counter for all array allocations
AllocatedType - one counter per allocated type
AllocatedTypesInMethod - one counter per allocated type, per method
-R:SpectrePHTBarriers=None
Select a strategy to mitigate speculative bounds check bypass (aka Spectre-PHT or Spectre V1).
This is an experimental option - execution of untrusted code is not supported by GraalVM CE.
The accepted values are:
None - No mitigations are used in JIT compiled code.
AllTargets - Speculative execution on all conditional branch targets is
stopped using speculative execution barrier instructions.
GuardTargets - Branch targets relevant to Java memory safety are instrumented
with barrier instructions. This option has less performance impact
than AllTargets.
NonDeoptGuardTargets - Same as GuardTargets, except that branches which deoptimize are not
protected since they can not be executed repeatedly and are thus less
likely to be successfully exploited in an attack.
Note that all modes except "None" will also instrument branch target blocks containing UNSAFE memory accesses
with barrier instructions.
-R:±TraceInlining Default: - (disabled).
Enable tracing of inlining decisions.
Output format:
compilation of 'Signature of the compilation root method':
at 'Signature of the root method' ['Bytecode index']: <'Phase'> 'Child method signature': 'Decision made about this callsite'
at 'Signature of the child method' ['Bytecode index']:
|--<'Phase 1'> 'Grandchild method signature': 'First decision made about this callsite'
\--<'Phase 2'> 'Grandchild method signature': 'Second decision made about this callsite'
at 'Signature of the child method' ['Bytecode index']: <'Phase'> 'Another grandchild method signature': 'The only decision made about this callsite.'